* [patch V3 00/29] stacktrace: Consolidate stack trace usage
@ 2019-04-25  9:44 Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:44 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

This is an update to V2 which can be found here:

  https://lkml.kernel.org/r/20190418084119.056416939@linutronix.de

Changes vs. V2:

  - Fixed the kernel-doc issue pointed out by Mike

  - Removed the '-1' oddity from the tracer

  - Restricted the tracer nesting to 4

  - Restored the lockdep magic to prevent redundant stack traces

  - Addressed the small nitpicks here and there

  - Picked up Acked/Reviewed tags

The series is based on:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/stacktrace

which contains the removal of the inconsistent and pointless ULONG_MAX
termination of stacktraces.
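
For context, a minimal sketch of what that removal means for consumers,
using the pre-series struct stack_trace API as it appears later in this
thread: the entry count is the only delimiter; nothing scans for a
ULONG_MAX sentinel at the end of the array anymore.

    unsigned long entries[8];
    unsigned int i;
    struct stack_trace trace = {
            .entries        = entries,
            .max_entries    = ARRAY_SIZE(entries),
    };

    save_stack_trace(&trace);

    /* trace.nr_entries delimits the trace; no ULONG_MAX terminator */
    for (i = 0; i < trace.nr_entries; i++)
            printk("%pS\n", (void *)entries[i]);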

It's also available from git:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.core/stacktrace

   up to 5160eb663575 ("x86/stacktrace: Use common infrastructure")

Delta patch vs. v2 below.

Thanks,

	tglx

8<-------------
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index 654949dc1c16..f0cfd12cb45e 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -32,14 +32,13 @@ unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
  * @reliable:	True when the stack entry is reliable. Required by
  *		some printk based consumers.
  *
- * Returns:	True, if the entry was consumed or skipped
+ * Return:	True, if the entry was consumed or skipped
  *		False, if there is no space left to store
  */
 typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
 				       bool reliable);
 /**
  * arch_stack_walk - Architecture specific function to walk the stack
-
  * @consume_entry:	Callback which is invoked by the architecture code for
  *			each entry.
  * @cookie:		Caller supplied pointer which is handed back to
@@ -47,10 +46,12 @@ typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
  * @task:		Pointer to a task struct, can be NULL
  * @regs:		Pointer to registers, can be NULL
  *
- * @task	@regs:
- * NULL		NULL	Stack trace from current
+ * ============ ======= ============================================
+ * task	        regs
+ * ============ ======= ============================================
  * task		NULL	Stack trace from task (can be current)
- * NULL		regs	Stack trace starting on regs->stackpointer
+ * current	regs	Stack trace starting on regs->stackpointer
+ * ============ ======= ============================================
  */
 void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
 		     struct task_struct *task, struct pt_regs *regs);
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 1e8e246edaad..badd77670d00 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -705,7 +705,8 @@ static struct dma_debug_entry *dma_entry_alloc(void)
 
 #ifdef CONFIG_STACKTRACE
 	entry->stack_len = stack_trace_save(entry->stack_entries,
-					    ARRAY_SIZE(entry->stack_entries), 1);
+					    ARRAY_SIZE(entry->stack_entries),
+					    1);
 #endif
 	return entry;
 }
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 84423f0bb0b0..45bcaf2e4cb6 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2160,12 +2160,11 @@ check_deadlock(struct task_struct *curr, struct held_lock *next,
  */
 static int
 check_prev_add(struct task_struct *curr, struct held_lock *prev,
-	       struct held_lock *next, int distance)
+	       struct held_lock *next, int distance, struct lock_trace *trace)
 {
 	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list *entry;
 	struct lock_list this;
-	struct lock_trace trace;
 	int ret;
 
 	if (!hlock_class(prev)->key || !hlock_class(next)->key) {
@@ -2198,8 +2197,17 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	this.class = hlock_class(next);
 	this.parent = NULL;
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
-	if (unlikely(!ret))
+	if (unlikely(!ret)) {
+		if (!trace->nr_entries) {
+			/*
+			 * If save_trace fails here, the printing might
+			 * trigger a WARN but because of the !nr_entries it
+			 * should not do bad things.
+			 */
+			save_trace(trace);
+		}
 		return print_circular_bug(&this, target_entry, next, prev);
+	}
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);
 
@@ -2246,7 +2254,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 		return print_bfs_bug(ret);
 
 
-	if (!save_trace(&trace))
+	if (!trace->nr_entries && !save_trace(trace))
 		return 0;
 
 	/*
@@ -2255,14 +2263,14 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 	 */
 	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
 			       &hlock_class(prev)->locks_after,
-			       next->acquire_ip, distance, &trace);
+			       next->acquire_ip, distance, trace);
 
 	if (!ret)
 		return 0;
 
 	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
 			       &hlock_class(next)->locks_before,
-			       next->acquire_ip, distance, &trace);
+			       next->acquire_ip, distance, trace);
 	if (!ret)
 		return 0;
 
@@ -2278,6 +2286,7 @@ check_prev_add(struct task_struct *curr, struct held_lock *prev,
 static int
 check_prevs_add(struct task_struct *curr, struct held_lock *next)
 {
+	struct lock_trace trace = { .nr_entries = 0 };
 	int depth = curr->lockdep_depth;
 	struct held_lock *hlock;
 
@@ -2305,8 +2314,8 @@ check_prevs_add(struct task_struct *curr, struct held_lock *next)
 		 * added:
 		 */
 		if (hlock->read != 2 && hlock->check) {
-			int ret = check_prev_add(curr, hlock, next, distance);
-
+			int ret = check_prev_add(curr, hlock, next, distance,
+						 &trace);
 			if (!ret)
 				return 0;
 
diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index 26cc92288cfb..27bafc1e271e 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -39,6 +39,8 @@ EXPORT_SYMBOL_GPL(stack_trace_print);
  * @entries:	Pointer to storage array
  * @nr_entries:	Number of entries in the storage array
  * @spaces:	Number of leading spaces to print
+ *
+ * Return: Number of bytes printed.
  */
 int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
 			unsigned int nr_entries, int spaces)
@@ -48,7 +50,7 @@ int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
 	if (WARN_ON(!entries))
 		return 0;
 
-	for (i = 0; i < nr_entries; i++) {
+	for (i = 0; i < nr_entries && size; i++) {
 		generated = snprintf(buf, size, "%*c%pS\n", 1 + spaces, ' ',
 				     (void *)entries[i]);
 
@@ -105,7 +107,7 @@ static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr,
  * @size:	Size of the storage array
  * @skipnr:	Number of entries to skip at the start of the stack trace
  *
- * Returns number of entries stored.
+ * Return: Number of trace entries stored.
  */
 unsigned int stack_trace_save(unsigned long *store, unsigned int size,
 			      unsigned int skipnr)
@@ -129,7 +131,7 @@ EXPORT_SYMBOL_GPL(stack_trace_save);
  * @size:	Size of the storage array
  * @skipnr:	Number of entries to skip at the start of the stack trace
  *
- * Returns number of entries stored.
+ * Return: Number of trace entries stored.
  */
 unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
 				  unsigned int size, unsigned int skipnr)
@@ -156,7 +158,7 @@ unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
  * @size:	Size of the storage array
  * @skipnr:	Number of entries to skip at the start of the stack trace
  *
- * Returns number of entries stored.
+ * Return: Number of trace entries stored.
  */
 unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
 				   unsigned int size, unsigned int skipnr)
@@ -179,7 +181,7 @@ unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
  * @store:	Pointer to storage array
  * @size:	Size of the storage array
  *
- * Returns:	An error if it detects any unreliable features of the
+ * Return:	An error if it detects any unreliable features of the
  *		stack. Otherwise it guarantees that the stack trace is
  *		reliable and returns the number of entries stored.
  *
@@ -214,7 +216,7 @@ int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
  * @store:	Pointer to storage array
  * @size:	Size of the storage array
  *
- * Returns number of entries stored.
+ * Return: Number of trace entries stored.
  */
 unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
 {
@@ -265,6 +267,8 @@ save_stack_trace_tsk_reliable(struct task_struct *tsk,
  * @store:	Pointer to storage array
  * @size:	Size of the storage array
  * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
  */
 unsigned int stack_trace_save(unsigned long *store, unsigned int size,
 			      unsigned int skipnr)
@@ -286,6 +290,8 @@ EXPORT_SYMBOL_GPL(stack_trace_save);
  * @store:	Pointer to storage array
  * @size:	Size of the storage array
  * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
  */
 unsigned int stack_trace_save_tsk(struct task_struct *task,
 				  unsigned long *store, unsigned int size,
@@ -307,6 +313,8 @@ unsigned int stack_trace_save_tsk(struct task_struct *task,
  * @store:	Pointer to storage array
  * @size:	Size of the storage array
  * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
  */
 unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
 				   unsigned int size, unsigned int skipnr)
@@ -328,7 +336,7 @@ unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
  * @store:	Pointer to storage array
  * @size:	Size of the storage array
  *
- * Returns:	An error if it detects any unreliable features of the
+ * Return:	An error if it detects any unreliable features of the
  *		stack. Otherwise it guarantees that the stack trace is
  *		reliable and returns the number of entries stored.
  *
@@ -352,6 +360,8 @@ int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
  * stack_trace_save_user - Save a user space stack trace into a storage array
  * @store:	Pointer to storage array
  * @size:	Size of the storage array
+ *
+ * Return: Number of trace entries stored
  */
 unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
 {
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4b6d3e0ebd41..dd00ed3a9473 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2751,15 +2751,15 @@ trace_function(struct trace_array *tr,
 
 #ifdef CONFIG_STACKTRACE
 
-/* 64 entries for kernel stacks are plenty */
-#define FTRACE_KSTACK_ENTRIES	64
+/* Allow 4 levels of nesting: normal, softirq, irq, NMI */
+#define FTRACE_KSTACK_NESTING	4
+
+#define FTRACE_KSTACK_ENTRIES	(PAGE_SIZE / FTRACE_KSTACK_NESTING)
 
 struct ftrace_stack {
 	unsigned long		calls[FTRACE_KSTACK_ENTRIES];
 };
 
-/* This allows 8 level nesting which is plenty */
-#define FTRACE_KSTACK_NESTING	(PAGE_SIZE / sizeof(struct ftrace_stack))
 
 struct ftrace_stacks {
 	struct ftrace_stack	stacks[FTRACE_KSTACK_NESTING];
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index 2fc137f0ee24..031f78b35bae 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -394,7 +394,8 @@ stack_trace_sysctl(struct ctl_table *table, int write,
 		   void __user *buffer, size_t *lenp,
 		   loff_t *ppos)
 {
-	int ret, was_enabled;
+	int was_enabled;
+	int ret;
 
 	mutex_lock(&stack_sysctl_mutex);
 	was_enabled = !!stack_tracer_enabled;
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 5936e20cd0a6..605c61f65d94 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -197,7 +197,11 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
 /**
  * stack_depot_fetch - Fetch stack entries from a depot
  *
+ * @handle:		Stack depot handle which was returned from
+ *			stack_depot_save().
  * @entries:		Pointer to store the entries address
+ *
+ * Return: The number of trace entries for this depot.
  */
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries)
@@ -219,7 +223,7 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  * @nr_entries:		Size of the storage array
  * @alloc_flags:	Allocation gfp flags
  *
- * Returns the handle of the stack struct stored in depot
+ * Return: The handle of the stack struct stored in depot
  */
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
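
Not part of the series, but for reference a minimal sketch of a consumer
for the arch_stack_walk() interface documented in the delta above. The
cookie struct and function names are made up for illustration:

    struct sketch_cookie {
            unsigned long   *store;
            unsigned int    size;
            unsigned int    len;
    };

    static bool sketch_consume_entry(void *cookie, unsigned long addr,
                                     bool reliable)
    {
            struct sketch_cookie *c = cookie;

            /* Returning false signals that no storage space is left */
            if (c->len >= c->size)
                    return false;

            c->store[c->len++] = addr;
            return true;    /* Entry consumed, keep walking */
    }

Per the table above, invoking arch_stack_walk(sketch_consume_entry,
&cookie, current, NULL) would walk the stack of the current task.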



* [patch V3 01/29] tracing: Cleanup stack trace code
@ 2019-04-25  9:44 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:44 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

- Remove the extra array member of stack_dump_trace[] along with the
  ARRAY_SIZE - 1 initialization for struct stack_trace :: max_entries.

  Both are historical leftovers of no value. The stack tracer never exceeds
  the array and there is no extra storage requirement either.

- Make variables which are only used in trace_stack.c static.

- Simplify the enable/disable logic.

- Rename stack_trace_print() as it occupies the stack_trace_ namespace. This
  frees the name up for stack trace related functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
---
V3: Remove the -1 init and split the variable declaration as requested by Steven.
V2: Add more cleanups and use print_max_stack() as requested by Steven.
---
 include/linux/ftrace.h     |   18 ++++--------------
 kernel/trace/trace_stack.c |   42 +++++++++++++-----------------------------
 2 files changed, 17 insertions(+), 43 deletions(-)

--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -241,21 +241,11 @@ static inline void ftrace_free_mem(struc
 
 #ifdef CONFIG_STACK_TRACER
 
-#define STACK_TRACE_ENTRIES 500
-
-struct stack_trace;
-
-extern unsigned stack_trace_index[];
-extern struct stack_trace stack_trace_max;
-extern unsigned long stack_trace_max_size;
-extern arch_spinlock_t stack_trace_max_lock;
-
 extern int stack_tracer_enabled;
-void stack_trace_print(void);
-int
-stack_trace_sysctl(struct ctl_table *table, int write,
-		   void __user *buffer, size_t *lenp,
-		   loff_t *ppos);
+
+int stack_trace_sysctl(struct ctl_table *table, int write,
+		       void __user *buffer, size_t *lenp,
+		       loff_t *ppos);
 
 /* DO NOT MODIFY THIS VARIABLE DIRECTLY! */
 DECLARE_PER_CPU(int, disable_stack_tracer);
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -18,30 +18,26 @@
 
 #include "trace.h"
 
-static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES + 1];
-unsigned stack_trace_index[STACK_TRACE_ENTRIES];
+#define STACK_TRACE_ENTRIES 500
+
+static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES];
+static unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
-/*
- * Reserve one entry for the passed in ip. This will allow
- * us to remove most or all of the stack size overhead
- * added by the stack tracer itself.
- */
 struct stack_trace stack_trace_max = {
-	.max_entries		= STACK_TRACE_ENTRIES - 1,
+	.max_entries		= STACK_TRACE_ENTRIES,
 	.entries		= &stack_dump_trace[0],
 };
 
-unsigned long stack_trace_max_size;
-arch_spinlock_t stack_trace_max_lock =
+static unsigned long stack_trace_max_size;
+static arch_spinlock_t stack_trace_max_lock =
 	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 
 DEFINE_PER_CPU(int, disable_stack_tracer);
 static DEFINE_MUTEX(stack_sysctl_mutex);
 
 int stack_tracer_enabled;
-static int last_stack_tracer_enabled;
 
-void stack_trace_print(void)
+static void print_max_stack(void)
 {
 	long i;
 	int size;
@@ -61,16 +57,7 @@ void stack_trace_print(void)
 	}
 }
 
-/*
- * When arch-specific code overrides this function, the following
- * data should be filled up, assuming stack_trace_max_lock is held to
- * prevent concurrent updates.
- *     stack_trace_index[]
- *     stack_trace_max
- *     stack_trace_max_size
- */
-void __weak
-check_stack(unsigned long ip, unsigned long *stack)
+static void check_stack(unsigned long ip, unsigned long *stack)
 {
 	unsigned long this_size, flags; unsigned long *p, *top, *start;
 	static int tracer_frame;
@@ -179,7 +166,7 @@ check_stack(unsigned long ip, unsigned l
 	stack_trace_max.nr_entries = x;
 
 	if (task_stack_end_corrupted(current)) {
-		stack_trace_print();
+		print_max_stack();
 		BUG();
 	}
 
@@ -412,23 +399,21 @@ stack_trace_sysctl(struct ctl_table *tab
 		   void __user *buffer, size_t *lenp,
 		   loff_t *ppos)
 {
+	int was_enabled;
 	int ret;
 
 	mutex_lock(&stack_sysctl_mutex);
+	was_enabled = !!stack_tracer_enabled;
 
 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
 
-	if (ret || !write ||
-	    (last_stack_tracer_enabled == !!stack_tracer_enabled))
+	if (ret || !write || (was_enabled == !!stack_tracer_enabled))
 		goto out;
 
-	last_stack_tracer_enabled = !!stack_tracer_enabled;
-
 	if (stack_tracer_enabled)
 		register_ftrace_function(&trace_ops);
 	else
 		unregister_ftrace_function(&trace_ops);
-
  out:
 	mutex_unlock(&stack_sysctl_mutex);
 	return ret;
@@ -444,7 +429,6 @@ static __init int enable_stacktrace(char
 		strncpy(stack_trace_filter_buf, str + len, COMMAND_LINE_SIZE);
 
 	stack_tracer_enabled = 1;
-	last_stack_tracer_enabled = 1;
 	return 1;
 }
 __setup("stacktrace", enable_stacktrace);




* [patch V3 02/29] stacktrace: Provide helpers for common stack trace operations
@ 2019-04-25  9:44 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:44 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

All operations with stack traces are based on struct stack_trace. That's a
horrible construct as the struct is a kitchen sink for input and
output. Quite a few usage sites embed it into their own data structures,
which creates weird indirections.

There is absolutely no point in doing so. For all use cases a storage array
and the number of valid stack trace entries in the array are sufficient.

Provide helper functions which avoid the struct stack_trace indirection so
the usage sites can be cleaned up.
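
As a usage sketch, this is what a typical conversion later in the series
boils down to:

    unsigned long entries[8];
    unsigned int nr_entries;

    /* A plain storage array plus an entry count replace the
     * struct stack_trace kitchen sink. */
    nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
    stack_trace_print(entries, nr_entries, 0);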

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
V3: Fix kernel doc.
---
 include/linux/stacktrace.h |   27 +++++++
 kernel/stacktrace.c        |  170 +++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 182 insertions(+), 15 deletions(-)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -3,11 +3,26 @@
 #define __LINUX_STACKTRACE_H
 
 #include <linux/types.h>
+#include <asm/errno.h>
 
 struct task_struct;
 struct pt_regs;
 
 #ifdef CONFIG_STACKTRACE
+void stack_trace_print(unsigned long *trace, unsigned int nr_entries,
+		       int spaces);
+int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			unsigned int nr_entries, int spaces);
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr);
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr);
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr);
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+
+/* Internal interfaces. Do not use in generic code */
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
@@ -41,4 +56,16 @@ extern void save_stack_trace_user(struct
 # define save_stack_trace_tsk_reliable(tsk, trace)	({ -ENOSYS; })
 #endif /* CONFIG_STACKTRACE */
 
+#if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size);
+#else
+static inline int stack_trace_save_tsk_reliable(struct task_struct *tsk,
+						unsigned long *store,
+						unsigned int size)
+{
+	return -ENOSYS;
+}
+#endif
+
 #endif /* __LINUX_STACKTRACE_H */
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -11,35 +11,54 @@
 #include <linux/kallsyms.h>
 #include <linux/stacktrace.h>
 
-void print_stack_trace(struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_print - Print the entries in the stack trace
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ */
+void stack_trace_print(unsigned long *entries, unsigned int nr_entries,
+		       int spaces)
 {
-	int i;
+	unsigned int i;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return;
 
-	for (i = 0; i < trace->nr_entries; i++)
-		printk("%*c%pS\n", 1 + spaces, ' ', (void *)trace->entries[i]);
+	for (i = 0; i < nr_entries; i++)
+		printk("%*c%pS\n", 1 + spaces, ' ', (void *)entries[i]);
+}
+EXPORT_SYMBOL_GPL(stack_trace_print);
+
+void print_stack_trace(struct stack_trace *trace, int spaces)
+{
+	stack_trace_print(trace->entries, trace->nr_entries, spaces);
 }
 EXPORT_SYMBOL_GPL(print_stack_trace);
 
-int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_snprint - Print the entries in the stack trace into a buffer
+ * @buf:	Pointer to the print buffer
+ * @size:	Size of the print buffer
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ *
+ * Return: Number of bytes printed.
+ */
+int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			unsigned int nr_entries, int spaces)
 {
-	int i;
-	int generated;
-	int total = 0;
+	unsigned int generated, i, total = 0;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return 0;
 
-	for (i = 0; i < trace->nr_entries; i++) {
+	for (i = 0; i < nr_entries && size; i++) {
 		generated = snprintf(buf, size, "%*c%pS\n", 1 + spaces, ' ',
-				     (void *)trace->entries[i]);
+				     (void *)entries[i]);
 
 		total += generated;
-
-		/* Assume that generated isn't a negative number */
 		if (generated >= size) {
 			buf += size;
 			size = 0;
@@ -51,6 +70,14 @@ int snprint_stack_trace(char *buf, size_
 
 	return total;
 }
+EXPORT_SYMBOL_GPL(stack_trace_snprint);
+
+int snprint_stack_trace(char *buf, size_t size,
+			struct stack_trace *trace, int spaces)
+{
+	return stack_trace_snprint(buf, size, trace->entries,
+				   trace->nr_entries, spaces);
+}
 EXPORT_SYMBOL_GPL(snprint_stack_trace);
 
 /*
@@ -77,3 +104,116 @@ save_stack_trace_tsk_reliable(struct tas
 	WARN_ONCE(1, KERN_INFO "save_stack_tsk_reliable() not implemented yet.\n");
 	return -ENOSYS;
 }
+
+/**
+ * stack_trace_save - Save a stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace(&trace);
+	return trace.nr_entries;
+}
+EXPORT_SYMBOL_GPL(stack_trace_save);
+
+/**
+ * stack_trace_save_tsk - Save a task stack trace into a storage array
+ * @task:	The task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace_tsk(task, &trace);
+	return trace.nr_entries;
+}
+
+/**
+ * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * @regs:	Pointer to pt_regs to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr,
+	};
+
+	save_stack_trace_regs(regs, &trace);
+	return trace.nr_entries;
+}
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+/**
+ * stack_trace_save_tsk_reliable - Save task stack with verification
+ * @tsk:	Pointer to the task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return:	An error if it detects any unreliable features of the
+ *		stack. Otherwise it guarantees that the stack trace is
+ *		reliable and returns the number of entries stored.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
+ */
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+	};
+	int ret = save_stack_trace_tsk_reliable(tsk, &trace);
+
+	return ret ? ret : trace.nr_entries;
+}
+#endif
+
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+/**
+ * stack_trace_save_user - Save a user space stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return: Number of trace entries stored
+ */
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+	};
+
+	save_stack_trace_user(&trace);
+	return trace.nr_entries;
+}
+#endif /* CONFIG_USER_STACKTRACE_SUPPORT */




* [patch V3 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays
@ 2019-04-25  9:44 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:44 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Steven Rostedt, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

The struct stack_trace indirection in the stack depot functions is a truly
pointless exercise which requires horrible code at the callsites.

Provide interfaces based on plain storage arrays.
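
As a usage sketch, mirroring the conversions of KASAN and page owner later
in the series:

    unsigned long entries[16], *fetched;
    unsigned int nr_entries;
    depot_stack_handle_t handle;

    nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
    handle = stack_depot_save(entries, nr_entries, GFP_KERNEL);

    /* The fetch side returns the entry count and points @fetched at
     * the depot internal storage; nothing is copied into a caller
     * supplied array. */
    nr_entries = stack_depot_fetch(handle, &fetched);
    stack_trace_print(fetched, nr_entries, 0);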

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Alexander Potapenko <glider@google.com>
---
V3: Fix kernel-doc
---
 include/linux/stackdepot.h |    4 ++
 lib/stackdepot.c           |   70 ++++++++++++++++++++++++++++++++-------------
 2 files changed, 55 insertions(+), 19 deletions(-)

--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -26,7 +26,11 @@ typedef u32 depot_stack_handle_t;
 struct stack_trace;
 
 depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries, gfp_t gfp_flags);
 
 void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries);
 
 #endif
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -194,40 +194,60 @@ static inline struct stack_record *find_
 	return NULL;
 }
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+/**
+ * stack_depot_fetch - Fetch stack entries from a depot
+ *
+ * @handle:		Stack depot handle which was returned from
+ *			stack_depot_save().
+ * @entries:		Pointer to store the entries address
+ *
+ * Return: The number of trace entries for this depot.
+ */
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries)
 {
 	union handle_parts parts = { .handle = handle };
 	void *slab = stack_slabs[parts.slabindex];
 	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
 	struct stack_record *stack = slab + offset;
 
-	trace->nr_entries = trace->max_entries = stack->size;
-	trace->entries = stack->entries;
-	trace->skip = 0;
+	*entries = stack->entries;
+	return stack->size;
+}
+EXPORT_SYMBOL_GPL(stack_depot_fetch);
+
+void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+{
+	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
+
+	trace->max_entries = trace->nr_entries = nent;
 }
 EXPORT_SYMBOL_GPL(depot_fetch_stack);
 
 /**
- * depot_save_stack - save stack in a stack depot.
- * @trace - the stacktrace to save.
- * @alloc_flags - flags for allocating additional memory if required.
+ * stack_depot_save - Save a stack trace from an array
+ *
+ * @entries:		Pointer to storage array
+ * @nr_entries:		Size of the storage array
+ * @alloc_flags:	Allocation gfp flags
  *
- * Returns the handle of the stack struct stored in depot.
+ * Return: The handle of the stack struct stored in depot
  */
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
-				    gfp_t alloc_flags)
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries,
+				      gfp_t alloc_flags)
 {
-	u32 hash;
-	depot_stack_handle_t retval = 0;
 	struct stack_record *found = NULL, **bucket;
-	unsigned long flags;
+	depot_stack_handle_t retval = 0;
 	struct page *page = NULL;
 	void *prealloc = NULL;
+	unsigned long flags;
+	u32 hash;
 
-	if (unlikely(trace->nr_entries == 0))
+	if (unlikely(nr_entries == 0))
 		goto fast_exit;
 
-	hash = hash_stack(trace->entries, trace->nr_entries);
+	hash = hash_stack(entries, nr_entries);
 	bucket = &stack_table[hash & STACK_HASH_MASK];
 
 	/*
@@ -235,8 +255,8 @@ depot_stack_handle_t depot_save_stack(st
 	 * The smp_load_acquire() here pairs with smp_store_release() to
 	 * |bucket| below.
 	 */
-	found = find_stack(smp_load_acquire(bucket), trace->entries,
-			   trace->nr_entries, hash);
+	found = find_stack(smp_load_acquire(bucket), entries,
+			   nr_entries, hash);
 	if (found)
 		goto exit;
 
@@ -264,10 +284,10 @@ depot_stack_handle_t depot_save_stack(st
 
 	spin_lock_irqsave(&depot_lock, flags);
 
-	found = find_stack(*bucket, trace->entries, trace->nr_entries, hash);
+	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
 		struct stack_record *new =
-			depot_alloc_stack(trace->entries, trace->nr_entries,
+			depot_alloc_stack(entries, nr_entries,
 					  hash, &prealloc, alloc_flags);
 		if (new) {
 			new->next = *bucket;
@@ -297,4 +317,16 @@ depot_stack_handle_t depot_save_stack(st
 fast_exit:
 	return retval;
 }
+EXPORT_SYMBOL_GPL(stack_depot_save);
+
+/**
+ * depot_save_stack - save stack in a stack depot.
+ * @trace - the stacktrace to save.
+ * @alloc_flags - flags for allocating additional memory if required.
+ */
+depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
+				      gfp_t alloc_flags)
+{
+	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
+}
 EXPORT_SYMBOL_GPL(depot_save_stack);




* [patch V3 04/29] backtrace-test: Simplify stack trace handling
@ 2019-04-25  9:44 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:44 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/backtracetest.c |   11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

--- a/kernel/backtracetest.c
+++ b/kernel/backtracetest.c
@@ -48,19 +48,14 @@ static void backtrace_test_irq(void)
 #ifdef CONFIG_STACKTRACE
 static void backtrace_test_saved(void)
 {
-	struct stack_trace trace;
 	unsigned long entries[8];
+	unsigned int nr_entries;
 
 	pr_info("Testing a saved backtrace.\n");
 	pr_info("The following trace is a kernel self test and not a bug!\n");
 
-	trace.nr_entries = 0;
-	trace.max_entries = ARRAY_SIZE(entries);
-	trace.entries = entries;
-	trace.skip = 0;
-
-	save_stack_trace(&trace);
-	print_stack_trace(&trace, 0);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	stack_trace_print(entries, nr_entries, 0);
 }
 #else
 static void backtrace_test_saved(void)




* [patch V3 05/29] proc: Simplify task stack retrieval
@ 2019-04-25  9:44 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:44 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Alexey Dobriyan,
	Andrew Morton, Steven Rostedt, Alexander Potapenko,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 fs/proc/base.c |   14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -407,7 +407,6 @@ static void unlock_trace(struct task_str
 static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
 			  struct pid *pid, struct task_struct *task)
 {
-	struct stack_trace trace;
 	unsigned long *entries;
 	int err;
 
@@ -430,20 +429,17 @@ static int proc_pid_stack(struct seq_fil
 	if (!entries)
 		return -ENOMEM;
 
-	trace.nr_entries	= 0;
-	trace.max_entries	= MAX_STACK_TRACE_DEPTH;
-	trace.entries		= entries;
-	trace.skip		= 0;
-
 	err = lock_trace(task);
 	if (!err) {
-		unsigned int i;
+		unsigned int i, nr_entries;
 
-		save_stack_trace_tsk(task, &trace);
+		nr_entries = stack_trace_save_tsk(task, entries,
+						  MAX_STACK_TRACE_DEPTH, 0);
 
-		for (i = 0; i < trace.nr_entries; i++) {
+		for (i = 0; i < nr_entries; i++) {
 			seq_printf(m, "[<0>] %pB\n", (void *)entries[i]);
 		}
+
 		unlock_trace(task);
 	}
 	kfree(entries);




* [patch V3 06/29] latency_top: Simplify stack trace handling
@ 2019-04-25  9:44 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:44 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/latencytop.c |   17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -141,20 +141,6 @@ account_global_scheduler_latency(struct
 	memcpy(&latency_record[i], lat, sizeof(struct latency_record));
 }
 
-/*
- * Iterator to store a backtrace into a latency record entry
- */
-static inline void store_stacktrace(struct task_struct *tsk,
-					struct latency_record *lat)
-{
-	struct stack_trace trace;
-
-	memset(&trace, 0, sizeof(trace));
-	trace.max_entries = LT_BACKTRACEDEPTH;
-	trace.entries = &lat->backtrace[0];
-	save_stack_trace_tsk(tsk, &trace);
-}
-
 /**
  * __account_scheduler_latency - record an occurred latency
  * @tsk - the task struct of the task hitting the latency
@@ -191,7 +177,8 @@ void __sched
 	lat.count = 1;
 	lat.time = usecs;
 	lat.max = usecs;
-	store_stacktrace(tsk, &lat);
+
+	stack_trace_save_tsk(tsk, lat.backtrace, LT_BACKTRACEDEPTH, 0);
 
 	raw_spin_lock_irqsave(&latency_lock, flags);
 




* [patch V3 07/29] mm/slub: Simplify stack trace retrieval
@ 2019-04-25  9:45 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:45 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Christoph Lameter,
	Andrew Morton, Pekka Enberg, linux-mm, David Rientjes,
	Steven Rostedt, Alexander Potapenko, Alexey Dobriyan,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: linux-mm@kvack.org
Cc: David Rientjes <rientjes@google.com>
---
 mm/slub.c |   12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -552,18 +552,14 @@ static void set_track(struct kmem_cache
 
 	if (addr) {
 #ifdef CONFIG_STACKTRACE
-		struct stack_trace trace;
+		unsigned int nr_entries;
 
-		trace.nr_entries = 0;
-		trace.max_entries = TRACK_ADDRS_COUNT;
-		trace.entries = p->addrs;
-		trace.skip = 3;
 		metadata_access_enable();
-		save_stack_trace(&trace);
+		nr_entries = stack_trace_save(p->addrs, TRACK_ADDRS_COUNT, 3);
 		metadata_access_disable();
 
-		if (trace.nr_entries < TRACK_ADDRS_COUNT)
-			p->addrs[trace.nr_entries] = 0;
+		if (nr_entries < TRACK_ADDRS_COUNT)
+			p->addrs[nr_entries] = 0;
 #endif
 		p->addr = addr;
 		p->cpu = smp_processor_id();




* [patch V3 08/29] mm/kmemleak: Simplify stacktrace handling
@ 2019-04-25  9:45 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:45 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Catalin Marinas, linux-mm,
	Steven Rostedt, Alexander Potapenko, Alexey Dobriyan,
	Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, Christoph Hellwig, iommu, Robin Murphy,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-mm@kvack.org
---
 mm/kmemleak.c |   24 +++---------------------
 1 file changed, 3 insertions(+), 21 deletions(-)

--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -410,11 +410,6 @@ static void print_unreferenced(struct se
  */
 static void dump_object_info(struct kmemleak_object *object)
 {
-	struct stack_trace trace;
-
-	trace.nr_entries = object->trace_len;
-	trace.entries = object->trace;
-
 	pr_notice("Object 0x%08lx (size %zu):\n",
 		  object->pointer, object->size);
 	pr_notice("  comm \"%s\", pid %d, jiffies %lu\n",
@@ -424,7 +419,7 @@ static void dump_object_info(struct kmem
 	pr_notice("  flags = 0x%x\n", object->flags);
 	pr_notice("  checksum = %u\n", object->checksum);
 	pr_notice("  backtrace:\n");
-	print_stack_trace(&trace, 4);
+	stack_trace_print(object->trace, object->trace_len, 4);
 }
 
 /*
@@ -553,15 +548,7 @@ static struct kmemleak_object *find_and_
  */
 static int __save_stack_trace(unsigned long *trace)
 {
-	struct stack_trace stack_trace;
-
-	stack_trace.max_entries = MAX_TRACE;
-	stack_trace.nr_entries = 0;
-	stack_trace.entries = trace;
-	stack_trace.skip = 2;
-	save_stack_trace(&stack_trace);
-
-	return stack_trace.nr_entries;
+	return stack_trace_save(trace, MAX_TRACE, 2);
 }
 
 /*
@@ -2019,13 +2006,8 @@ early_param("kmemleak", kmemleak_boot_co
 
 static void __init print_log_trace(struct early_log *log)
 {
-	struct stack_trace trace;
-
-	trace.nr_entries = log->trace_len;
-	trace.entries = log->trace;
-
 	pr_notice("Early log backtrace:\n");
-	print_stack_trace(&trace, 2);
+	stack_trace_print(log->trace, log->trace_len, 2);
 }
 
 /*




* [patch V3 09/29] mm/kasan: Simplify stacktrace handling
@ 2019-04-25  9:45 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:45 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Dmitry Vyukov,
	Andrey Ryabinin, Alexander Potapenko, kasan-dev, linux-mm,
	Steven Rostedt, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, David Rientjes, Catalin Marinas,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
---
 mm/kasan/common.c |   30 ++++++++++++------------------
 mm/kasan/report.c |    7 ++++---
 2 files changed, 16 insertions(+), 21 deletions(-)

--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
 		 ptr < (unsigned long)&__softirqentry_text_end);
 }
 
-static inline void filter_irq_stacks(struct stack_trace *trace)
+static inline unsigned int filter_irq_stacks(unsigned long *entries,
+					     unsigned int nr_entries)
 {
-	int i;
+	unsigned int i;
 
-	if (!trace->nr_entries)
-		return;
-	for (i = 0; i < trace->nr_entries; i++)
-		if (in_irqentry_text(trace->entries[i])) {
+	for (i = 0; i < nr_entries; i++) {
+		if (in_irqentry_text(entries[i])) {
 			/* Include the irqentry function into the stack. */
-			trace->nr_entries = i + 1;
-			break;
+			return i + 1;
 		}
+	}
+	return nr_entries;
 }
 
 static inline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[KASAN_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = KASAN_STACK_DEPTH,
-		.skip = 0
-	};
+	unsigned int nr_entries;
 
-	save_stack_trace(&trace);
-	filter_irq_stacks(&trace);
-
-	return depot_save_stack(&trace, flags);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	nr_entries = filter_irq_stacks(entries, nr_entries);
+	return stack_depot_save(entries, nr_entries, flags);
 }
 
 static inline void set_track(struct kasan_track *track, gfp_t flags)
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -100,10 +100,11 @@ static void print_track(struct kasan_tra
 {
 	pr_err("%s by task %u:\n", prefix, track->pid);
 	if (track->stack) {
-		struct stack_trace trace;
+		unsigned long *entries;
+		unsigned int nr_entries;
 
-		depot_fetch_stack(track->stack, &trace);
-		print_stack_trace(&trace, 0);
+		nr_entries = stack_depot_fetch(track->stack, &entries);
+		stack_trace_print(entries, nr_entries, 0);
 	} else {
 		pr_err("(stack is not available)\n");
 	}




* [patch V3 10/29] mm/page_owner: Simplify stack trace handling
@ 2019-04-25  9:45 ` Thomas Gleixner
From: Thomas Gleixner @ 2019-04-25  9:45 UTC
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, linux-mm, Mike Rapoport,
	David Rientjes, Andrew Morton, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Christoph Lameter,
	Pekka Enberg, Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev, Akinobu Mita, Christoph Hellwig, iommu, Robin Murphy,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

The original code in all printing functions is really wrong. It allocates a
storage array on the stack which goes unused because depot_fetch_stack()
does not store anything in it; instead it overwrites the entries pointer in
the stack_trace struct so that it points to the depot storage.
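
Condensed, the broken pattern looked like this:

    unsigned long entries[PAGE_OWNER_STACK_DEPTH];  /* never written */
    struct stack_trace trace = {
            .entries        = entries,
            .max_entries    = PAGE_OWNER_STACK_DEPTH,
    };

    /* Repoints trace.entries at the depot storage. The on-stack
     * array above stays unused. */
    depot_fetch_stack(handle, &trace);
    print_stack_trace(&trace, 0);

With stack_depot_fetch() handing back the entry pointer and the entry count
directly, the pointless on-stack array goes away.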

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 mm/page_owner.c |   79 +++++++++++++++++++-------------------------------------
 1 file changed, 28 insertions(+), 51 deletions(-)

--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -58,15 +58,10 @@ static bool need_page_owner(void)
 static __always_inline depot_stack_handle_t create_dummy_stack(void)
 {
 	unsigned long entries[4];
-	struct stack_trace dummy;
+	unsigned int nr_entries;
 
-	dummy.nr_entries = 0;
-	dummy.max_entries = ARRAY_SIZE(entries);
-	dummy.entries = &entries[0];
-	dummy.skip = 0;
-
-	save_stack_trace(&dummy);
-	return depot_save_stack(&dummy, GFP_KERNEL);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	return stack_depot_save(entries, nr_entries, GFP_KERNEL);
 }
 
 static noinline void register_dummy_stack(void)
@@ -120,46 +115,39 @@ void __reset_page_owner(struct page *pag
 	}
 }
 
-static inline bool check_recursive_alloc(struct stack_trace *trace,
-					unsigned long ip)
+static inline bool check_recursive_alloc(unsigned long *entries,
+					 unsigned int nr_entries,
+					 unsigned long ip)
 {
-	int i;
+	unsigned int i;
 
-	if (!trace->nr_entries)
-		return false;
-
-	for (i = 0; i < trace->nr_entries; i++) {
-		if (trace->entries[i] == ip)
+	for (i = 0; i < nr_entries; i++) {
+		if (entries[i] == ip)
 			return true;
 	}
-
 	return false;
 }
 
 static noinline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 2
-	};
 	depot_stack_handle_t handle;
+	unsigned int nr_entries;
 
-	save_stack_trace(&trace);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
 
 	/*
-	 * We need to check recursion here because our request to stackdepot
-	 * could trigger memory allocation to save new entry. New memory
-	 * allocation would reach here and call depot_save_stack() again
-	 * if we don't catch it. There is still not enough memory in stackdepot
-	 * so it would try to allocate memory again and loop forever.
+	 * We need to check recursion here because our request to
+	 * stackdepot could trigger memory allocation to save new
+	 * entry. New memory allocation would reach here and call
+	 * stack_depot_save() again if we don't catch it. There is
+	 * still not enough memory in stackdepot so it would try to
+	 * allocate memory again and loop forever.
 	 */
-	if (check_recursive_alloc(&trace, _RET_IP_))
+	if (check_recursive_alloc(entries, nr_entries, _RET_IP_))
 		return dummy_handle;
 
-	handle = depot_save_stack(&trace, flags);
+	handle = stack_depot_save(entries, nr_entries, flags);
 	if (!handle)
 		handle = failure_handle;
 
@@ -337,16 +325,10 @@ print_page_owner(char __user *buf, size_
 		struct page *page, struct page_owner *page_owner,
 		depot_stack_handle_t handle)
 {
-	int ret;
-	int pageblock_mt, page_mt;
+	int ret, pageblock_mt, page_mt;
+	unsigned long *entries;
+	unsigned int nr_entries;
 	char *kbuf;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 
 	count = min_t(size_t, count, PAGE_SIZE);
 	kbuf = kmalloc(count, GFP_KERNEL);
@@ -375,8 +357,8 @@ print_page_owner(char __user *buf, size_
 	if (ret >= count)
 		goto err;
 
-	depot_fetch_stack(handle, &trace);
-	ret += snprint_stack_trace(kbuf + ret, count - ret, &trace, 0);
+	nr_entries = stack_depot_fetch(handle, &entries);
+	ret += stack_trace_snprint(kbuf + ret, count - ret, entries, nr_entries, 0);
 	if (ret >= count)
 		goto err;
 
@@ -407,14 +389,9 @@ void __dump_page_owner(struct page *page
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
 	struct page_owner *page_owner;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 	depot_stack_handle_t handle;
+	unsigned long *entries;
+	unsigned int nr_entries;
 	gfp_t gfp_mask;
 	int mt;
 
@@ -438,10 +415,10 @@ void __dump_page_owner(struct page *page
 		return;
 	}
 
-	depot_fetch_stack(handle, &trace);
+	nr_entries = stack_depot_fetch(handle, &entries);
 	pr_alert("page allocated via order %u, migratetype %s, gfp_mask %#x(%pGg)\n",
 		 page_owner->order, migratetype_names[mt], gfp_mask, &gfp_mask);
-	print_stack_trace(&trace, 0);
+	stack_trace_print(entries, nr_entries, 0);
 
 	if (page_owner->last_migrate_reason != -1)
 		pr_alert("page has been migrated, last migrate reason: %s\n",




* [patch V3 11/29] fault-inject: Simplify stacktrace retrieval
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (9 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 10/29] mm/page_owner: Simplify stack trace handling Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 12/29] dma/debug: Simplify stacktrace retrieval Thomas Gleixner
                   ` (19 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Akinobu Mita,
	Steven Rostedt, Alexander Potapenko, Alexey Dobriyan,
	Andrew Morton, Christoph Lameter, Pekka Enberg, linux-mm,
	David Rientjes, Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev, Mike Rapoport, Christoph Hellwig, iommu, Robin Murphy,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
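
As a reference for the whole series, here is a minimal sketch of the
conversion pattern (the surrounding context is hypothetical; the
stacktrace API names are the real ones):

  /* Old: struct stack_trace carries buffer, limits and result */
  unsigned long entries[8];
  struct stack_trace trace = {
          .entries        = entries,
          .max_entries    = ARRAY_SIZE(entries),
          .skip           = 1,    /* skip this function's own frame */
  };

  save_stack_trace(&trace);
  /* trace.nr_entries now holds the number of stored entries */

  /* New: plain storage array, the entry count is the return value */
  unsigned long entries[8];
  unsigned int nr_entries;

  nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 1);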

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
---
 lib/fault-inject.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

--- a/lib/fault-inject.c
+++ b/lib/fault-inject.c
@@ -65,22 +65,16 @@ static bool fail_task(struct fault_attr
 
 static bool fail_stacktrace(struct fault_attr *attr)
 {
-	struct stack_trace trace;
 	int depth = attr->stacktrace_depth;
 	unsigned long entries[MAX_STACK_TRACE_DEPTH];
-	int n;
+	int n, nr_entries;
 	bool found = (attr->require_start == 0 && attr->require_end == ULONG_MAX);
 
 	if (depth == 0)
 		return found;
 
-	trace.nr_entries = 0;
-	trace.entries = entries;
-	trace.max_entries = depth;
-	trace.skip = 1;
-
-	save_stack_trace(&trace);
-	for (n = 0; n < trace.nr_entries; n++) {
+	nr_entries = stack_trace_save(entries, depth, 1);
+	for (n = 0; n < nr_entries; n++) {
 		if (attr->reject_start <= entries[n] &&
 			       entries[n] < attr->reject_end)
 			return false;




* [patch V3 12/29] dma/debug: Simplify stacktrace retrieval
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (10 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 11/29] fault-inject: Simplify stacktrace retrieval Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 13/29] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
                   ` (18 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: iommu@lists.linux-foundation.org
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
---
 kernel/dma/debug.c |   14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -89,8 +89,8 @@ struct dma_debug_entry {
 	int		 sg_mapped_ents;
 	enum map_err_types  map_err_type;
 #ifdef CONFIG_STACKTRACE
-	struct		 stack_trace stacktrace;
-	unsigned long	 st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
+	unsigned int	stack_len;
+	unsigned long	stack_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
 #endif
 };
 
@@ -174,7 +174,7 @@ static inline void dump_entry_trace(stru
 #ifdef CONFIG_STACKTRACE
 	if (entry) {
 		pr_warning("Mapped at:\n");
-		print_stack_trace(&entry->stacktrace, 0);
+		stack_trace_print(entry->stack_entries, entry->stack_len, 0);
 	}
 #endif
 }
@@ -704,12 +704,10 @@ static struct dma_debug_entry *dma_entry
 	spin_unlock_irqrestore(&free_entries_lock, flags);
 
 #ifdef CONFIG_STACKTRACE
-	entry->stacktrace.max_entries = DMA_DEBUG_STACKTRACE_ENTRIES;
-	entry->stacktrace.entries = entry->st_entries;
-	entry->stacktrace.skip = 1;
-	save_stack_trace(&entry->stacktrace);
+	entry->stack_len = stack_trace_save(entry->stack_entries,
+					    ARRAY_SIZE(entry->stack_entries),
+					    1);
 #endif
-
 	return entry;
 }
 




* [patch V3 13/29] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (11 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 12/29] dma/debug: Simplify stacktrace retrieval Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 14/29] dm bufio: " Thomas Gleixner
                   ` (17 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs,
	Steven Rostedt, Alexander Potapenko, Alexey Dobriyan,
	Andrew Morton, Christoph Lameter, Pekka Enberg, linux-mm,
	David Rientjes, Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev, Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Acked-by: David Sterba <dsterba@suse.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: linux-btrfs@vger.kernel.org
---
 fs/btrfs/ref-verify.c |   15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

--- a/fs/btrfs/ref-verify.c
+++ b/fs/btrfs/ref-verify.c
@@ -205,28 +205,17 @@ static struct root_entry *lookup_root_en
 #ifdef CONFIG_STACKTRACE
 static void __save_stack_trace(struct ref_action *ra)
 {
-	struct stack_trace stack_trace;
-
-	stack_trace.max_entries = MAX_TRACE;
-	stack_trace.nr_entries = 0;
-	stack_trace.entries = ra->trace;
-	stack_trace.skip = 2;
-	save_stack_trace(&stack_trace);
-	ra->trace_len = stack_trace.nr_entries;
+	ra->trace_len = stack_trace_save(ra->trace, MAX_TRACE, 2);
 }
 
 static void __print_stack_trace(struct btrfs_fs_info *fs_info,
 				struct ref_action *ra)
 {
-	struct stack_trace trace;
-
 	if (ra->trace_len == 0) {
 		btrfs_err(fs_info, "  ref-verify: no stacktrace");
 		return;
 	}
-	trace.nr_entries = ra->trace_len;
-	trace.entries = ra->trace;
-	print_stack_trace(&trace, 2);
+	stack_trace_print(ra->trace, ra->trace_len, 2);
 }
 #else
 static void inline __save_stack_trace(struct ref_action *ra)




* [patch V3 14/29] dm bufio: Simplify stack trace retrieval
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (12 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 13/29] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 15/29] dm persistent data: Simplify stack trace handling Thomas Gleixner
                   ` (16 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, dm-devel, Mike Snitzer,
	Alasdair Kergon, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Christoph Lameter, Pekka Enberg,
	linux-mm, David Rientjes, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita,
	Christoph Hellwig, iommu, Robin Murphy, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: dm-devel@redhat.com
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Alasdair Kergon <agk@redhat.com>
---
 drivers/md/dm-bufio.c |   15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -150,7 +150,7 @@ struct dm_buffer {
 	void (*end_io)(struct dm_buffer *, blk_status_t);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 #define MAX_STACK 10
-	struct stack_trace stack_trace;
+	unsigned int stack_len;
 	unsigned long stack_entries[MAX_STACK];
 #endif
 };
@@ -232,11 +232,7 @@ static DEFINE_MUTEX(dm_bufio_clients_loc
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 static void buffer_record_stack(struct dm_buffer *b)
 {
-	b->stack_trace.nr_entries = 0;
-	b->stack_trace.max_entries = MAX_STACK;
-	b->stack_trace.entries = b->stack_entries;
-	b->stack_trace.skip = 2;
-	save_stack_trace(&b->stack_trace);
+	b->stack_len = stack_trace_save(b->stack_entries, MAX_STACK, 2);
 }
 #endif
 
@@ -438,7 +434,7 @@ static struct dm_buffer *alloc_buffer(st
 	adjust_total_allocated(b->data_mode, (long)c->block_size);
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	memset(&b->stack_trace, 0, sizeof(b->stack_trace));
+	b->stack_len = 0;
 #endif
 	return b;
 }
@@ -1520,8 +1516,9 @@ static void drop_buffers(struct dm_bufio
 			DMERR("leaked buffer %llx, hold count %u, list %d",
 			      (unsigned long long)b->block, b->hold_count, i);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-			print_stack_trace(&b->stack_trace, 1);
-			b->hold_count = 0; /* mark unclaimed to avoid BUG_ON below */
+			stack_trace_print(b->stack_entries, b->stack_len, 1);
+			/* mark unclaimed to avoid BUG_ON below */
+			b->hold_count = 0;
 #endif
 		}
 




* [patch V3 15/29] dm persistent data: Simplify stack trace handling
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (13 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 14/29] dm bufio: " Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 16/29] drm: Simplify stacktrace handling Thomas Gleixner
                   ` (15 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, dm-devel, Mike Snitzer,
	Alasdair Kergon, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Christoph Lameter, Pekka Enberg,
	linux-mm, David Rientjes, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita,
	Christoph Hellwig, iommu, Robin Murphy, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface. This results in less storage space and
indirection.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: dm-devel@redhat.com
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Alasdair Kergon <agk@redhat.com>
---
 drivers/md/persistent-data/dm-block-manager.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

--- a/drivers/md/persistent-data/dm-block-manager.c
+++ b/drivers/md/persistent-data/dm-block-manager.c
@@ -35,7 +35,10 @@
 #define MAX_HOLDERS 4
 #define MAX_STACK 10
 
-typedef unsigned long stack_entries[MAX_STACK];
+struct stack_store {
+	unsigned int	nr_entries;
+	unsigned long	entries[MAX_STACK];
+};
 
 struct block_lock {
 	spinlock_t lock;
@@ -44,8 +47,7 @@ struct block_lock {
 	struct task_struct *holders[MAX_HOLDERS];
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	struct stack_trace traces[MAX_HOLDERS];
-	stack_entries entries[MAX_HOLDERS];
+	struct stack_store traces[MAX_HOLDERS];
 #endif
 };
 
@@ -73,7 +75,7 @@ static void __add_holder(struct block_lo
 {
 	unsigned h = __find_holder(lock, NULL);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	struct stack_trace *t;
+	struct stack_store *t;
 #endif
 
 	get_task_struct(task);
@@ -81,11 +83,7 @@ static void __add_holder(struct block_lo
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 	t = lock->traces + h;
-	t->nr_entries = 0;
-	t->max_entries = MAX_STACK;
-	t->entries = lock->entries[h];
-	t->skip = 2;
-	save_stack_trace(t);
+	t->nr_entries = stack_trace_save(t->entries, MAX_STACK, 2);
 #endif
 }
 
@@ -106,7 +104,8 @@ static int __check_holder(struct block_l
 			DMERR("recursive lock detected in metadata");
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 			DMERR("previously held here:");
-			print_stack_trace(lock->traces + i, 4);
+			stack_trace_print(lock->traces[i].entries,
+					  lock->traces[i].nr_entries, 4);
 
 			DMERR("subsequent acquisition attempted here:");
 			dump_stack();




* [patch V3 16/29] drm: Simplify stacktrace handling
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (14 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 15/29] dm persistent data: Simplify stack trace handling Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 17/29] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
                   ` (14 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Daniel Vetter, intel-gfx,
	Joonas Lahtinen, Maarten Lankhorst, dri-devel, David Airlie,
	Jani Nikula, Rodrigo Vivi, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Christoph Lameter, Pekka Enberg,
	linux-mm, David Rientjes, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita,
	Christoph Hellwig, iommu, Robin Murphy, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, dm-devel, Mike Snitzer, Alasdair Kergon,
	Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

The original code in all printing functions is really wrong. It allocates a
storage array on the stack which is unused because depot_fetch_stack() does
not store anything in it. Instead, it overwrites the entries pointer in the
stack_trace struct so that it points to the depot storage.
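
For illustration, a minimal sketch of the corrected pattern (the two
functions are hypothetical; the depot and stacktrace calls are the
interfaces used by this series):

  static depot_stack_handle_t example_save(void)
  {
          unsigned long entries[16];
          unsigned int n;

          n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
          /* May be called under spinlock, so avoid sleeping */
          return stack_depot_save(entries, n, GFP_NOWAIT);
  }

  static void example_print(depot_stack_handle_t handle)
  {
          unsigned long *entries; /* points into depot storage */
          unsigned int nr_entries;

          nr_entries = stack_depot_fetch(handle, &entries);
          /* No on-stack array needed; the depot owns the storage */
          stack_trace_print(entries, nr_entries, 0);
  }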

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Vetter <daniel@ffwll.ch>
Cc: intel-gfx@lists.freedesktop.org
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: David Airlie <airlied@linux.ie>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/drm_mm.c                |   22 +++++++---------------
 drivers/gpu/drm/i915/i915_vma.c         |   11 ++++-------
 drivers/gpu/drm/i915/intel_runtime_pm.c |   21 +++++++--------------
 3 files changed, 18 insertions(+), 36 deletions(-)

--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -106,22 +106,19 @@
 static noinline void save_stack(struct drm_mm_node *node)
 {
 	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = STACKDEPTH,
-		.skip = 1
-	};
+	unsigned int n;
 
-	save_stack_trace(&trace);
+	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
 
 	/* May be called under spinlock, so avoid sleeping */
-	node->stack = depot_save_stack(&trace, GFP_NOWAIT);
+	node->stack = stack_depot_save(entries, n, GFP_NOWAIT);
 }
 
 static void show_leaks(struct drm_mm *mm)
 {
 	struct drm_mm_node *node;
-	unsigned long entries[STACKDEPTH];
+	unsigned long *entries;
+	unsigned int nr_entries;
 	char *buf;
 
 	buf = kmalloc(BUFSZ, GFP_KERNEL);
@@ -129,19 +126,14 @@ static void show_leaks(struct drm_mm *mm
 		return;
 
 	list_for_each_entry(node, drm_mm_nodes(mm), node_list) {
-		struct stack_trace trace = {
-			.entries = entries,
-			.max_entries = STACKDEPTH
-		};
-
 		if (!node->stack) {
 			DRM_ERROR("node [%08llx + %08llx]: unknown owner\n",
 				  node->start, node->size);
 			continue;
 		}
 
-		depot_fetch_stack(node->stack, &trace);
-		snprint_stack_trace(buf, BUFSZ, &trace, 0);
+		nr_entries = stack_depot_fetch(node->stack, &entries);
+		stack_trace_snprint(buf, BUFSZ, entries, nr_entries, 0);
 		DRM_ERROR("node [%08llx + %08llx]: inserted at\n%s",
 			  node->start, node->size, buf);
 	}
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -36,11 +36,8 @@
 
 static void vma_print_allocator(struct i915_vma *vma, const char *reason)
 {
-	unsigned long entries[12];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-	};
+	unsigned long *entries;
+	unsigned int nr_entries;
 	char buf[512];
 
 	if (!vma->node.stack) {
@@ -49,8 +46,8 @@ static void vma_print_allocator(struct i
 		return;
 	}
 
-	depot_fetch_stack(vma->node.stack, &trace);
-	snprint_stack_trace(buf, sizeof(buf), &trace, 0);
+	nr_entries = stack_depot_fetch(vma->node.stack, &entries);
+	stack_trace_snprint(buf, sizeof(buf), entries, nr_entries, 0);
 	DRM_DEBUG_DRIVER("vma.node [%08llx + %08llx] %s: inserted at %s\n",
 			 vma->node.start, vma->node.size, reason, buf);
 }
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -60,27 +60,20 @@
 static noinline depot_stack_handle_t __save_depot_stack(void)
 {
 	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-		.skip = 1,
-	};
+	unsigned int n;
 
-	save_stack_trace(&trace);
-	return depot_save_stack(&trace, GFP_NOWAIT | __GFP_NOWARN);
+	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
+	return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN);
 }
 
 static void __print_depot_stack(depot_stack_handle_t stack,
 				char *buf, int sz, int indent)
 {
-	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-	};
+	unsigned long *entries;
+	unsigned int nr_entries;
 
-	depot_fetch_stack(stack, &trace);
-	snprint_stack_trace(buf, sz, &trace, indent);
+	nr_entries = stack_depot_fetch(stack, &entries);
+	stack_trace_snprint(buf, sz, entries, nr_entries, indent);
 }
 
 static void init_intel_runtime_pm_wakeref(struct drm_i915_private *i915)




* [patch V3 17/29] lockdep: Remove unused trace argument from print_circular_bug()
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (15 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 16/29] drm: Simplify stacktrace handling Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 18/29] lockdep: Remove save argument from check_prev_add() Thomas Gleixner
                   ` (13 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/lockdep.c |    9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1522,10 +1522,9 @@ static inline int class_equal(struct loc
 }
 
 static noinline int print_circular_bug(struct lock_list *this,
-				struct lock_list *target,
-				struct held_lock *check_src,
-				struct held_lock *check_tgt,
-				struct stack_trace *trace)
+				       struct lock_list *target,
+				       struct held_lock *check_src,
+				       struct held_lock *check_tgt)
 {
 	struct task_struct *curr = current;
 	struct lock_list *parent;
@@ -2206,7 +2205,7 @@ check_prev_add(struct task_struct *curr,
 			 */
 			save(trace);
 		}
-		return print_circular_bug(&this, target_entry, next, prev, trace);
+		return print_circular_bug(&this, target_entry, next, prev);
 	}
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);




* [patch V3 18/29] lockdep: Remove save argument from check_prev_add()
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (16 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 17/29] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25 13:35   ` Peter Zijlstra
  2019-04-25  9:45 ` [patch V3 19/29] lockdep: Simplify stack trace handling Thomas Gleixner
                   ` (12 subsequent siblings)
  30 siblings, 1 reply; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

There is only one caller, which hands in save_trace as the function pointer.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/lockdep.c |   16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2158,8 +2158,7 @@ check_deadlock(struct task_struct *curr,
  */
 static int
 check_prev_add(struct task_struct *curr, struct held_lock *prev,
-	       struct held_lock *next, int distance, struct stack_trace *trace,
-	       int (*save)(struct stack_trace *trace))
+	       struct held_lock *next, int distance, struct stack_trace *trace)
 {
 	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list *entry;
@@ -2199,11 +2198,11 @@ check_prev_add(struct task_struct *curr,
 	if (unlikely(!ret)) {
 		if (!trace->entries) {
 			/*
-			 * If @save fails here, the printing might trigger
-			 * a WARN but because of the !nr_entries it should
-			 * not do bad things.
+			 * If save_trace fails here, the printing might
+			 * trigger a WARN but because of the !nr_entries it
+			 * should not do bad things.
 			 */
-			save(trace);
+			save_trace(trace);
 		}
 		return print_circular_bug(&this, target_entry, next, prev);
 	}
@@ -2253,7 +2252,7 @@ check_prev_add(struct task_struct *curr,
 		return print_bfs_bug(ret);
 
 
-	if (!trace->entries && !save(trace))
+	if (!trace->entries && !save_trace(trace))
 		return 0;
 
 	/*
@@ -2318,7 +2317,8 @@ check_prevs_add(struct task_struct *curr
 		 * added:
 		 */
 		if (hlock->read != 2 && hlock->check) {
-			int ret = check_prev_add(curr, hlock, next, distance, &trace, save_trace);
+			int ret = check_prev_add(curr, hlock, next, distance,
+						 &trace);
 			if (!ret)
 				return 0;
 




* [patch V3 19/29] lockdep: Simplify stack trace handling
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (17 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 18/29] lockdep: Remove save argument from check_prev_add() Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 20/29] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
                   ` (11 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces and storing the information in a small lockdep
specific data structure.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/lockdep.h  |    9 +++++--
 kernel/locking/lockdep.c |   55 +++++++++++++++++++++++------------------------
 2 files changed, 35 insertions(+), 29 deletions(-)

--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -66,6 +66,11 @@ struct lock_class_key {
 
 extern struct lock_class_key __lockdep_no_validate__;
 
+struct lock_trace {
+	unsigned int		nr_entries;
+	unsigned int		offset;
+};
+
 #define LOCKSTAT_POINTS		4
 
 /*
@@ -100,7 +105,7 @@ struct lock_class {
 	 * IRQ/softirq usage tracking bits:
 	 */
 	unsigned long			usage_mask;
-	struct stack_trace		usage_traces[XXX_LOCK_USAGE_STATES];
+	struct lock_trace		usage_traces[XXX_LOCK_USAGE_STATES];
 
 	/*
 	 * Generation counter, when doing certain classes of graph walking,
@@ -188,7 +193,7 @@ struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
 	struct lock_class		*links_to;
-	struct stack_trace		trace;
+	struct lock_trace		trace;
 	int				distance;
 
 	/*
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -434,18 +434,14 @@ static void print_lockdep_off(const char
 #endif
 }
 
-static int save_trace(struct stack_trace *trace)
+static int save_trace(struct lock_trace *trace)
 {
-	trace->nr_entries = 0;
-	trace->max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
-	trace->entries = stack_trace + nr_stack_trace_entries;
-
-	trace->skip = 3;
-
-	save_stack_trace(trace);
-
-	trace->max_entries = trace->nr_entries;
+	unsigned long *entries = stack_trace + nr_stack_trace_entries;
+	unsigned int max_entries;
 
+	trace->offset = nr_stack_trace_entries;
+	max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
+	trace->nr_entries = stack_trace_save(entries, max_entries, 3);
 	nr_stack_trace_entries += trace->nr_entries;
 
 	if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES-1) {
@@ -1196,7 +1192,7 @@ static struct lock_list *alloc_list_entr
 static int add_lock_to_list(struct lock_class *this,
 			    struct lock_class *links_to, struct list_head *head,
 			    unsigned long ip, int distance,
-			    struct stack_trace *trace)
+			    struct lock_trace *trace)
 {
 	struct lock_list *entry;
 	/*
@@ -1415,6 +1411,13 @@ static inline int __bfs_backwards(struct
  * checking.
  */
 
+static void print_lock_trace(struct lock_trace *trace, unsigned int spaces)
+{
+	unsigned long *entries = stack_trace + trace->offset;
+
+	stack_trace_print(entries, trace->nr_entries, spaces);
+}
+
 /*
  * Print a dependency chain entry (this is only done when a deadlock
  * has been detected):
@@ -1427,8 +1430,7 @@ print_circular_bug_entry(struct lock_lis
 	printk("\n-> #%u", depth);
 	print_lock_name(target->class);
 	printk(KERN_CONT ":\n");
-	print_stack_trace(&target->trace, 6);
-
+	print_lock_trace(&target->trace, 6);
 	return 0;
 }
 
@@ -1740,7 +1742,7 @@ static void print_lock_class_header(stru
 
 			len += printk("%*s   %s", depth, "", usage_str[bit]);
 			len += printk(KERN_CONT " at:\n");
-			print_stack_trace(class->usage_traces + bit, len);
+			print_lock_trace(class->usage_traces + bit, len);
 		}
 	}
 	printk("%*s }\n", depth, "");
@@ -1765,7 +1767,7 @@ print_shortest_lock_dependencies(struct
 	do {
 		print_lock_class_header(entry->class, depth);
 		printk("%*s ... acquired at:\n", depth, "");
-		print_stack_trace(&entry->trace, 2);
+		print_lock_trace(&entry->trace, 2);
 		printk("\n");
 
 		if (depth == 0 && (entry != root)) {
@@ -1878,14 +1880,14 @@ print_bad_irq_dependency(struct task_str
 	print_lock_name(backwards_entry->class);
 	pr_warn("\n... which became %s-irq-safe at:\n", irqclass);
 
-	print_stack_trace(backwards_entry->class->usage_traces + bit1, 1);
+	print_lock_trace(backwards_entry->class->usage_traces + bit1, 1);
 
 	pr_warn("\nto a %s-irq-unsafe lock:\n", irqclass);
 	print_lock_name(forwards_entry->class);
 	pr_warn("\n... which became %s-irq-unsafe at:\n", irqclass);
 	pr_warn("...");
 
-	print_stack_trace(forwards_entry->class->usage_traces + bit2, 1);
+	print_lock_trace(forwards_entry->class->usage_traces + bit2, 1);
 
 	pr_warn("\nother info that might help us debug this:\n\n");
 	print_irq_lock_scenario(backwards_entry, forwards_entry,
@@ -2158,7 +2160,7 @@ check_deadlock(struct task_struct *curr,
  */
 static int
 check_prev_add(struct task_struct *curr, struct held_lock *prev,
-	       struct held_lock *next, int distance, struct stack_trace *trace)
+	       struct held_lock *next, int distance, struct lock_trace *trace)
 {
 	struct lock_list *uninitialized_var(target_entry);
 	struct lock_list *entry;
@@ -2196,7 +2198,7 @@ check_prev_add(struct task_struct *curr,
 	this.parent = NULL;
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
 	if (unlikely(!ret)) {
-		if (!trace->entries) {
+		if (!trace->nr_entries) {
 			/*
 			 * If save_trace fails here, the printing might
 			 * trigger a WARN but because of the !nr_entries it
@@ -2252,7 +2254,7 @@ check_prev_add(struct task_struct *curr,
 		return print_bfs_bug(ret);
 
 
-	if (!trace->entries && !save_trace(trace))
+	if (!trace->nr_entries && !save_trace(trace))
 		return 0;
 
 	/*
@@ -2284,14 +2286,9 @@ check_prev_add(struct task_struct *curr,
 static int
 check_prevs_add(struct task_struct *curr, struct held_lock *next)
 {
+	struct lock_trace trace = { .nr_entries = 0 };
 	int depth = curr->lockdep_depth;
 	struct held_lock *hlock;
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.max_entries = 0,
-		.entries = NULL,
-		.skip = 0,
-	};
 
 	/*
 	 * Debugging checks.
@@ -2719,6 +2716,10 @@ static inline int validate_chain(struct
 {
 	return 1;
 }
+
+static void print_lock_trace(struct lock_trace *trace, unsigned int spaces)
+{
+}
 #endif
 
 /*
@@ -2815,7 +2816,7 @@ print_usage_bug(struct task_struct *curr
 	print_lock(this);
 
 	pr_warn("{%s} state was registered at:\n", usage_str[prev_bit]);
-	print_stack_trace(hlock_class(this)->usage_traces + prev_bit, 1);
+	print_lock_trace(hlock_class(this)->usage_traces + prev_bit, 1);
 
 	print_irqtrace_events(curr);
 	pr_warn("\nother info that might help us debug this:\n");




* [patch V3 20/29] tracing: Simplify stacktrace retrieval in histograms
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (18 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 19/29] lockdep: Simplify stack trace handling Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 21/29] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
                   ` (10 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Tom Zanussi,
	Steven Rostedt, Alexander Potapenko, Alexey Dobriyan,
	Andrew Morton, Christoph Lameter, Pekka Enberg, linux-mm,
	David Rientjes, Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev, Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Miroslav Benes, linux-arch

The indirection through struct stack_trace is not necessary at all. Use the
storage array based interface.
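
One subtlety the conversion preserves: the zero-padded entry array
itself serves as the histogram map key, which is presumably why the
memset stays. A short sketch (constants as in the kernel source):

  unsigned long entries[HIST_STACKTRACE_DEPTH];

  memset(entries, 0, HIST_STACKTRACE_SIZE);   /* deterministic key padding */
  stack_trace_save(entries, HIST_STACKTRACE_DEPTH, HIST_STACKTRACE_SKIP);
  key = entries;                              /* the whole array is the key */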

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_events_hist.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5186,7 +5186,6 @@ static void event_hist_trigger(struct ev
 	u64 var_ref_vals[TRACING_MAP_VARS_MAX];
 	char compound_key[HIST_KEY_SIZE_MAX];
 	struct tracing_map_elt *elt = NULL;
-	struct stack_trace stacktrace;
 	struct hist_field *key_field;
 	u64 field_contents;
 	void *key = NULL;
@@ -5198,14 +5197,9 @@ static void event_hist_trigger(struct ev
 		key_field = hist_data->fields[i];
 
 		if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
-			stacktrace.max_entries = HIST_STACKTRACE_DEPTH;
-			stacktrace.entries = entries;
-			stacktrace.nr_entries = 0;
-			stacktrace.skip = HIST_STACKTRACE_SKIP;
-
-			memset(stacktrace.entries, 0, HIST_STACKTRACE_SIZE);
-			save_stack_trace(&stacktrace);
-
+			memset(entries, 0, HIST_STACKTRACE_SIZE);
+			stack_trace_save(entries, HIST_STACKTRACE_DEPTH,
+					 HIST_STACKTRACE_SKIP);
 			key = entries;
 		} else {
 			field_contents = key_field->fn(key_field, elt, rbe, rec);




* [patch V3 21/29] tracing: Use percpu stack trace buffer more intelligently
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (19 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 20/29] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25 13:29   ` Josh Poimboeuf
  2019-04-25  9:45 ` [patch V3 22/29] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
                   ` (9 subsequent siblings)
  30 siblings, 1 reply; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

The per cpu stack trace buffer usage pattern is odd at best. The buffer has
room for 512 stack trace entries on 64-bit and 1024 on 32-bit. When
interrupts or exceptions nest after the per cpu buffer was acquired, the
stacktrace length is hardcoded to 8 entries. 512/1024 stack trace entries
in kernel stacks are unrealistic, so the buffer is a complete waste.

Split the buffer into 4 nest levels, which are 128/256 entries per
level. This allows nesting contexts (interrupts, exceptions) to utilize the
cpu buffer for stack retrieval and avoids the fixed length allocation along
with the conditional execution paths.
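
A condensed sketch of the reservation scheme (identifiers from the
patch below; the index handling is simplified here):

  preempt_disable_notrace();

  /* Claim the next unused nest level on this CPU */
  stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1;
  if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
          goto out;               /* all levels in use, should not happen */

  barrier();                      /* no compiler reordering around the claim */
  fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx;
  /* ... fill fstack->calls with the stack trace ... */

  out:
  __this_cpu_dec(ftrace_stack_reserve);
  preempt_enable_notrace();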

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
V3: Limit to 4 nest levels and increase size per level.
---
 kernel/trace/trace.c |   77 +++++++++++++++++++++++++--------------------------
 1 file changed, 39 insertions(+), 38 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2749,12 +2749,21 @@ trace_function(struct trace_array *tr,
 
 #ifdef CONFIG_STACKTRACE
 
-#define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
+/* Allow 4 levels of nesting: normal, softirq, irq, NMI */
+#define FTRACE_KSTACK_NESTING	4
+
+#define FTRACE_KSTACK_ENTRIES	(PAGE_SIZE / FTRACE_KSTACK_NESTING)
+
 struct ftrace_stack {
-	unsigned long		calls[FTRACE_STACK_MAX_ENTRIES];
+	unsigned long		calls[FTRACE_KSTACK_ENTRIES];
+};
+
+
+struct ftrace_stacks {
+	struct ftrace_stack	stacks[FTRACE_KSTACK_NESTING];
 };
 
-static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
+static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
 static DEFINE_PER_CPU(int, ftrace_stack_reserve);
 
 static void __ftrace_trace_stack(struct ring_buffer *buffer,
@@ -2763,10 +2772,11 @@ static void __ftrace_trace_stack(struct
 {
 	struct trace_event_call *call = &event_kernel_stack;
 	struct ring_buffer_event *event;
+	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	struct stack_trace trace;
-	int use_stack;
-	int size = FTRACE_STACK_ENTRIES;
+	int size = FTRACE_KSTACK_ENTRIES;
+	int stackidx;
 
 	trace.nr_entries	= 0;
 	trace.skip		= skip;
@@ -2788,29 +2798,32 @@ static void __ftrace_trace_stack(struct
 	 */
 	preempt_disable_notrace();
 
-	use_stack = __this_cpu_inc_return(ftrace_stack_reserve);
+	stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
+
+	/* This should never happen. If it does, yell once and skip */
+	if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
+		goto out;
+
 	/*
-	 * We don't need any atomic variables, just a barrier.
-	 * If an interrupt comes in, we don't care, because it would
-	 * have exited and put the counter back to what we want.
-	 * We just need a barrier to keep gcc from moving things
-	 * around.
+	 * The above __this_cpu_inc_return() is 'atomic' cpu local. An
+	 * interrupt will either see the value pre increment or post
+	 * increment. If the interrupt happens pre increment it will have
+	 * restored the counter when it returns.  We just need a barrier to
+	 * keep gcc from moving things around.
 	 */
 	barrier();
-	if (use_stack == 1) {
-		trace.entries		= this_cpu_ptr(ftrace_stack.calls);
-		trace.max_entries	= FTRACE_STACK_MAX_ENTRIES;
-
-		if (regs)
-			save_stack_trace_regs(regs, &trace);
-		else
-			save_stack_trace(&trace);
-
-		if (trace.nr_entries > size)
-			size = trace.nr_entries;
-	} else
-		/* From now on, use_stack is a boolean */
-		use_stack = 0;
+
+	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
+	trace.entries		= fstack->calls;
+	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
+
+	if (regs)
+		save_stack_trace_regs(regs, &trace);
+	else
+		save_stack_trace(&trace);
+
+	if (trace.nr_entries > size)
+		size = trace.nr_entries;
 
 	size *= sizeof(unsigned long);
 
@@ -2820,19 +2833,7 @@ static void __ftrace_trace_stack(struct
 		goto out;
 	entry = ring_buffer_event_data(event);
 
-	memset(&entry->caller, 0, size);
-
-	if (use_stack)
-		memcpy(&entry->caller, trace.entries,
-		       trace.nr_entries * sizeof(unsigned long));
-	else {
-		trace.max_entries	= FTRACE_STACK_ENTRIES;
-		trace.entries		= entry->caller;
-		if (regs)
-			save_stack_trace_regs(regs, &trace);
-		else
-			save_stack_trace(&trace);
-	}
+	memcpy(&entry->caller, trace.entries, size);
 
 	entry->size = trace.nr_entries;
 




* [patch V3 22/29] tracing: Make ftrace_trace_userstack() static and conditional
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (20 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 21/29] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 23/29] tracing: Simplify stack trace retrieval Thomas Gleixner
                   ` (8 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

It's only used in trace.c and there is absolutely no point in compiling it
in when user space stack traces are not supported.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   14 ++++++++------
 kernel/trace/trace.h |    8 --------
 2 files changed, 8 insertions(+), 14 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -159,6 +159,8 @@ static union trace_eval_map_item *trace_
 #endif /* CONFIG_TRACE_EVAL_MAP_FILE */
 
 static int tracing_set_tracer(struct trace_array *tr, const char *buf);
+static void ftrace_trace_userstack(struct ring_buffer *buffer,
+				   unsigned long flags, int pc);
 
 #define MAX_TRACER_SIZE		100
 static char bootup_tracer_buf[MAX_TRACER_SIZE] __initdata;
@@ -2905,9 +2907,10 @@ void trace_dump_stack(int skip)
 }
 EXPORT_SYMBOL_GPL(trace_dump_stack);
 
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
 static DEFINE_PER_CPU(int, user_stack_count);
 
-void
+static void
 ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
 {
 	struct trace_event_call *call = &event_user_stack;
@@ -2958,13 +2961,12 @@ ftrace_trace_userstack(struct ring_buffe
  out:
 	preempt_enable();
 }
-
-#ifdef UNUSED
-static void __trace_userstack(struct trace_array *tr, unsigned long flags)
+#else /* CONFIG_USER_STACKTRACE_SUPPORT */
+static void ftrace_trace_userstack(struct ring_buffer *buffer,
+				   unsigned long flags, int pc)
 {
-	ftrace_trace_userstack(tr, flags, preempt_count());
 }
-#endif /* UNUSED */
+#endif /* !CONFIG_USER_STACKTRACE_SUPPORT */
 
 #endif /* CONFIG_STACKTRACE */
 
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -782,17 +782,9 @@ void update_max_tr_single(struct trace_a
 #endif /* CONFIG_TRACER_MAX_TRACE */
 
 #ifdef CONFIG_STACKTRACE
-void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
-			    int pc);
-
 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
 		   int pc);
 #else
-static inline void ftrace_trace_userstack(struct ring_buffer *buffer,
-					  unsigned long flags, int pc)
-{
-}
-
 static inline void __trace_stack(struct trace_array *tr, unsigned long flags,
 				 int skip, int pc)
 {




* [patch V3 23/29] tracing: Simplify stack trace retrieval
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (21 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 22/29] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 24/29] tracing: Remove the last struct stack_trace usage Thomas Gleixner
                   ` (7 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   40 +++++++++++++---------------------------
 1 file changed, 13 insertions(+), 27 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2774,22 +2774,18 @@ static void __ftrace_trace_stack(struct
 {
 	struct trace_event_call *call = &event_kernel_stack;
 	struct ring_buffer_event *event;
+	unsigned int size, nr_entries;
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
-	struct stack_trace trace;
-	int size = FTRACE_KSTACK_ENTRIES;
 	int stackidx;
 
-	trace.nr_entries	= 0;
-	trace.skip		= skip;
-
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
 	 * If regs is set, then these functions will not be in the way.
 	 */
 #ifndef CONFIG_UNWINDER_ORC
 	if (!regs)
-		trace.skip++;
+		skip++;
 #endif
 
 	/*
@@ -2816,28 +2812,24 @@ static void __ftrace_trace_stack(struct
 	barrier();
 
 	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
-	trace.entries		= fstack->calls;
-	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
-
-	if (regs)
-		save_stack_trace_regs(regs, &trace);
-	else
-		save_stack_trace(&trace);
-
-	if (trace.nr_entries > size)
-		size = trace.nr_entries;
+	size = ARRAY_SIZE(fstack->calls);
 
-	size *= sizeof(unsigned long);
+	if (regs) {
+		nr_entries = stack_trace_save_regs(regs, fstack->calls,
+						   size, skip);
+	} else {
+		nr_entries = stack_trace_save(fstack->calls, size, skip);
+	}
 
+	size = nr_entries * sizeof(unsigned long);
 	event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
 					    sizeof(*entry) + size, flags, pc);
 	if (!event)
 		goto out;
 	entry = ring_buffer_event_data(event);
 
-	memcpy(&entry->caller, trace.entries, size);
-
-	entry->size = trace.nr_entries;
+	memcpy(&entry->caller, fstack->calls, size);
+	entry->size = nr_entries;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
 		__buffer_unlock_commit(buffer, event);
@@ -2916,7 +2908,6 @@ ftrace_trace_userstack(struct ring_buffe
 	struct trace_event_call *call = &event_user_stack;
 	struct ring_buffer_event *event;
 	struct userstack_entry *entry;
-	struct stack_trace trace;
 
 	if (!(global_trace.trace_flags & TRACE_ITER_USERSTACKTRACE))
 		return;
@@ -2947,12 +2938,7 @@ ftrace_trace_userstack(struct ring_buffe
 	entry->tgid		= current->tgid;
 	memset(&entry->caller, 0, sizeof(entry->caller));
 
-	trace.nr_entries	= 0;
-	trace.max_entries	= FTRACE_STACK_ENTRIES;
-	trace.skip		= 0;
-	trace.entries		= entry->caller;
-
-	save_stack_trace_user(&trace);
+	stack_trace_save_user(entry->caller, FTRACE_STACK_ENTRIES);
 	if (!call_filter_check_discard(call, entry, buffer, event))
 		__buffer_unlock_commit(buffer, event);
 




* [patch V3 24/29] tracing: Remove the last struct stack_trace usage
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (22 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 23/29] tracing: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25 13:30   ` Josh Poimboeuf
  2019-04-25  9:45 ` [patch V3 25/29] livepatch: Simplify stack trace retrieval Thomas Gleixner
                   ` (6 subsequent siblings)
  30 siblings, 1 reply; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

Simplify the stack retrieval code by using the storage array based
interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 kernel/trace/trace_stack.c |   37 ++++++++++++++++---------------------
 1 file changed, 16 insertions(+), 21 deletions(-)

--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -23,11 +23,7 @@
 static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES];
 static unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
-struct stack_trace stack_trace_max = {
-	.max_entries		= STACK_TRACE_ENTRIES,
-	.entries		= &stack_dump_trace[0],
-};
-
+static unsigned int stack_trace_entries;
 static unsigned long stack_trace_max_size;
 static arch_spinlock_t stack_trace_max_lock =
 	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
@@ -44,10 +40,10 @@ static void print_max_stack(void)
 
 	pr_emerg("        Depth    Size   Location    (%d entries)\n"
 			   "        -----    ----   --------\n",
-			   stack_trace_max.nr_entries);
+			   stack_trace_entries);
 
-	for (i = 0; i < stack_trace_max.nr_entries; i++) {
-		if (i + 1 == stack_trace_max.nr_entries)
+	for (i = 0; i < stack_trace_entries; i++) {
+		if (i + 1 == stack_trace_entries)
 			size = stack_trace_index[i];
 		else
 			size = stack_trace_index[i] - stack_trace_index[i+1];
@@ -93,13 +89,12 @@ static void check_stack(unsigned long ip
 
 	stack_trace_max_size = this_size;
 
-	stack_trace_max.nr_entries = 0;
-	stack_trace_max.skip = 0;
-
-	save_stack_trace(&stack_trace_max);
+	stack_trace_entries = stack_trace_save(stack_dump_trace,
+					       ARRAY_SIZE(stack_dump_trace) - 1,
+					       0);
 
 	/* Skip over the overhead of the stack tracer itself */
-	for (i = 0; i < stack_trace_max.nr_entries; i++) {
+	for (i = 0; i < stack_trace_entries; i++) {
 		if (stack_dump_trace[i] == ip)
 			break;
 	}
@@ -108,7 +103,7 @@ static void check_stack(unsigned long ip
 	 * Some archs may not have the passed in ip in the dump.
 	 * If that happens, we need to show everything.
 	 */
-	if (i == stack_trace_max.nr_entries)
+	if (i == stack_trace_entries)
 		i = 0;
 
 	/*
@@ -126,13 +121,13 @@ static void check_stack(unsigned long ip
 	 * loop will only happen once. This code only takes place
 	 * on a new max, so it is far from a fast path.
 	 */
-	while (i < stack_trace_max.nr_entries) {
+	while (i < stack_trace_entries) {
 		int found = 0;
 
 		stack_trace_index[x] = this_size;
 		p = start;
 
-		for (; p < top && i < stack_trace_max.nr_entries; p++) {
+		for (; p < top && i < stack_trace_entries; p++) {
 			/*
 			 * The READ_ONCE_NOCHECK is used to let KASAN know that
 			 * this is not a stack-out-of-bounds error.
@@ -163,7 +158,7 @@ static void check_stack(unsigned long ip
 			i++;
 	}
 
-	stack_trace_max.nr_entries = x;
+	stack_trace_entries = x;
 
 	if (task_stack_end_corrupted(current)) {
 		print_max_stack();
@@ -265,7 +260,7 @@ static void *
 {
 	long n = *pos - 1;
 
-	if (n >= stack_trace_max.nr_entries)
+	if (n >= stack_trace_entries)
 		return NULL;
 
 	m->private = (void *)n;
@@ -329,7 +324,7 @@ static int t_show(struct seq_file *m, vo
 		seq_printf(m, "        Depth    Size   Location"
 			   "    (%d entries)\n"
 			   "        -----    ----   --------\n",
-			   stack_trace_max.nr_entries);
+			   stack_trace_entries);
 
 		if (!stack_tracer_enabled && !stack_trace_max_size)
 			print_disabled(m);
@@ -339,10 +334,10 @@ static int t_show(struct seq_file *m, vo
 
 	i = *(long *)v;
 
-	if (i >= stack_trace_max.nr_entries)
+	if (i >= stack_trace_entries)
 		return 0;
 
-	if (i + 1 == stack_trace_max.nr_entries)
+	if (i + 1 == stack_trace_entries)
 		size = stack_trace_index[i];
 	else
 		size = stack_trace_index[i] - stack_trace_index[i+1];




* [patch V3 25/29] livepatch: Simplify stack trace retrieval
  2019-04-25  9:44 [patch V3 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (23 preceding siblings ...)
  2019-04-25  9:45 ` [patch V3 24/29] tracing: Remove the last struct stack_trace usage Thomas Gleixner
@ 2019-04-25  9:45 ` Thomas Gleixner
  2019-04-25  9:45 ` [patch V3 26/29] stacktrace: Remove obsolete functions Thomas Gleixner
                   ` (5 subsequent siblings)
  30 siblings, 0 replies; 35+ messages in thread
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Miroslav Benes,
	Steven Rostedt, Alexander Potapenko, Alexey Dobriyan,
	Andrew Morton, Christoph Lameter, Pekka Enberg, linux-mm,
	David Rientjes, Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin,
	kasan-dev, Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.
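
Worth noting: the reliable variant folds the entry count into its
return value. A minimal sketch of the new calling convention (the
surrounding context is hypothetical):

  static unsigned long entries[MAX_STACK_ENTRIES];
  int ret, nr_entries;

  ret = stack_trace_save_tsk_reliable(task, entries, ARRAY_SIZE(entries));
  if (ret < 0)
          return ret;             /* unreliable stack or -ENOSYS */
  nr_entries = ret;               /* otherwise: number of stored entries */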

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Miroslav Benes <mbenes@suse.cz>
---
 kernel/livepatch/transition.c |   22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -202,15 +202,15 @@ void klp_update_patch_state(struct task_
  * Determine whether the given stack trace includes any references to a
  * to-be-patched or to-be-unpatched function.
  */
-static int klp_check_stack_func(struct klp_func *func,
-				struct stack_trace *trace)
+static int klp_check_stack_func(struct klp_func *func, unsigned long *entries,
+				unsigned int nr_entries)
 {
 	unsigned long func_addr, func_size, address;
 	struct klp_ops *ops;
 	int i;
 
-	for (i = 0; i < trace->nr_entries; i++) {
-		address = trace->entries[i];
+	for (i = 0; i < nr_entries; i++) {
+		address = entries[i];
 
 		if (klp_target_state == KLP_UNPATCHED) {
 			 /*
@@ -254,29 +254,25 @@ static int klp_check_stack_func(struct k
 static int klp_check_stack(struct task_struct *task, char *err_buf)
 {
 	static unsigned long entries[MAX_STACK_ENTRIES];
-	struct stack_trace trace;
 	struct klp_object *obj;
 	struct klp_func *func;
-	int ret;
+	int ret, nr_entries;
 
-	trace.skip = 0;
-	trace.nr_entries = 0;
-	trace.max_entries = MAX_STACK_ENTRIES;
-	trace.entries = entries;
-	ret = save_stack_trace_tsk_reliable(task, &trace);
+	ret = stack_trace_save_tsk_reliable(task, entries, ARRAY_SIZE(entries));
 	WARN_ON_ONCE(ret == -ENOSYS);
-	if (ret) {
+	if (ret < 0) {
 		snprintf(err_buf, STACK_ERR_BUF_SIZE,
 			 "%s: %s:%d has an unreliable stack\n",
 			 __func__, task->comm, task->pid);
 		return ret;
 	}
+	nr_entries = ret;
 
 	klp_for_each_object(klp_transition_patch, obj) {
 		if (!obj->patched)
 			continue;
 		klp_for_each_func(obj, func) {
-			ret = klp_check_stack_func(func, &trace);
+			ret = klp_check_stack_func(func, entries, nr_entries);
 			if (ret) {
 				snprintf(err_buf, STACK_ERR_BUF_SIZE,
 					 "%s: %s:%d is sleeping on function %s\n",




* [patch V3 26/29] stacktrace: Remove obsolete functions
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

No more users of the struct stack_trace based interfaces. Remove them.

Remove the macro stubs for !CONFIG_STACKTRACE as well; they are pointless
because the storage at the call sites is already conditional on
CONFIG_STACKTRACE. There is no point in being 'smart' here.
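
For illustration, a hypothetical call site showing why the stubs buy
nothing; both the storage and the call are compiled out without
CONFIG_STACKTRACE anyway:

#include <linux/kernel.h>
#include <linux/stacktrace.h>

/* Hypothetical record; the names are made up for the example */
struct example_record {
#ifdef CONFIG_STACKTRACE
	unsigned long	entries[8];
	unsigned int	nr_entries;
#endif
	int		data;
};

static void example_capture(struct example_record *rec)
{
#ifdef CONFIG_STACKTRACE
	rec->nr_entries = stack_trace_save(rec->entries,
					   ARRAY_SIZE(rec->entries), 0);
#endif
}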

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/stacktrace.h |   17 -----------------
 kernel/stacktrace.c        |   14 --------------
 2 files changed, 31 deletions(-)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -36,24 +36,7 @@ extern void save_stack_trace_tsk(struct
 				struct stack_trace *trace);
 extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
 					 struct stack_trace *trace);
-
-extern void print_stack_trace(struct stack_trace *trace, int spaces);
-extern int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces);
-
-#ifdef CONFIG_USER_STACKTRACE_SUPPORT
 extern void save_stack_trace_user(struct stack_trace *trace);
-#else
-# define save_stack_trace_user(trace)              do { } while (0)
-#endif
-
-#else /* !CONFIG_STACKTRACE */
-# define save_stack_trace(trace)			do { } while (0)
-# define save_stack_trace_tsk(tsk, trace)		do { } while (0)
-# define save_stack_trace_user(trace)			do { } while (0)
-# define print_stack_trace(trace, spaces)		do { } while (0)
-# define snprint_stack_trace(buf, size, trace, spaces)	do { } while (0)
-# define save_stack_trace_tsk_reliable(tsk, trace)	({ -ENOSYS; })
 #endif /* CONFIG_STACKTRACE */
 
 #if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -30,12 +30,6 @@ void stack_trace_print(unsigned long *en
 }
 EXPORT_SYMBOL_GPL(stack_trace_print);
 
-void print_stack_trace(struct stack_trace *trace, int spaces)
-{
-	stack_trace_print(trace->entries, trace->nr_entries, spaces);
-}
-EXPORT_SYMBOL_GPL(print_stack_trace);
-
 /**
  * stack_trace_snprint - Print the entries in the stack trace into a buffer
  * @buf:	Pointer to the print buffer
@@ -72,14 +66,6 @@ int stack_trace_snprint(char *buf, size_
 }
 EXPORT_SYMBOL_GPL(stack_trace_snprint);
 
-int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces)
-{
-	return stack_trace_snprint(buf, size, trace->entries,
-				   trace->nr_entries, spaces);
-}
-EXPORT_SYMBOL_GPL(snprint_stack_trace);
-
 /*
  * Architectures that do not implement save_stack_trace_*()
  * get these weak aliases and once-per-bootup warnings




* [patch V3 27/29] lib/stackdepot: Remove obsolete functions
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Steven Rostedt, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

No more users of the struct stack_trace based interfaces.
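
As a sketch, the remaining plain storage array interfaces pair up like
this (hypothetical helpers, not code from this patch):

#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

/* Hypothetical helper: capture a trace and stash it in the depot */
static depot_stack_handle_t example_depot_save(gfp_t gfp)
{
	unsigned long entries[16];
	unsigned int nr;

	nr = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
	return stack_depot_save(entries, nr, gfp);
}

/* Hypothetical helper: fetch the depot entries and print them */
static void example_depot_print(depot_stack_handle_t handle)
{
	unsigned long *entries;
	unsigned int nr;

	nr = stack_depot_fetch(handle, &entries);
	stack_trace_print(entries, nr, 0);
}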

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Alexander Potapenko <glider@google.com>
---
 include/linux/stackdepot.h |    4 ----
 lib/stackdepot.c           |   20 --------------------
 2 files changed, 24 deletions(-)

--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -23,13 +23,9 @@
 
 typedef u32 depot_stack_handle_t;
 
-struct stack_trace;
-
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries, gfp_t gfp_flags);
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -216,14 +216,6 @@ unsigned int stack_depot_fetch(depot_sta
 }
 EXPORT_SYMBOL_GPL(stack_depot_fetch);
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
-{
-	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
-
-	trace->max_entries = trace->nr_entries = nent;
-}
-EXPORT_SYMBOL_GPL(depot_fetch_stack);
-
 /**
  * stack_depot_save - Save a stack trace from an array
  *
@@ -318,15 +310,3 @@ depot_stack_handle_t stack_depot_save(un
 	return retval;
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
-
-/**
- * depot_save_stack - save stack in a stack depot.
- * @trace - the stacktrace to save.
- * @alloc_flags - flags for allocating additional memory if required.
- */
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
-				      gfp_t alloc_flags)
-{
-	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
-}
-EXPORT_SYMBOL_GPL(depot_save_stack);




* [patch V3 28/29] stacktrace: Provide common infrastructure
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, linux-arch, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes

All architectures which support stacktrace carry duplicated code and
do the stack storage and filtering on the architecture side.

Provide a consolidated interface with a callback function for consuming the
stack entries provided by the architecture specific stack walker. This
removes lots of duplicated code and makes it possible to implement better
filtering than 'skip number of entries' in the future without touching any
architecture specific code.
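
To illustrate the callback model, a hypothetical consumer; the filtering
decision lives in the callback, which is why future policies need no
architecture changes (generic code should keep using the
stack_trace_save*() helpers rather than calling arch_stack_walk()
directly):

#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/stacktrace.h>

/* Hypothetical consumer: print entries, filter scheduler internals */
static bool example_consume(void *cookie, unsigned long addr, bool reliable)
{
	unsigned int *budget = cookie;

	/* Better than 'skip number of entries': filter by content */
	if (in_sched_functions(addr))
		return true;

	pr_info("%pS\n", (void *)addr);
	return --*budget > 0;
}

static void example_walk_current(void)
{
	unsigned int budget = 16;

	arch_stack_walk(example_consume, &budget, current, NULL);
}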

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
---
V3: Fix kernel doc
---
 include/linux/stacktrace.h |   39 ++++++++++
 kernel/stacktrace.c        |  173 +++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig                |    4 +
 3 files changed, 216 insertions(+)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -23,6 +23,44 @@ unsigned int stack_trace_save_regs(struc
 unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
 
 /* Internal interfaces. Do not use in generic code */
+#ifdef CONFIG_ARCH_STACKWALK
+
+/**
+ * stack_trace_consume_fn - Callback for arch_stack_walk()
+ * @cookie:	Caller supplied pointer handed back by arch_stack_walk()
+ * @addr:	The stack entry address to consume
+ * @reliable:	True when the stack entry is reliable. Required by
+ *		some printk based consumers.
+ *
+ * Return:	True, if the entry was consumed or skipped
+ *		False, if there is no space left to store
+ */
+typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
+				       bool reliable);
+/**
+ * arch_stack_walk - Architecture specific function to walk the stack
+ * @consume_entry:	Callback which is invoked by the architecture code for
+ *			each entry.
+ * @cookie:		Caller supplied pointer which is handed back to
+ *			@consume_entry
+ * @task:		Pointer to a task struct, can be NULL
+ * @regs:		Pointer to registers, can be NULL
+ *
+ * ============ ======= ============================================
+ * task	        regs
+ * ============ ======= ============================================
+ * task		NULL	Stack trace from task (can be current)
+ * current	regs	Stack trace starting on regs->stackpointer
+ * ============ ======= ============================================
+ */
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs);
+int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
+			     struct task_struct *task);
+void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+			  const struct pt_regs *regs);
+
+#else /* CONFIG_ARCH_STACKWALK */
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
@@ -37,6 +75,7 @@ extern void save_stack_trace_tsk(struct
 extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
 					 struct stack_trace *trace);
 extern void save_stack_trace_user(struct stack_trace *trace);
+#endif /* !CONFIG_ARCH_STACKWALK */
 #endif /* CONFIG_STACKTRACE */
 
 #if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -5,6 +5,8 @@
  *
  *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
  */
+#include <linux/sched/task_stack.h>
+#include <linux/sched/debug.h>
 #include <linux/sched.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
@@ -66,6 +68,175 @@ int stack_trace_snprint(char *buf, size_
 }
 EXPORT_SYMBOL_GPL(stack_trace_snprint);
 
+#ifdef CONFIG_ARCH_STACKWALK
+
+struct stacktrace_cookie {
+	unsigned long	*store;
+	unsigned int	size;
+	unsigned int	skip;
+	unsigned int	len;
+};
+
+static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
+				      bool reliable)
+{
+	struct stacktrace_cookie *c = cookie;
+
+	if (c->len >= c->size)
+		return false;
+
+	if (c->skip > 0) {
+		c->skip--;
+		return true;
+	}
+	c->store[c->len++] = addr;
+	return c->len < c->size;
+}
+
+static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr,
+					      bool reliable)
+{
+	if (in_sched_functions(addr))
+		return true;
+	return stack_trace_consume_entry(cookie, addr, reliable);
+}
+
+/**
+ * stack_trace_save - Save a stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr + 1,
+	};
+
+	arch_stack_walk(consume_entry, &c, current, NULL);
+	return c.len;
+}
+EXPORT_SYMBOL_GPL(stack_trace_save);
+
+/**
+ * stack_trace_save_tsk - Save a task stack trace into a storage array
+ * @task:	The task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size, unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry_nosched;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr + 1,
+	};
+
+	if (!try_get_task_stack(tsk))
+		return 0;
+
+	arch_stack_walk(consume_entry, &c, tsk, NULL);
+	put_task_stack(tsk);
+	return c.len;
+}
+
+/**
+ * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * @regs:	Pointer to pt_regs to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr,
+	};
+
+	arch_stack_walk(consume_entry, &c, current, regs);
+	return c.len;
+}
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+/**
+ * stack_trace_save_tsk_reliable - Save task stack with verification
+ * @tsk:	Pointer to the task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return:	An error if it detects any unreliable features of the
+ *		stack. Otherwise it guarantees that the stack trace is
+ *		reliable and returns the number of entries stored.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
+ */
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+	};
+	int ret;
+
+	/*
+	 * If the task doesn't have a stack (e.g., a zombie), the stack is
+	 * "reliably" empty.
+	 */
+	if (!try_get_task_stack(tsk))
+		return 0;
+
+	ret = arch_stack_walk_reliable(consume_entry, &c, tsk);
+	put_task_stack(tsk);
+	return ret;
+}
+#endif
+
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+/**
+ * stack_trace_save_user - Save a user space stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Return: Number of trace entries stored.
+ */
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+	};
+
+	/* Trace user stack if not a kernel thread */
+	if (!current->mm)
+		return 0;
+
+	arch_stack_walk_user(consume_entry, &c, task_pt_regs(current));
+	return c.len;
+}
+#endif
+
+#else /* CONFIG_ARCH_STACKWALK */
+
 /*
  * Architectures that do not implement save_stack_trace_*()
  * get these weak aliases and once-per-bootup warnings
@@ -203,3 +374,5 @@ unsigned int stack_trace_save_user(unsig
 	return trace.nr_entries;
 }
 #endif /* CONFIG_USER_STACKTRACE_SUPPORT */
+
+#endif /* !CONFIG_ARCH_STACKWALK */
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -597,6 +597,10 @@ config ARCH_HAS_UACCESS_FLUSHCACHE
 config ARCH_HAS_UACCESS_MCSAFE
 	bool
 
+# Temporary. Goes away when all archs are cleaned up
+config ARCH_STACKWALK
+       bool
+
 config STACKDEPOT
 	bool
 	select STACKTRACE




* [patch V3 29/29] x86/stacktrace: Use common infrastructure
From: Thomas Gleixner @ 2019-04-25  9:45 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, linux-arch, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes

Replace x86's save_stack_trace*() implementations with the new
arch_stack_walk() interfaces, on top of which the generic
stack_trace_save*() functions are built.
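
Generic callers are unaffected by the conversion; a hypothetical user
keeps calling the common helpers, which are now implemented on top of
arch_stack_walk() in kernel/stacktrace.c:

#include <linux/kernel.h>
#include <linux/stacktrace.h>

/* Hypothetical generic caller: unchanged by the x86 conversion */
static void example_report(void)
{
	unsigned long entries[32];
	unsigned int nr;

	nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
	stack_trace_print(entries, nr, 0);
}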

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
---
 arch/x86/Kconfig             |    1 
 arch/x86/kernel/stacktrace.c |  116 +++++++------------------------------------
 2 files changed, 20 insertions(+), 97 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -74,6 +74,7 @@ config X86
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
+	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ACPI
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -12,75 +12,31 @@
 #include <asm/stacktrace.h>
 #include <asm/unwind.h>
 
-static int save_stack_address(struct stack_trace *trace, unsigned long addr,
-			      bool nosched)
-{
-	if (nosched && in_sched_functions(addr))
-		return 0;
-
-	if (trace->skip > 0) {
-		trace->skip--;
-		return 0;
-	}
-
-	if (trace->nr_entries >= trace->max_entries)
-		return -1;
-
-	trace->entries[trace->nr_entries++] = addr;
-	return 0;
-}
-
-static void noinline __save_stack_trace(struct stack_trace *trace,
-			       struct task_struct *task, struct pt_regs *regs,
-			       bool nosched)
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs)
 {
 	struct unwind_state state;
 	unsigned long addr;
 
-	if (regs)
-		save_stack_address(trace, regs->ip, nosched);
+	if (regs && !consume_entry(cookie, regs->ip, false))
+		return;
 
 	for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
 	     unwind_next_frame(&state)) {
 		addr = unwind_get_return_address(&state);
-		if (!addr || save_stack_address(trace, addr, nosched))
+		if (!addr || !consume_entry(cookie, addr, false))
 			break;
 	}
 }
 
 /*
- * Save stack-backtrace addresses into a stack_trace buffer.
+ * This function returns an error if it detects any unreliable features of the
+ * stack.  Otherwise it guarantees that the stack trace is reliable.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-void save_stack_trace(struct stack_trace *trace)
-{
-	trace->skip++;
-	__save_stack_trace(trace, current, NULL, false);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace);
-
-void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
-{
-	__save_stack_trace(trace, current, regs, false);
-}
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
-{
-	if (!try_get_task_stack(tsk))
-		return;
-
-	if (tsk == current)
-		trace->skip++;
-	__save_stack_trace(trace, tsk, NULL, true);
-
-	put_task_stack(tsk);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
-
-#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
-
-static int __always_inline
-__save_stack_trace_reliable(struct stack_trace *trace,
-			    struct task_struct *task)
+int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+			     void *cookie, struct task_struct *task)
 {
 	struct unwind_state state;
 	struct pt_regs *regs;
@@ -117,7 +73,7 @@ static int __always_inline
 		if (!addr)
 			return -EINVAL;
 
-		if (save_stack_address(trace, addr, false))
+		if (!consume_entry(cookie, addr, false))
 			return -EINVAL;
 	}
 
@@ -132,32 +88,6 @@ static int __always_inline
 	return 0;
 }
 
-/*
- * This function returns an error if it detects any unreliable features of the
- * stack.  Otherwise it guarantees that the stack trace is reliable.
- *
- * If the task is not 'current', the caller *must* ensure the task is inactive.
- */
-int save_stack_trace_tsk_reliable(struct task_struct *tsk,
-				  struct stack_trace *trace)
-{
-	int ret;
-
-	/*
-	 * If the task doesn't have a stack (e.g., a zombie), the stack is
-	 * "reliably" empty.
-	 */
-	if (!try_get_task_stack(tsk))
-		return 0;
-
-	ret = __save_stack_trace_reliable(trace, tsk);
-
-	put_task_stack(tsk);
-
-	return ret;
-}
-#endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */
-
 /* Userspace stacktrace - based on kernel/trace/trace_sysprof.c */
 
 struct stack_frame_user {
@@ -182,15 +112,15 @@ copy_stack_frame(const void __user *fp,
 	return ret;
 }
 
-static inline void __save_stack_trace_user(struct stack_trace *trace)
+void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+			  const struct pt_regs *regs)
 {
-	const struct pt_regs *regs = task_pt_regs(current);
 	const void __user *fp = (const void __user *)regs->bp;
 
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = regs->ip;
+	if (!consume_entry(cookie, regs->ip, false))
+		return;
 
-	while (trace->nr_entries < trace->max_entries) {
+	while (1) {
 		struct stack_frame_user frame;
 
 		frame.next_fp = NULL;
@@ -200,8 +130,8 @@ static inline void __save_stack_trace_us
 		if ((unsigned long)fp < regs->sp)
 			break;
 		if (frame.ret_addr) {
-			trace->entries[trace->nr_entries++] =
-				frame.ret_addr;
+			if (!consume_entry(cookie, frame.ret_addr, false))
+				return;
 		}
 		if (fp == frame.next_fp)
 			break;
@@ -209,11 +139,3 @@ static inline void __save_stack_trace_us
 	}
 }
 
-void save_stack_trace_user(struct stack_trace *trace)
-{
-	/*
-	 * Trace user stack if we are not a kernel thread
-	 */
-	if (current->mm)
-		__save_stack_trace_user(trace);
-}




* Re: [patch V3 00/29] stacktrace: Consolidate stack trace usage
From: Ingo Molnar @ 2019-04-25 10:09 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch


* Thomas Gleixner <tglx@linutronix.de> wrote:

> -	if (unlikely(!ret))
> +	if (unlikely(!ret)) {
> +		if (!trace->nr_entries) {
> +			/*
> +			 * If save_trace fails here, the printing might
> +			 * trigger a WARN but because of the !nr_entries it
> +			 * should not do bad things.
> +			 */
> +			save_trace(trace);
> +		}
>  		return print_circular_bug(&this, target_entry, next, prev);
> +	}
>  	else if (unlikely(ret < 0))
>  		return print_bfs_bug(ret);

Just a minor style nit: the 'else' should probably be on the same line as
the '}' it belongs to, to make it really obvious that the 'if' has an
'else' branch?

At that point the condition should probably also use balanced curly 
braces.
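
That is, something along these lines (just a sketch of the suggested
shape):

	if (unlikely(!ret)) {
		if (!trace->nr_entries) {
			/*
			 * If save_trace fails here, the printing might
			 * trigger a WARN but because of the !nr_entries it
			 * should not do bad things.
			 */
			save_trace(trace);
		}
		return print_circular_bug(&this, target_entry, next, prev);
	} else if (unlikely(ret < 0)) {
		return print_bfs_bug(ret);
	}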

Interdiff looks good otherwise.

Thanks,

	Ingo


* Re: [patch V3 21/29] tracing: Use percpu stack trace buffer more intelligently
From: Josh Poimboeuf @ 2019-04-25 13:29 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Christoph Lameter, Pekka Enberg,
	linux-mm, David Rientjes, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita,
	Christoph Hellwig, iommu, Robin Murphy, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, dm-devel, Mike Snitzer, Alasdair Kergon,
	Daniel Vetter, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi,
	Miroslav Benes, linux-arch

On Thu, Apr 25, 2019 at 11:45:14AM +0200, Thomas Gleixner wrote:
> @@ -2788,29 +2798,32 @@ static void __ftrace_trace_stack(struct
>  	 */
>  	preempt_disable_notrace();
>  
> -	use_stack = __this_cpu_inc_return(ftrace_stack_reserve);
> +	stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
> +
> +	/* This should never happen. If it does, yell once and skip */
> +	if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
> +		goto out;
> +
>  	/*
> -	 * We don't need any atomic variables, just a barrier.
> -	 * If an interrupt comes in, we don't care, because it would
> -	 * have exited and put the counter back to what we want.
> -	 * We just need a barrier to keep gcc from moving things
> -	 * around.
> +	 * The above __this_cpu_inc_return() is 'atomic' cpu local. An
> +	 * interrupt will either see the value pre increment or post
> +	 * increment. If the interrupt happens pre increment it will have
> +	 * restored the counter when it returns.  We just need a barrier to
> +	 * keep gcc from moving things around.
>  	 */
>  	barrier();
> -	if (use_stack == 1) {
> -		trace.entries		= this_cpu_ptr(ftrace_stack.calls);
> -		trace.max_entries	= FTRACE_STACK_MAX_ENTRIES;
> -
> -		if (regs)
> -			save_stack_trace_regs(regs, &trace);
> -		else
> -			save_stack_trace(&trace);
> -
> -		if (trace.nr_entries > size)
> -			size = trace.nr_entries;
> -	} else
> -		/* From now on, use_stack is a boolean */
> -		use_stack = 0;
> +
> +	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);

nit: it would be slightly less surprising if stackidx were 0-based:

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d3f6ec7eb729..4fc93004feab 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2798,10 +2798,10 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	 */
 	preempt_disable_notrace();
 
-	stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
+	stackidx = __this_cpu_inc_return(ftrace_stack_reserve) - 1;
 
 	/* This should never happen. If it does, yell once and skip */
 	if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
 		goto out;
 
 	/*
@@ -2813,7 +2813,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
 	 */
 	barrier();
 
-	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
+	fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx;
 	trace.entries		= fstack->calls;
 	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
 


* Re: [patch V3 24/29] tracing: Remove the last struct stack_trace usage
From: Josh Poimboeuf @ 2019-04-25 13:30 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Christoph Lameter, Pekka Enberg,
	linux-mm, David Rientjes, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita,
	Christoph Hellwig, iommu, Robin Murphy, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, dm-devel, Mike Snitzer, Alasdair Kergon,
	Daniel Vetter, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi,
	Miroslav Benes, linux-arch

On Thu, Apr 25, 2019 at 11:45:17AM +0200, Thomas Gleixner wrote:
> Simplify the stack retrieval code by using the storage array based
> interface.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  kernel/trace/trace_stack.c |   37 ++++++++++++++++---------------------
>  1 file changed, 16 insertions(+), 21 deletions(-)
> 
> --- a/kernel/trace/trace_stack.c
> +++ b/kernel/trace/trace_stack.c
> @@ -23,11 +23,7 @@
>  static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES];
>  static unsigned stack_trace_index[STACK_TRACE_ENTRIES];
>  
> -struct stack_trace stack_trace_max = {
> -	.max_entries		= STACK_TRACE_ENTRIES,
> -	.entries		= &stack_dump_trace[0],
> -};
> -
> +static unsigned int stack_trace_entries;

"stack_trace_entries" -> "nr_stack_trace_entries"

-- 
Josh


* Re: [patch V3 00/29] stacktrace: Consolidate stack trace usage
From: Josh Poimboeuf @ 2019-04-25 13:31 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Christoph Lameter, Pekka Enberg,
	linux-mm, David Rientjes, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita,
	Christoph Hellwig, iommu, Robin Murphy, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, dm-devel, Mike Snitzer, Alasdair Kergon,
	Daniel Vetter, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Rodrigo Vivi, Tom Zanussi,
	Miroslav Benes, linux-arch

On Thu, Apr 25, 2019 at 11:44:53AM +0200, Thomas Gleixner wrote:
> This is an update to V2 which can be found here:
> 
>   https://lkml.kernel.org/r/20190418084119.056416939@linutronix.de
> 
> Changes vs. V2:
> 
>   - Fixed the kernel-doc issue pointed out by Mike
> 
>   - Removed the '-1' oddity from the tracer
> 
>   - Restricted the tracer nesting to 4
> 
>   - Restored the lockdep magic to prevent redundant stack traces
> 
>   - Addressed the small nitpicks here and there
> 
>   - Picked up Acked/Reviewed tags

Other than the 2 minor nits:

Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>

-- 
Josh


* Re: [patch V3 18/29] lockdep: Remove save argument from check_prev_add()
From: Peter Zijlstra @ 2019-04-25 13:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Christoph Lameter, Pekka Enberg, linux-mm, David Rientjes,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, Christoph Hellwig, iommu,
	Robin Murphy, Marek Szyprowski, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, Daniel Vetter, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Rodrigo Vivi, Tom Zanussi, Miroslav Benes, linux-arch

On Thu, Apr 25, 2019 at 11:45:11AM +0200, Thomas Gleixner wrote:
> There is only one caller which hands in save_trace as function pointer.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

