* [patch V2 00/29] stacktrace: Consolidate stack trace usage
@ 2019-04-18  8:41 Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 01/29] tracing: Cleanup stack trace code Thomas Gleixner
                   ` (28 more replies)
  0 siblings, 29 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

This is an update to V1:

 https://lkml.kernel.org/r/20190410102754.387743324@linutronix.de

struct stack_trace is a sinkhole for input and output parameters, which is
largely pointless for most usage sites. In fact, when embedded into other
data structures it creates indirections and extra storage overhead for no
benefit.

Looking at all usage sites makes it clear that they just require an
interface which is based on a storage array. That array is either on the
stack, global, or embedded into some other data structure.
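
To make the contrast concrete, here is a minimal sketch comparing today's
boilerplate with the array based calls this series introduces (the new
function names are the ones added in patch 2):

    /* Today: struct stack_trace as an input/output sinkhole */
    unsigned long entries[8];
    struct stack_trace trace = {
        .entries     = entries,
        .max_entries = ARRAY_SIZE(entries),
        .skip        = 0,
    };

    save_stack_trace(&trace);
    print_stack_trace(&trace, 0);   /* entry count hidden in the struct */

    /* With this series: plain storage array, count returned directly */
    unsigned int nr_entries;

    nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
    stack_trace_print(entries, nr_entries, 0);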

Some of the stack depot usage sites are outright wrong, but fortunately the
wrongness just causes more stack space to be used for nothing and has no
functional impact.

Fix this up by:

  1) Providing plain storage array based interfaces for stacktrace and
     stackdepot.

  2) Cleaning up the mess at the callsites including some related
     cleanups.

  3) Removing the struct stack_trace based interfaces.

  This does not yet change the struct stack_trace interfaces at the
  architecture level, but it removes the usage sites' exposure to them.

The last two patches extend the cleanup to the architecture level by
replacing the various save_stack_trace.* architecture interfaces with a
more unified arch_stack_walk() interface. Only x86 is converted here, but I
have already worked through all architectures; the conversion removes lots
of duplicated code and allows consolidation across the board. The remaining
architecture patches are not included in this posting as I first want
feedback on the approach itself. The diffstat of cleaning up the remaining
architectures, on top of the current lot, is currently:

   47 files changed, 402 insertions(+), 1196 deletions(-)

Once this has settled, the core interfaces can be improved by adding
features which allow getting rid of the imprecise 'skip number of entries'
approach that tries to remove the stack tracer and the callsites themselves
from the trace. That approach is error prone due to inlining and other
issues. Having e.g. a _RET_IP_ based filter would allow doing this far more
reliably.
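
As a purely illustrative sketch of that idea (stack_trace_save_filter()
does not exist in this series; its name and semantics are made up here),
the callsite would pass _RET_IP_ and the core code would discard everything
above the matching frame:

    /* Hypothetical: drop all entries above the frame matching @filter_ip */
    static unsigned int stack_trace_save_filter(unsigned long *store,
                                                unsigned int size,
                                                unsigned long filter_ip)
    {
        unsigned int i, nr = stack_trace_save(store, size, 0);

        for (i = 0; i < nr; i++) {
            if (store[i] == filter_ip) {
                /* Keep the trace from the real caller onwards */
                memmove(store, &store[i], (nr - i) * sizeof(*store));
                return nr - i;
            }
        }
        return nr;  /* no match: return the trace unfiltered */
    }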

The series is based on:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/stacktrace

which contains the removal of the inconsistent and pointless ULONG_MAX
termination of stacktraces.

It's also available from git:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.core/stacktrace

   up to:  131038eb3e2f ("x86/stacktrace: Use common infrastructure")

Changes vs. V1:

   - Applied the ULONG_MAX termination cleanup in tip

   - Addressed the review comments

   - Fixed up the last users of struct stack_trace outside the stacktrace
     core and architecture code (livepatch, tracing)

   - Added the new arch_stack_walk() model and converted x86 to it

Thanks,

	tglx

---
 arch/x86/Kconfig                              |    1 
 arch/x86/kernel/stacktrace.c                  |  116 +--------
 drivers/gpu/drm/drm_mm.c                      |   22 -
 drivers/gpu/drm/i915/i915_vma.c               |   11 
 drivers/gpu/drm/i915/intel_runtime_pm.c       |   21 -
 drivers/md/dm-bufio.c                         |   15 -
 drivers/md/persistent-data/dm-block-manager.c |   19 -
 fs/btrfs/ref-verify.c                         |   15 -
 fs/proc/base.c                                |   14 -
 include/linux/ftrace.h                        |   18 -
 include/linux/lockdep.h                       |    9 
 include/linux/stackdepot.h                    |    8 
 include/linux/stacktrace.h                    |   80 +++++-
 kernel/backtracetest.c                        |   11 
 kernel/dma/debug.c                            |   13 -
 kernel/latencytop.c                           |   17 -
 kernel/livepatch/transition.c                 |   22 -
 kernel/locking/lockdep.c                      |   81 ++----
 kernel/stacktrace.c                           |  323 ++++++++++++++++++++++++--
 kernel/trace/trace.c                          |  105 +++-----
 kernel/trace/trace.h                          |    8 
 kernel/trace/trace_events_hist.c              |   12 
 kernel/trace/trace_stack.c                    |   76 ++----
 lib/Kconfig                                   |    4 
 lib/fault-inject.c                            |   12 
 lib/stackdepot.c                              |   50 ++--
 mm/kasan/common.c                             |   30 --
 mm/kasan/report.c                             |    7 
 mm/kmemleak.c                                 |   24 -
 mm/page_owner.c                               |   79 ++----
 mm/slub.c                                     |   12 
 31 files changed, 664 insertions(+), 571 deletions(-)





* [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 13:57   ` Josh Poimboeuf
  2019-04-18 22:19   ` Steven Rostedt
  2019-04-18  8:41 ` [patch V2 02/29] stacktrace: Provide helpers for common stack trace operations Thomas Gleixner
                   ` (27 subsequent siblings)
  28 siblings, 2 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

- Remove the extra array member of stack_dump_trace[]. It's not required as
  the stack tracer stores at most 'array size - 1' entries, so there is
  always an empty slot.

- Make variables which are only used in trace_stack.c static.

- Simplify the enable/disable logic.

- Rename stack_trace_print() as it occupies the stack_trace_ namespace,
  freeing the name up for generic stack trace related functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
V2: Add more cleanups and use print_max_stack() as requested by Steven.
---
 include/linux/ftrace.h     |   18 ++++--------------
 kernel/trace/trace_stack.c |   36 ++++++++++++------------------------
 2 files changed, 16 insertions(+), 38 deletions(-)

--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -241,21 +241,11 @@ static inline void ftrace_free_mem(struc
 
 #ifdef CONFIG_STACK_TRACER
 
-#define STACK_TRACE_ENTRIES 500
-
-struct stack_trace;
-
-extern unsigned stack_trace_index[];
-extern struct stack_trace stack_trace_max;
-extern unsigned long stack_trace_max_size;
-extern arch_spinlock_t stack_trace_max_lock;
-
 extern int stack_tracer_enabled;
-void stack_trace_print(void);
-int
-stack_trace_sysctl(struct ctl_table *table, int write,
-		   void __user *buffer, size_t *lenp,
-		   loff_t *ppos);
+
+int stack_trace_sysctl(struct ctl_table *table, int write,
+		       void __user *buffer, size_t *lenp,
+		       loff_t *ppos);
 
 /* DO NOT MODIFY THIS VARIABLE DIRECTLY! */
 DECLARE_PER_CPU(int, disable_stack_tracer);
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -18,8 +18,10 @@
 
 #include "trace.h"
 
-static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES + 1];
-unsigned stack_trace_index[STACK_TRACE_ENTRIES];
+#define STACK_TRACE_ENTRIES 500
+
+static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES];
+static unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
 /*
  * Reserve one entry for the passed in ip. This will allow
@@ -31,17 +33,16 @@ struct stack_trace stack_trace_max = {
 	.entries		= &stack_dump_trace[0],
 };
 
-unsigned long stack_trace_max_size;
-arch_spinlock_t stack_trace_max_lock =
+static unsigned long stack_trace_max_size;
+static arch_spinlock_t stack_trace_max_lock =
 	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
 
 DEFINE_PER_CPU(int, disable_stack_tracer);
 static DEFINE_MUTEX(stack_sysctl_mutex);
 
 int stack_tracer_enabled;
-static int last_stack_tracer_enabled;
 
-void stack_trace_print(void)
+static void print_max_stack(void)
 {
 	long i;
 	int size;
@@ -61,16 +62,7 @@ void stack_trace_print(void)
 	}
 }
 
-/*
- * When arch-specific code overrides this function, the following
- * data should be filled up, assuming stack_trace_max_lock is held to
- * prevent concurrent updates.
- *     stack_trace_index[]
- *     stack_trace_max
- *     stack_trace_max_size
- */
-void __weak
-check_stack(unsigned long ip, unsigned long *stack)
+static void check_stack(unsigned long ip, unsigned long *stack)
 {
 	unsigned long this_size, flags; unsigned long *p, *top, *start;
 	static int tracer_frame;
@@ -179,7 +171,7 @@ check_stack(unsigned long ip, unsigned l
 	stack_trace_max.nr_entries = x;
 
 	if (task_stack_end_corrupted(current)) {
-		stack_trace_print();
+		print_max_stack();
 		BUG();
 	}
 
@@ -412,23 +404,20 @@ stack_trace_sysctl(struct ctl_table *tab
 		   void __user *buffer, size_t *lenp,
 		   loff_t *ppos)
 {
-	int ret;
+	int ret, was_enabled;
 
 	mutex_lock(&stack_sysctl_mutex);
+	was_enabled = !!stack_tracer_enabled;
 
 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
 
-	if (ret || !write ||
-	    (last_stack_tracer_enabled == !!stack_tracer_enabled))
+	if (ret || !write || (was_enabled == !!stack_tracer_enabled))
 		goto out;
 
-	last_stack_tracer_enabled = !!stack_tracer_enabled;
-
 	if (stack_tracer_enabled)
 		register_ftrace_function(&trace_ops);
 	else
 		unregister_ftrace_function(&trace_ops);
-
  out:
 	mutex_unlock(&stack_sysctl_mutex);
 	return ret;
@@ -444,7 +433,6 @@ static __init int enable_stacktrace(char
 		strncpy(stack_trace_filter_buf, str + len, COMMAND_LINE_SIZE);
 
 	stack_tracer_enabled = 1;
-	last_stack_tracer_enabled = 1;
 	return 1;
 }
 __setup("stacktrace", enable_stacktrace);




* [patch V2 02/29] stacktrace: Provide helpers for common stack trace operations
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 01/29] tracing: Cleanup stack trace code Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
                   ` (26 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

All operations with stack traces are based on struct stack_trace. That's a
horrible construct as the struct is a kitchen sink for input and
output. Quite a few usage sites embed it into their own data structures,
which creates weird indirections.

There is absolutely no point in doing so. For all use cases a storage array
and the number of valid stack trace entries in the array are sufficient.

Provide helper functions which avoid the struct stack_trace indirection so
the usage sites can be cleaned up.
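
A minimal usage sketch of the new helpers, modeled on how the subsequent
patches convert their callsites ('task' is assumed to be a valid
struct task_struct pointer here):

    unsigned long entries[16];
    unsigned int nr_entries;

    /* Trace of the current context */
    nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
    stack_trace_print(entries, nr_entries, 0);

    /* Trace of another task */
    nr_entries = stack_trace_save_tsk(task, entries, ARRAY_SIZE(entries), 0);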

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/stacktrace.h |   27 +++++++
 kernel/stacktrace.c        |  160 ++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 172 insertions(+), 15 deletions(-)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -3,11 +3,26 @@
 #define __LINUX_STACKTRACE_H
 
 #include <linux/types.h>
+#include <asm/errno.h>
 
 struct task_struct;
 struct pt_regs;
 
 #ifdef CONFIG_STACKTRACE
+void stack_trace_print(unsigned long *trace, unsigned int nr_entries,
+		       int spaces);
+int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			unsigned int nr_entries, int spaces);
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr);
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr);
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr);
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+
+/* Internal interfaces. Do not use in generic code */
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
@@ -41,4 +56,16 @@ extern void save_stack_trace_user(struct
 # define save_stack_trace_tsk_reliable(tsk, trace)	({ -ENOSYS; })
 #endif /* CONFIG_STACKTRACE */
 
+#if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size);
+#else
+static inline int stack_trace_save_tsk_reliable(struct task_struct *tsk,
+						unsigned long *store,
+						unsigned int size)
+{
+	return -ENOSYS;
+}
+#endif
+
 #endif /* __LINUX_STACKTRACE_H */
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -11,35 +11,52 @@
 #include <linux/kallsyms.h>
 #include <linux/stacktrace.h>
 
-void print_stack_trace(struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_print - Print the entries in the stack trace
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ */
+void stack_trace_print(unsigned long *entries, unsigned int nr_entries,
+		       int spaces)
 {
-	int i;
+	unsigned int i;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return;
 
-	for (i = 0; i < trace->nr_entries; i++)
-		printk("%*c%pS\n", 1 + spaces, ' ', (void *)trace->entries[i]);
+	for (i = 0; i < nr_entries; i++)
+		printk("%*c%pS\n", 1 + spaces, ' ', (void *)entries[i]);
+}
+EXPORT_SYMBOL_GPL(stack_trace_print);
+
+void print_stack_trace(struct stack_trace *trace, int spaces)
+{
+	stack_trace_print(trace->entries, trace->nr_entries, spaces);
 }
 EXPORT_SYMBOL_GPL(print_stack_trace);
 
-int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_snprint - Print the entries in the stack trace into a buffer
+ * @buf:	Pointer to the print buffer
+ * @size:	Size of the print buffer
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ */
+int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			unsigned int nr_entries, int spaces)
 {
-	int i;
-	int generated;
-	int total = 0;
+	unsigned int generated, i, total = 0;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return 0;
 
-	for (i = 0; i < trace->nr_entries; i++) {
+	for (i = 0; i < nr_entries; i++) {
 		generated = snprintf(buf, size, "%*c%pS\n", 1 + spaces, ' ',
-				     (void *)trace->entries[i]);
+				     (void *)entries[i]);
 
 		total += generated;
-
-		/* Assume that generated isn't a negative number */
 		if (generated >= size) {
 			buf += size;
 			size = 0;
@@ -51,6 +68,14 @@ int snprint_stack_trace(char *buf, size_
 
 	return total;
 }
+EXPORT_SYMBOL_GPL(stack_trace_snprint);
+
+int snprint_stack_trace(char *buf, size_t size,
+			struct stack_trace *trace, int spaces)
+{
+	return stack_trace_snprint(buf, size, trace->entries,
+				   trace->nr_entries, spaces);
+}
 EXPORT_SYMBOL_GPL(snprint_stack_trace);
 
 /*
@@ -77,3 +102,108 @@ save_stack_trace_tsk_reliable(struct tas
 	WARN_ONCE(1, KERN_INFO "save_stack_tsk_reliable() not implemented yet.\n");
 	return -ENOSYS;
 }
+
+/**
+ * stack_trace_save - Save a stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ */
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace(&trace);
+	return trace.nr_entries;
+}
+EXPORT_SYMBOL_GPL(stack_trace_save);
+
+/**
+ * stack_trace_save_tsk - Save a task stack trace into a storage array
+ * @task:	The task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ */
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace_tsk(task, &trace);
+	return trace.nr_entries;
+}
+
+/**
+ * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * @regs:	Pointer to pt_regs to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ */
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr,
+	};
+
+	save_stack_trace_regs(regs, &trace);
+	return trace.nr_entries;
+}
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+/**
+ * stack_trace_save_tsk_reliable - Save task stack with verification
+ * @tsk:	Pointer to the task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Returns:	An error if it detects any unreliable features of the
+ *		stack. Otherwise it guarantees that the stack trace is
+ *		reliable and returns the number of entries stored.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
+ */
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+	};
+	int ret = save_stack_trace_tsk_reliable(tsk, &trace);
+
+	return ret ? ret : trace.nr_entries;
+}
+#endif
+
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+/**
+ * stack_trace_save_user - Save a user space stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ */
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+	};
+
+	save_stack_trace_user(&trace);
+	return trace.nr_entries;
+}
+#endif /* CONFIG_USER_STACKTRACE_SUPPORT */




* [patch V2 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 01/29] tracing: Cleanup stack trace code Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 02/29] stacktrace: Provide helpers for common stack trace operations Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 11:51   ` Mike Rapoport
  2019-04-18  8:41 ` [patch V2 04/29] backtrace-test: Simplify stack trace handling Thomas Gleixner
                   ` (25 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

The struct stack_trace indirection in the stack depot functions is a truly
pointless exercise which requires horrible code at the callsites.

Provide interfaces based on plain storage arrays.
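
A minimal round-trip sketch with the new functions, mirroring how the
KASAN and page owner patches later in this series use them:

    unsigned long entries[8], *stored;
    unsigned int nr_entries;
    depot_stack_handle_t handle;

    nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
    handle = stack_depot_save(entries, nr_entries, GFP_KERNEL);

    /* Later, e.g. when printing a report */
    nr_entries = stack_depot_fetch(handle, &stored);
    stack_trace_print(stored, nr_entries, 0);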

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Alexander Potapenko <glider@google.com>
---
 include/linux/stackdepot.h |    4 ++
 lib/stackdepot.c           |   66 ++++++++++++++++++++++++++++++++-------------
 2 files changed, 51 insertions(+), 19 deletions(-)

--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -26,7 +26,11 @@ typedef u32 depot_stack_handle_t;
 struct stack_trace;
 
 depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries, gfp_t gfp_flags);
 
 void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries);
 
 #endif
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -194,40 +194,56 @@ static inline struct stack_record *find_
 	return NULL;
 }
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+/**
+ * stack_depot_fetch - Fetch stack entries from a depot
+ *
+ * @entries:		Pointer to store the entries address
+ */
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries)
 {
 	union handle_parts parts = { .handle = handle };
 	void *slab = stack_slabs[parts.slabindex];
 	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
 	struct stack_record *stack = slab + offset;
 
-	trace->nr_entries = trace->max_entries = stack->size;
-	trace->entries = stack->entries;
-	trace->skip = 0;
+	*entries = stack->entries;
+	return stack->size;
+}
+EXPORT_SYMBOL_GPL(stack_depot_fetch);
+
+void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+{
+	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
+
+	trace->max_entries = trace->nr_entries = nent;
 }
 EXPORT_SYMBOL_GPL(depot_fetch_stack);
 
 /**
- * depot_save_stack - save stack in a stack depot.
- * @trace - the stacktrace to save.
- * @alloc_flags - flags for allocating additional memory if required.
+ * stack_depot_save - Save a stack trace from an array
  *
- * Returns the handle of the stack struct stored in depot.
+ * @entries:		Pointer to storage array
+ * @nr_entries:		Size of the storage array
+ * @alloc_flags:	Allocation gfp flags
+ *
+ * Returns the handle of the stack struct stored in depot
  */
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
-				    gfp_t alloc_flags)
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries,
+				      gfp_t alloc_flags)
 {
-	u32 hash;
-	depot_stack_handle_t retval = 0;
 	struct stack_record *found = NULL, **bucket;
-	unsigned long flags;
+	depot_stack_handle_t retval = 0;
 	struct page *page = NULL;
 	void *prealloc = NULL;
+	unsigned long flags;
+	u32 hash;
 
-	if (unlikely(trace->nr_entries == 0))
+	if (unlikely(nr_entries == 0))
 		goto fast_exit;
 
-	hash = hash_stack(trace->entries, trace->nr_entries);
+	hash = hash_stack(entries, nr_entries);
 	bucket = &stack_table[hash & STACK_HASH_MASK];
 
 	/*
@@ -235,8 +251,8 @@ depot_stack_handle_t depot_save_stack(st
 	 * The smp_load_acquire() here pairs with smp_store_release() to
 	 * |bucket| below.
 	 */
-	found = find_stack(smp_load_acquire(bucket), trace->entries,
-			   trace->nr_entries, hash);
+	found = find_stack(smp_load_acquire(bucket), entries,
+			   nr_entries, hash);
 	if (found)
 		goto exit;
 
@@ -264,10 +280,10 @@ depot_stack_handle_t depot_save_stack(st
 
 	spin_lock_irqsave(&depot_lock, flags);
 
-	found = find_stack(*bucket, trace->entries, trace->nr_entries, hash);
+	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
 		struct stack_record *new =
-			depot_alloc_stack(trace->entries, trace->nr_entries,
+			depot_alloc_stack(entries, nr_entries,
 					  hash, &prealloc, alloc_flags);
 		if (new) {
 			new->next = *bucket;
@@ -297,4 +313,16 @@ depot_stack_handle_t depot_save_stack(st
 fast_exit:
 	return retval;
 }
+EXPORT_SYMBOL_GPL(stack_depot_save);
+
+/**
+ * depot_save_stack - save stack in a stack depot.
+ * @trace - the stacktrace to save.
+ * @alloc_flags - flags for allocating additional memory if required.
+ */
+depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
+				      gfp_t alloc_flags)
+{
+	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
+}
 EXPORT_SYMBOL_GPL(depot_save_stack);




* [patch V2 04/29] backtrace-test: Simplify stack trace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (2 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 05/29] proc: Simplify task stack retrieval Thomas Gleixner
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/backtracetest.c |   11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

--- a/kernel/backtracetest.c
+++ b/kernel/backtracetest.c
@@ -48,19 +48,14 @@ static void backtrace_test_irq(void)
 #ifdef CONFIG_STACKTRACE
 static void backtrace_test_saved(void)
 {
-	struct stack_trace trace;
 	unsigned long entries[8];
+	unsigned int nr_entries;
 
 	pr_info("Testing a saved backtrace.\n");
 	pr_info("The following trace is a kernel self test and not a bug!\n");
 
-	trace.nr_entries = 0;
-	trace.max_entries = ARRAY_SIZE(entries);
-	trace.entries = entries;
-	trace.skip = 0;
-
-	save_stack_trace(&trace);
-	print_stack_trace(&trace, 0);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	stack_trace_print(entries, nr_entries, 0);
 }
 #else
 static void backtrace_test_saved(void)




* [patch V2 05/29] proc: Simplify task stack retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (3 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 04/29] backtrace-test: Simplify stack trace handling Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 06/29] latency_top: Simplify stack trace handling Thomas Gleixner
                   ` (23 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 fs/proc/base.c |   14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -407,7 +407,6 @@ static void unlock_trace(struct task_str
 static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
 			  struct pid *pid, struct task_struct *task)
 {
-	struct stack_trace trace;
 	unsigned long *entries;
 	int err;
 
@@ -430,20 +429,17 @@ static int proc_pid_stack(struct seq_fil
 	if (!entries)
 		return -ENOMEM;
 
-	trace.nr_entries	= 0;
-	trace.max_entries	= MAX_STACK_TRACE_DEPTH;
-	trace.entries		= entries;
-	trace.skip		= 0;
-
 	err = lock_trace(task);
 	if (!err) {
-		unsigned int i;
+		unsigned int i, nr_entries;
 
-		save_stack_trace_tsk(task, &trace);
+		nr_entries = stack_trace_save_tsk(task, entries,
+						  MAX_STACK_TRACE_DEPTH, 0);
 
-		for (i = 0; i < trace.nr_entries; i++) {
+		for (i = 0; i < nr_entries; i++) {
 			seq_printf(m, "[<0>] %pB\n", (void *)entries[i]);
 		}
+
 		unlock_trace(task);
 	}
 	kfree(entries);




* [patch V2 06/29] latency_top: Simplify stack trace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (4 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 05/29] proc: Simplify task stack retrieval Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 07/29] mm/slub: Simplify stack trace retrieval Thomas Gleixner
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/latencytop.c |   17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -141,20 +141,6 @@ account_global_scheduler_latency(struct
 	memcpy(&latency_record[i], lat, sizeof(struct latency_record));
 }
 
-/*
- * Iterator to store a backtrace into a latency record entry
- */
-static inline void store_stacktrace(struct task_struct *tsk,
-					struct latency_record *lat)
-{
-	struct stack_trace trace;
-
-	memset(&trace, 0, sizeof(trace));
-	trace.max_entries = LT_BACKTRACEDEPTH;
-	trace.entries = &lat->backtrace[0];
-	save_stack_trace_tsk(tsk, &trace);
-}
-
 /**
  * __account_scheduler_latency - record an occurred latency
  * @tsk - the task struct of the task hitting the latency
@@ -191,7 +177,8 @@ void __sched
 	lat.count = 1;
 	lat.time = usecs;
 	lat.max = usecs;
-	store_stacktrace(tsk, &lat);
+
+	stack_trace_save_tsk(tsk, lat.backtrace, LT_BACKTRACEDEPTH, 0);
 
 	raw_spin_lock_irqsave(&latency_lock, flags);
 




* [patch V2 07/29] mm/slub: Simplify stack trace retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (5 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 06/29] latency_top: Simplify stack trace handling Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 08/29] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
                   ` (21 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Alexey Dobriyan,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: linux-mm@kvack.org
Cc: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
---
 mm/slub.c |   12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -552,18 +552,14 @@ static void set_track(struct kmem_cache
 
 	if (addr) {
 #ifdef CONFIG_STACKTRACE
-		struct stack_trace trace;
+		unsigned int nr_entries;
 
-		trace.nr_entries = 0;
-		trace.max_entries = TRACK_ADDRS_COUNT;
-		trace.entries = p->addrs;
-		trace.skip = 3;
 		metadata_access_enable();
-		save_stack_trace(&trace);
+		nr_entries = stack_trace_save(p->addrs, TRACK_ADDRS_COUNT, 3);
 		metadata_access_disable();
 
-		if (trace.nr_entries < TRACK_ADDRS_COUNT)
-			p->addrs[trace.nr_entries] = 0;
+		if (nr_entries < TRACK_ADDRS_COUNT)
+			p->addrs[nr_entries] = 0;
 #endif
 		p->addr = addr;
 		p->cpu = smp_processor_id();




* [patch V2 08/29] mm/kmemleak: Simplify stacktrace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (6 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 07/29] mm/slub: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 15:17   ` Catalin Marinas
  2019-04-18  8:41 ` [patch V2 09/29] mm/kasan: " Thomas Gleixner
                   ` (20 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Catalin Marinas, linux-mm, Alexey Dobriyan,
	Andrew Morton, Pekka Enberg, David Rientjes, Christoph Lameter,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-mm@kvack.org
---
 mm/kmemleak.c |   24 +++---------------------
 1 file changed, 3 insertions(+), 21 deletions(-)

--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -410,11 +410,6 @@ static void print_unreferenced(struct se
  */
 static void dump_object_info(struct kmemleak_object *object)
 {
-	struct stack_trace trace;
-
-	trace.nr_entries = object->trace_len;
-	trace.entries = object->trace;
-
 	pr_notice("Object 0x%08lx (size %zu):\n",
 		  object->pointer, object->size);
 	pr_notice("  comm \"%s\", pid %d, jiffies %lu\n",
@@ -424,7 +419,7 @@ static void dump_object_info(struct kmem
 	pr_notice("  flags = 0x%x\n", object->flags);
 	pr_notice("  checksum = %u\n", object->checksum);
 	pr_notice("  backtrace:\n");
-	print_stack_trace(&trace, 4);
+	stack_trace_print(object->trace, object->trace_len, 4);
 }
 
 /*
@@ -553,15 +548,7 @@ static struct kmemleak_object *find_and_
  */
 static int __save_stack_trace(unsigned long *trace)
 {
-	struct stack_trace stack_trace;
-
-	stack_trace.max_entries = MAX_TRACE;
-	stack_trace.nr_entries = 0;
-	stack_trace.entries = trace;
-	stack_trace.skip = 2;
-	save_stack_trace(&stack_trace);
-
-	return stack_trace.nr_entries;
+	return stack_trace_save(trace, MAX_TRACE, 2);
 }
 
 /*
@@ -2019,13 +2006,8 @@ early_param("kmemleak", kmemleak_boot_co
 
 static void __init print_log_trace(struct early_log *log)
 {
-	struct stack_trace trace;
-
-	trace.nr_entries = log->trace_len;
-	trace.entries = log->trace;
-
 	pr_notice("Early log backtrace:\n");
-	print_stack_trace(&trace, 2);
+	stack_trace_print(log->trace, log->trace_len, 2);
 }
 
 /*




* [patch V2 09/29] mm/kasan: Simplify stacktrace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (7 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 08/29] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 10:39   ` Andrey Ryabinin
  2019-04-18  8:41 ` [patch V2 10/29] mm/page_owner: Simplify stack trace handling Thomas Gleixner
                   ` (19 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	linux-mm, Alexey Dobriyan, Andrew Morton, Pekka Enberg,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
---
 mm/kasan/common.c |   30 ++++++++++++------------------
 mm/kasan/report.c |    7 ++++---
 2 files changed, 16 insertions(+), 21 deletions(-)

--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
 		 ptr < (unsigned long)&__softirqentry_text_end);
 }
 
-static inline void filter_irq_stacks(struct stack_trace *trace)
+static inline unsigned int filter_irq_stacks(unsigned long *entries,
+					     unsigned int nr_entries)
 {
-	int i;
+	unsigned int i;
 
-	if (!trace->nr_entries)
-		return;
-	for (i = 0; i < trace->nr_entries; i++)
-		if (in_irqentry_text(trace->entries[i])) {
+	for (i = 0; i < nr_entries; i++) {
+		if (in_irqentry_text(entries[i])) {
 			/* Include the irqentry function into the stack. */
-			trace->nr_entries = i + 1;
-			break;
+			return i + 1;
 		}
+	}
+	return nr_entries;
 }
 
 static inline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[KASAN_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = KASAN_STACK_DEPTH,
-		.skip = 0
-	};
+	unsigned int nr_entries;
 
-	save_stack_trace(&trace);
-	filter_irq_stacks(&trace);
-
-	return depot_save_stack(&trace, flags);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	nr_entries = filter_irq_stacks(entries, nr_entries);
+	return stack_depot_save(entries, nr_entries, flags);
 }
 
 static inline void set_track(struct kasan_track *track, gfp_t flags)
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -100,10 +100,11 @@ static void print_track(struct kasan_tra
 {
 	pr_err("%s by task %u:\n", prefix, track->pid);
 	if (track->stack) {
-		struct stack_trace trace;
+		unsigned long *entries;
+		unsigned int nr_entries;
 
-		depot_fetch_stack(track->stack, &trace);
-		print_stack_trace(&trace, 0);
+		nr_entries = stack_depot_fetch(track->stack, &entries);
+		stack_trace_print(entries, nr_entries, 0);
 	} else {
 		pr_err("(stack is not available)\n");
 	}




* [patch V2 10/29] mm/page_owner: Simplify stack trace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (8 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 09/29] mm/kasan: " Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 11/29] fault-inject: Simplify stacktrace retrieval Thomas Gleixner
                   ` (18 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-mm, Mike Rapoport, David Rientjes,
	Andrew Morton, Alexey Dobriyan, Pekka Enberg, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

The original code in all printing functions is really wrong. It allocates a
storage array on the stack which is never used because depot_fetch_stack()
does not store anything in it; instead it overwrites the entries pointer in
the stack_trace struct so that it points to the depot storage.
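
For reference, a condensed sketch of the broken pattern which the hunks
below remove:

    unsigned long entries[PAGE_OWNER_STACK_DEPTH];  /* never written to */
    struct stack_trace trace = {
        .entries     = entries,
        .max_entries = PAGE_OWNER_STACK_DEPTH,
    };

    depot_fetch_stack(handle, &trace);  /* overwrites trace.entries with a
                                         * pointer into the depot storage */
    print_stack_trace(&trace, 0);       /* prints the depot storage; the
                                         * on-stack entries[] is pure waste */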

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 mm/page_owner.c |   79 +++++++++++++++++++-------------------------------------
 1 file changed, 28 insertions(+), 51 deletions(-)

--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -58,15 +58,10 @@ static bool need_page_owner(void)
 static __always_inline depot_stack_handle_t create_dummy_stack(void)
 {
 	unsigned long entries[4];
-	struct stack_trace dummy;
+	unsigned int nr_entries;
 
-	dummy.nr_entries = 0;
-	dummy.max_entries = ARRAY_SIZE(entries);
-	dummy.entries = &entries[0];
-	dummy.skip = 0;
-
-	save_stack_trace(&dummy);
-	return depot_save_stack(&dummy, GFP_KERNEL);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	return stack_depot_save(entries, nr_entries, GFP_KERNEL);
 }
 
 static noinline void register_dummy_stack(void)
@@ -120,46 +115,39 @@ void __reset_page_owner(struct page *pag
 	}
 }
 
-static inline bool check_recursive_alloc(struct stack_trace *trace,
-					unsigned long ip)
+static inline bool check_recursive_alloc(unsigned long *entries,
+					 unsigned int nr_entries,
+					 unsigned long ip)
 {
-	int i;
+	unsigned int i;
 
-	if (!trace->nr_entries)
-		return false;
-
-	for (i = 0; i < trace->nr_entries; i++) {
-		if (trace->entries[i] == ip)
+	for (i = 0; i < nr_entries; i++) {
+		if (entries[i] == ip)
 			return true;
 	}
-
 	return false;
 }
 
 static noinline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 2
-	};
 	depot_stack_handle_t handle;
+	unsigned int nr_entries;
 
-	save_stack_trace(&trace);
+	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
 
 	/*
-	 * We need to check recursion here because our request to stackdepot
-	 * could trigger memory allocation to save new entry. New memory
-	 * allocation would reach here and call depot_save_stack() again
-	 * if we don't catch it. There is still not enough memory in stackdepot
-	 * so it would try to allocate memory again and loop forever.
+	 * We need to check recursion here because our request to
+	 * stackdepot could trigger memory allocation to save new
+	 * entry. New memory allocation would reach here and call
+	 * stack_depot_save_entries() again if we don't catch it. There is
+	 * still not enough memory in stackdepot so it would try to
+	 * allocate memory again and loop forever.
 	 */
-	if (check_recursive_alloc(&trace, _RET_IP_))
+	if (check_recursive_alloc(entries, nr_entries, _RET_IP_))
 		return dummy_handle;
 
-	handle = depot_save_stack(&trace, flags);
+	handle = stack_depot_save(entries, nr_entries, flags);
 	if (!handle)
 		handle = failure_handle;
 
@@ -337,16 +325,10 @@ print_page_owner(char __user *buf, size_
 		struct page *page, struct page_owner *page_owner,
 		depot_stack_handle_t handle)
 {
-	int ret;
-	int pageblock_mt, page_mt;
+	int ret, pageblock_mt, page_mt;
+	unsigned long *entries;
+	unsigned int nr_entries;
 	char *kbuf;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 
 	count = min_t(size_t, count, PAGE_SIZE);
 	kbuf = kmalloc(count, GFP_KERNEL);
@@ -375,8 +357,8 @@ print_page_owner(char __user *buf, size_
 	if (ret >= count)
 		goto err;
 
-	depot_fetch_stack(handle, &trace);
-	ret += snprint_stack_trace(kbuf + ret, count - ret, &trace, 0);
+	nr_entries = stack_depot_fetch(handle, &entries);
+	ret += stack_trace_snprint(kbuf + ret, count - ret, entries, nr_entries, 0);
 	if (ret >= count)
 		goto err;
 
@@ -407,14 +389,9 @@ void __dump_page_owner(struct page *page
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
 	struct page_owner *page_owner;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 	depot_stack_handle_t handle;
+	unsigned long *entries;
+	unsigned int nr_entries;
 	gfp_t gfp_mask;
 	int mt;
 
@@ -438,10 +415,10 @@ void __dump_page_owner(struct page *page
 		return;
 	}
 
-	depot_fetch_stack(handle, &trace);
+	nr_entries = stack_depot_fetch(handle, &entries);
 	pr_alert("page allocated via order %u, migratetype %s, gfp_mask %#x(%pGg)\n",
 		 page_owner->order, migratetype_names[mt], gfp_mask, &gfp_mask);
-	print_stack_trace(&trace, 0);
+	stack_trace_print(entries, nr_entries, 0);
 
 	if (page_owner->last_migrate_reason != -1)
 		pr_alert("page has been migrated, last migrate reason: %s\n",




* [patch V2 11/29] fault-inject: Simplify stacktrace retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (9 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 10/29] mm/page_owner: Simplify stack trace handling Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 12/29] dma/debug: Simplify stacktrace retrieval Thomas Gleixner
                   ` (17 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Akinobu Mita, Alexey Dobriyan,
	Andrew Morton, Pekka Enberg, linux-mm, David Rientjes,
	Christoph Lameter, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
---
 lib/fault-inject.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

--- a/lib/fault-inject.c
+++ b/lib/fault-inject.c
@@ -65,22 +65,16 @@ static bool fail_task(struct fault_attr
 
 static bool fail_stacktrace(struct fault_attr *attr)
 {
-	struct stack_trace trace;
 	int depth = attr->stacktrace_depth;
 	unsigned long entries[MAX_STACK_TRACE_DEPTH];
-	int n;
+	int n, nr_entries;
 	bool found = (attr->require_start == 0 && attr->require_end == ULONG_MAX);
 
 	if (depth == 0)
 		return found;
 
-	trace.nr_entries = 0;
-	trace.entries = entries;
-	trace.max_entries = depth;
-	trace.skip = 1;
-
-	save_stack_trace(&trace);
-	for (n = 0; n < trace.nr_entries; n++) {
+	nr_entries = stack_trace_save(entries, depth, 1);
+	for (n = 0; n < nr_entries; n++) {
 		if (attr->reject_start <= entries[n] &&
 			       entries[n] < attr->reject_end)
 			return false;




* [patch V2 12/29] dma/debug: Simplify stacktrace retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (10 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 11/29] fault-inject: Simplify stacktrace retrieval Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-19  7:05   ` Christoph Hellwig
  2019-04-18  8:41 ` [patch V2 13/29] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
                   ` (16 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Alexey Dobriyan, Andrew Morton, Pekka Enberg,
	linux-mm, David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: iommu@lists.linux-foundation.org
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
---
 kernel/dma/debug.c |   13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -89,8 +89,8 @@ struct dma_debug_entry {
 	int		 sg_mapped_ents;
 	enum map_err_types  map_err_type;
 #ifdef CONFIG_STACKTRACE
-	struct		 stack_trace stacktrace;
-	unsigned long	 st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
+	unsigned int	stack_len;
+	unsigned long	stack_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
 #endif
 };
 
@@ -174,7 +174,7 @@ static inline void dump_entry_trace(stru
 #ifdef CONFIG_STACKTRACE
 	if (entry) {
 		pr_warning("Mapped at:\n");
-		print_stack_trace(&entry->stacktrace, 0);
+		stack_trace_print(entry->stack_entries, entry->stack_len, 0);
 	}
 #endif
 }
@@ -704,12 +704,9 @@ static struct dma_debug_entry *dma_entry
 	spin_unlock_irqrestore(&free_entries_lock, flags);
 
 #ifdef CONFIG_STACKTRACE
-	entry->stacktrace.max_entries = DMA_DEBUG_STACKTRACE_ENTRIES;
-	entry->stacktrace.entries = entry->st_entries;
-	entry->stacktrace.skip = 1;
-	save_stack_trace(&entry->stacktrace);
+	entry->stack_len = stack_trace_save(entry->stack_entries,
+					    ARRAY_SIZE(entry->stack_entries), 1);
 #endif
-
 	return entry;
 }
 




* [patch V2 13/29] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (11 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 12/29] dma/debug: Simplify stacktrace retrieval Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 14/29] dm bufio: " Thomas Gleixner
                   ` (15 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Johannes Thumshirn, David Sterba,
	Chris Mason, Josef Bacik, linux-btrfs, Alexey Dobriyan,
	Andrew Morton, Pekka Enberg, linux-mm, David Rientjes,
	Christoph Lameter, Catalin Marinas, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita, iommu,
	Robin Murphy, Christoph Hellwig, Marek Szyprowski, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Acked-by: David Sterba <dsterba@suse.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: linux-btrfs@vger.kernel.org
---
 fs/btrfs/ref-verify.c |   15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

--- a/fs/btrfs/ref-verify.c
+++ b/fs/btrfs/ref-verify.c
@@ -205,28 +205,17 @@ static struct root_entry *lookup_root_en
 #ifdef CONFIG_STACKTRACE
 static void __save_stack_trace(struct ref_action *ra)
 {
-	struct stack_trace stack_trace;
-
-	stack_trace.max_entries = MAX_TRACE;
-	stack_trace.nr_entries = 0;
-	stack_trace.entries = ra->trace;
-	stack_trace.skip = 2;
-	save_stack_trace(&stack_trace);
-	ra->trace_len = stack_trace.nr_entries;
+	ra->trace_len = stack_trace_save(ra->trace, MAX_TRACE, 2);
 }
 
 static void __print_stack_trace(struct btrfs_fs_info *fs_info,
 				struct ref_action *ra)
 {
-	struct stack_trace trace;
-
 	if (ra->trace_len == 0) {
 		btrfs_err(fs_info, "  ref-verify: no stacktrace");
 		return;
 	}
-	trace.nr_entries = ra->trace_len;
-	trace.entries = ra->trace;
-	print_stack_trace(&trace, 2);
+	stack_trace_print(ra->trace, ra->trace_len, 2);
 }
 #else
 static void inline __save_stack_trace(struct ref_action *ra)




* [patch V2 14/29] dm bufio: Simplify stack trace retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (12 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 13/29] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 10:44   ` Alexander Potapenko
  2019-04-18  8:41 ` [patch V2 15/29] dm persistent data: Simplify stack trace handling Thomas Gleixner
                   ` (14 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, dm-devel, Mike Snitzer, Alasdair Kergon,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
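
With the array based interface the only per-buffer debug state besides the
fixed entries array is the recorded length, so resetting a freshly
allocated buffer becomes a single assignment instead of a memset() of a
struct stack_trace. An illustrative subset (the real fields live inside
struct dm_buffer):

    #define MAX_STACK 10

    struct dm_buffer_dbg {                  /* sketch, not the real struct */
            unsigned int  stack_len;
            unsigned long stack_entries[MAX_STACK];
    };

    static void buffer_reset_stack(struct dm_buffer_dbg *b)
    {
            b->stack_len = 0;   /* replaces memset(&b->stack_trace, 0, ...) */
    }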

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: dm-devel@redhat.com
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Alasdair Kergon <agk@redhat.com>
---
 drivers/md/dm-bufio.c |   15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -150,7 +150,7 @@ struct dm_buffer {
 	void (*end_io)(struct dm_buffer *, blk_status_t);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 #define MAX_STACK 10
-	struct stack_trace stack_trace;
+	unsigned int stack_len;
 	unsigned long stack_entries[MAX_STACK];
 #endif
 };
@@ -232,11 +232,7 @@ static DEFINE_MUTEX(dm_bufio_clients_loc
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 static void buffer_record_stack(struct dm_buffer *b)
 {
-	b->stack_trace.nr_entries = 0;
-	b->stack_trace.max_entries = MAX_STACK;
-	b->stack_trace.entries = b->stack_entries;
-	b->stack_trace.skip = 2;
-	save_stack_trace(&b->stack_trace);
+	b->stack_len = stack_trace_save(b->stack_entries, MAX_STACK, 2);
 }
 #endif
 
@@ -438,7 +434,7 @@ static struct dm_buffer *alloc_buffer(st
 	adjust_total_allocated(b->data_mode, (long)c->block_size);
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	memset(&b->stack_trace, 0, sizeof(b->stack_trace));
+	b->stack_len = 0;
 #endif
 	return b;
 }
@@ -1520,8 +1516,9 @@ static void drop_buffers(struct dm_bufio
 			DMERR("leaked buffer %llx, hold count %u, list %d",
 			      (unsigned long long)b->block, b->hold_count, i);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-			print_stack_trace(&b->stack_trace, 1);
-			b->hold_count = 0; /* mark unclaimed to avoid BUG_ON below */
+			stack_trace_print(b->stack_entries, b->stack_len, 1);
+			/* mark unclaimed to avoid BUG_ON below */
+			b->hold_count = 0;
 #endif
 		}
 




* [patch V2 15/29] dm persistent data: Simplify stack trace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (13 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 14/29] dm bufio: " Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 16/29] drm: Simplify stacktrace handling Thomas Gleixner
                   ` (13 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, dm-devel, Mike Snitzer, Alasdair Kergon,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface. This requires less storage and removes
one level of indirection.
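
The pattern bundles the entry count and the fixed storage into one small
per-holder struct. A self-contained sketch (struct and constant as in the
patch, helper names illustrative):

    #define MAX_STACK 10

    struct stack_store {
            unsigned int  nr_entries;
            unsigned long entries[MAX_STACK];
    };

    static void store_holder_trace(struct stack_store *t)
    {
            /* skipnr == 2: leave out the recording helpers themselves */
            t->nr_entries = stack_trace_save(t->entries, MAX_STACK, 2);
    }

    static void print_holder_trace(struct stack_store *t)
    {
            stack_trace_print(t->entries, t->nr_entries, 4);
    }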

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: dm-devel@redhat.com
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Alasdair Kergon <agk@redhat.com>
---
 drivers/md/persistent-data/dm-block-manager.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

--- a/drivers/md/persistent-data/dm-block-manager.c
+++ b/drivers/md/persistent-data/dm-block-manager.c
@@ -35,7 +35,10 @@
 #define MAX_HOLDERS 4
 #define MAX_STACK 10
 
-typedef unsigned long stack_entries[MAX_STACK];
+struct stack_store {
+	unsigned int	nr_entries;
+	unsigned long	entries[MAX_STACK];
+};
 
 struct block_lock {
 	spinlock_t lock;
@@ -44,8 +47,7 @@ struct block_lock {
 	struct task_struct *holders[MAX_HOLDERS];
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	struct stack_trace traces[MAX_HOLDERS];
-	stack_entries entries[MAX_HOLDERS];
+	struct stack_store traces[MAX_HOLDERS];
 #endif
 };
 
@@ -73,7 +75,7 @@ static void __add_holder(struct block_lo
 {
 	unsigned h = __find_holder(lock, NULL);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	struct stack_trace *t;
+	struct stack_store *t;
 #endif
 
 	get_task_struct(task);
@@ -81,11 +83,7 @@ static void __add_holder(struct block_lo
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 	t = lock->traces + h;
-	t->nr_entries = 0;
-	t->max_entries = MAX_STACK;
-	t->entries = lock->entries[h];
-	t->skip = 2;
-	save_stack_trace(t);
+	t->nr_entries = stack_trace_save(t->entries, MAX_STACK, 2);
 #endif
 }
 
@@ -106,7 +104,8 @@ static int __check_holder(struct block_l
 			DMERR("recursive lock detected in metadata");
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 			DMERR("previously held here:");
-			print_stack_trace(lock->traces + i, 4);
+			stack_trace_print(lock->traces[i].entries,
+					  lock->traces[i].nr_entries, 4);
 
 			DMERR("subsequent acquisition attempted here:");
 			dump_stack();




* [patch V2 16/29] drm: Simplify stacktrace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (14 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 15/29] dm persistent data: Simplify stack trace handling Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-23  7:36   ` Daniel Vetter
  2019-04-18  8:41 ` [patch V2 17/29] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
                   ` (12 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

The original code in all printing functions is really wrong: it allocates a
storage array on the stack which is never used, because depot_fetch_stack()
does not store anything in it. Instead, depot_fetch_stack() overwrites the
entries pointer in the stack_trace struct so that it points to the depot's
own storage.
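
For reference: stack_depot_fetch() returns the entry count and hands back a
pointer into the depot's own storage, which is why the printing side needs
no on-stack array at all. A sketch of the save/fetch round trip (array size
and helper names are illustrative):

    static depot_stack_handle_t save_to_depot(void)
    {
            unsigned long entries[16];
            unsigned int n;

            n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
            /* returns a zero handle when depot allocation fails */
            return stack_depot_save(entries, n, GFP_NOWAIT);
    }

    static void print_from_depot(depot_stack_handle_t handle, char *buf, int sz)
    {
            unsigned long *entries;         /* points into depot storage */
            unsigned int n;

            n = stack_depot_fetch(handle, &entries);
            stack_trace_snprint(buf, sz, entries, n, 0);
    }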

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: intel-gfx@lists.freedesktop.org
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: David Airlie <airlied@linux.ie>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/drm_mm.c                |   22 +++++++---------------
 drivers/gpu/drm/i915/i915_vma.c         |   11 ++++-------
 drivers/gpu/drm/i915/intel_runtime_pm.c |   21 +++++++--------------
 3 files changed, 18 insertions(+), 36 deletions(-)

--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -106,22 +106,19 @@
 static noinline void save_stack(struct drm_mm_node *node)
 {
 	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = STACKDEPTH,
-		.skip = 1
-	};
+	unsigned int n;
 
-	save_stack_trace(&trace);
+	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
 
 	/* May be called under spinlock, so avoid sleeping */
-	node->stack = depot_save_stack(&trace, GFP_NOWAIT);
+	node->stack = stack_depot_save(entries, n, GFP_NOWAIT);
 }
 
 static void show_leaks(struct drm_mm *mm)
 {
 	struct drm_mm_node *node;
-	unsigned long entries[STACKDEPTH];
+	unsigned long *entries;
+	unsigned int nr_entries;
 	char *buf;
 
 	buf = kmalloc(BUFSZ, GFP_KERNEL);
@@ -129,19 +126,14 @@ static void show_leaks(struct drm_mm *mm
 		return;
 
 	list_for_each_entry(node, drm_mm_nodes(mm), node_list) {
-		struct stack_trace trace = {
-			.entries = entries,
-			.max_entries = STACKDEPTH
-		};
-
 		if (!node->stack) {
 			DRM_ERROR("node [%08llx + %08llx]: unknown owner\n",
 				  node->start, node->size);
 			continue;
 		}
 
-		depot_fetch_stack(node->stack, &trace);
-		snprint_stack_trace(buf, BUFSZ, &trace, 0);
+		nr_entries = stack_depot_fetch(node->stack, &entries);
+		stack_trace_snprint(buf, BUFSZ, entries, nr_entries, 0);
 		DRM_ERROR("node [%08llx + %08llx]: inserted at\n%s",
 			  node->start, node->size, buf);
 	}
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -36,11 +36,8 @@
 
 static void vma_print_allocator(struct i915_vma *vma, const char *reason)
 {
-	unsigned long entries[12];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-	};
+	unsigned long *entries;
+	unsigned int nr_entries;
 	char buf[512];
 
 	if (!vma->node.stack) {
@@ -49,8 +46,8 @@ static void vma_print_allocator(struct i
 		return;
 	}
 
-	depot_fetch_stack(vma->node.stack, &trace);
-	snprint_stack_trace(buf, sizeof(buf), &trace, 0);
+	nr_entries = stack_depot_fetch(vma->node.stack, &entries);
+	stack_trace_snprint(buf, sizeof(buf), entries, nr_entries, 0);
 	DRM_DEBUG_DRIVER("vma.node [%08llx + %08llx] %s: inserted at %s\n",
 			 vma->node.start, vma->node.size, reason, buf);
 }
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -60,27 +60,20 @@
 static noinline depot_stack_handle_t __save_depot_stack(void)
 {
 	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-		.skip = 1,
-	};
+	unsigned int n;
 
-	save_stack_trace(&trace);
-	return depot_save_stack(&trace, GFP_NOWAIT | __GFP_NOWARN);
+	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
+	return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN);
 }
 
 static void __print_depot_stack(depot_stack_handle_t stack,
 				char *buf, int sz, int indent)
 {
-	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-	};
+	unsigned long *entries;
+	unsigned int nr_entries;
 
-	depot_fetch_stack(stack, &trace);
-	snprint_stack_trace(buf, sz, &trace, indent);
+	nr_entries = stack_depot_fetch(stack, &entries);
+	stack_trace_snprint(buf, sz, entries, nr_entries, indent);
 }
 
 static void init_intel_runtime_pm_wakeref(struct drm_i915_private *i915)




* [patch V2 17/29] lockdep: Remove unused trace argument from print_circular_bug()
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (15 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 16/29] drm: Simplify stacktrace handling Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 18/29] lockdep: Move stack trace logic into check_prev_add() Thomas Gleixner
                   ` (11 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/lockdep.c |    9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1522,10 +1522,9 @@ static inline int class_equal(struct loc
 }
 
 static noinline int print_circular_bug(struct lock_list *this,
-				struct lock_list *target,
-				struct held_lock *check_src,
-				struct held_lock *check_tgt,
-				struct stack_trace *trace)
+				       struct lock_list *target,
+				       struct held_lock *check_src,
+				       struct held_lock *check_tgt)
 {
 	struct task_struct *curr = current;
 	struct lock_list *parent;
@@ -2206,7 +2205,7 @@ check_prev_add(struct task_struct *curr,
 			 */
 			save(trace);
 		}
-		return print_circular_bug(&this, target_entry, next, prev, trace);
+		return print_circular_bug(&this, target_entry, next, prev);
 	}
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);




* [patch V2 18/29] lockdep: Move stack trace logic into check_prev_add()
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (16 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 17/29] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-24 19:45   ` Peter Zijlstra
  2019-04-18  8:41 ` [patch V2 19/29] lockdep: Simplify stack trace handling Thomas Gleixner
                   ` (10 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

There is only one caller of check_prev_add(), and it hands in a zeroed
struct stack_trace and a function pointer to save_stack(). Inside
check_prev_add() the stack_trace struct is checked for being empty, which
is always true. Based on that check, one code path stores a stack trace
which is never used. The comment there does not make sense either. It is
all leftover from historical lockdep code (cross release).

Move the variable into check_prev_add() itself and clean up the nonsensical
checks and the pointless stack trace recording.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/lockdep.c |   30 ++++++++----------------------
 1 file changed, 8 insertions(+), 22 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2158,10 +2158,10 @@ check_deadlock(struct task_struct *curr,
  */
 static int
 check_prev_add(struct task_struct *curr, struct held_lock *prev,
-	       struct held_lock *next, int distance, struct stack_trace *trace,
-	       int (*save)(struct stack_trace *trace))
+	       struct held_lock *next, int distance)
 {
 	struct lock_list *uninitialized_var(target_entry);
+	struct stack_trace trace;
 	struct lock_list *entry;
 	struct lock_list this;
 	int ret;
@@ -2196,17 +2196,8 @@ check_prev_add(struct task_struct *curr,
 	this.class = hlock_class(next);
 	this.parent = NULL;
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
-	if (unlikely(!ret)) {
-		if (!trace->entries) {
-			/*
-			 * If @save fails here, the printing might trigger
-			 * a WARN but because of the !nr_entries it should
-			 * not do bad things.
-			 */
-			save(trace);
-		}
+	if (unlikely(!ret))
 		return print_circular_bug(&this, target_entry, next, prev);
-	}
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);
 
@@ -2253,7 +2244,7 @@ check_prev_add(struct task_struct *curr,
 		return print_bfs_bug(ret);
 
 
-	if (!trace->entries && !save(trace))
+	if (!save_trace(&trace))
 		return 0;
 
 	/*
@@ -2262,14 +2253,14 @@ check_prev_add(struct task_struct *curr,
 	 */
 	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
 			       &hlock_class(prev)->locks_after,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance, &trace);
 
 	if (!ret)
 		return 0;
 
 	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
 			       &hlock_class(next)->locks_before,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance, &trace);
 	if (!ret)
 		return 0;
 
@@ -2287,12 +2278,6 @@ check_prevs_add(struct task_struct *curr
 {
 	int depth = curr->lockdep_depth;
 	struct held_lock *hlock;
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.max_entries = 0,
-		.entries = NULL,
-		.skip = 0,
-	};
 
 	/*
 	 * Debugging checks.
@@ -2318,7 +2303,8 @@ check_prevs_add(struct task_struct *curr
 		 * added:
 		 */
 		if (hlock->read != 2 && hlock->check) {
-			int ret = check_prev_add(curr, hlock, next, distance, &trace, save_trace);
+			int ret = check_prev_add(curr, hlock, next, distance);
+
 			if (!ret)
 				return 0;
 




* [patch V2 19/29] lockdep: Simplify stack trace handling
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (17 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 18/29] lockdep: Move stack trace logic into check_prev_add() Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-24 19:45   ` Peter Zijlstra
  2019-04-18  8:41 ` [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
                   ` (9 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces and storing the information in a small lockdep
specific data structure.
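
The new struct lock_trace stores no pointer; it records where in lockdep's
shared stack_trace[] array its entries begin. A sketch of the indexing
scheme, mirroring the print_lock_trace() helper added below:

    struct lock_trace {
            unsigned int nr_entries;
            unsigned int offset;    /* first entry's index in stack_trace[] */
    };

    /* lockdep's shared storage, simplified */
    static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];
    static unsigned int nr_stack_trace_entries;

    static void print_lock_trace(struct lock_trace *trace, unsigned int spaces)
    {
            stack_trace_print(stack_trace + trace->offset, trace->nr_entries,
                              spaces);
    }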

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/lockdep.h  |    9 +++++++--
 kernel/locking/lockdep.c |   44 +++++++++++++++++++++++++-------------------
 2 files changed, 32 insertions(+), 21 deletions(-)

--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -66,6 +66,11 @@ struct lock_class_key {
 
 extern struct lock_class_key __lockdep_no_validate__;
 
+struct lock_trace {
+	unsigned int		nr_entries;
+	unsigned int		offset;
+};
+
 #define LOCKSTAT_POINTS		4
 
 /*
@@ -100,7 +105,7 @@ struct lock_class {
 	 * IRQ/softirq usage tracking bits:
 	 */
 	unsigned long			usage_mask;
-	struct stack_trace		usage_traces[XXX_LOCK_USAGE_STATES];
+	struct lock_trace		usage_traces[XXX_LOCK_USAGE_STATES];
 
 	/*
 	 * Generation counter, when doing certain classes of graph walking,
@@ -188,7 +193,7 @@ struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
 	struct lock_class		*links_to;
-	struct stack_trace		trace;
+	struct lock_trace		trace;
 	int				distance;
 
 	/*
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -434,18 +434,14 @@ static void print_lockdep_off(const char
 #endif
 }
 
-static int save_trace(struct stack_trace *trace)
+static int save_trace(struct lock_trace *trace)
 {
-	trace->nr_entries = 0;
-	trace->max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
-	trace->entries = stack_trace + nr_stack_trace_entries;
-
-	trace->skip = 3;
-
-	save_stack_trace(trace);
-
-	trace->max_entries = trace->nr_entries;
+	unsigned long *entries = stack_trace + nr_stack_trace_entries;
+	unsigned int max_entries;
 
+	trace->offset = nr_stack_trace_entries;
+	max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
+	trace->nr_entries = stack_trace_save(entries, max_entries, 3);
 	nr_stack_trace_entries += trace->nr_entries;
 
 	if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES-1) {
@@ -1196,7 +1192,7 @@ static struct lock_list *alloc_list_entr
 static int add_lock_to_list(struct lock_class *this,
 			    struct lock_class *links_to, struct list_head *head,
 			    unsigned long ip, int distance,
-			    struct stack_trace *trace)
+			    struct lock_trace *trace)
 {
 	struct lock_list *entry;
 	/*
@@ -1415,6 +1411,13 @@ static inline int __bfs_backwards(struct
  * checking.
  */
 
+static void print_lock_trace(struct lock_trace *trace, unsigned int spaces)
+{
+	unsigned long *entries = stack_trace + trace->offset;
+
+	stack_trace_print(entries, trace->nr_entries, spaces);
+}
+
 /*
  * Print a dependency chain entry (this is only done when a deadlock
  * has been detected):
@@ -1427,8 +1430,7 @@ print_circular_bug_entry(struct lock_lis
 	printk("\n-> #%u", depth);
 	print_lock_name(target->class);
 	printk(KERN_CONT ":\n");
-	print_stack_trace(&target->trace, 6);
-
+	print_lock_trace(&target->trace, 6);
 	return 0;
 }
 
@@ -1740,7 +1742,7 @@ static void print_lock_class_header(stru
 
 			len += printk("%*s   %s", depth, "", usage_str[bit]);
 			len += printk(KERN_CONT " at:\n");
-			print_stack_trace(class->usage_traces + bit, len);
+			print_lock_trace(class->usage_traces + bit, len);
 		}
 	}
 	printk("%*s }\n", depth, "");
@@ -1765,7 +1767,7 @@ print_shortest_lock_dependencies(struct
 	do {
 		print_lock_class_header(entry->class, depth);
 		printk("%*s ... acquired at:\n", depth, "");
-		print_stack_trace(&entry->trace, 2);
+		print_lock_trace(&entry->trace, 2);
 		printk("\n");
 
 		if (depth == 0 && (entry != root)) {
@@ -1878,14 +1880,14 @@ print_bad_irq_dependency(struct task_str
 	print_lock_name(backwards_entry->class);
 	pr_warn("\n... which became %s-irq-safe at:\n", irqclass);
 
-	print_stack_trace(backwards_entry->class->usage_traces + bit1, 1);
+	print_lock_trace(backwards_entry->class->usage_traces + bit1, 1);
 
 	pr_warn("\nto a %s-irq-unsafe lock:\n", irqclass);
 	print_lock_name(forwards_entry->class);
 	pr_warn("\n... which became %s-irq-unsafe at:\n", irqclass);
 	pr_warn("...");
 
-	print_stack_trace(forwards_entry->class->usage_traces + bit2, 1);
+	print_lock_trace(forwards_entry->class->usage_traces + bit2, 1);
 
 	pr_warn("\nother info that might help us debug this:\n\n");
 	print_irq_lock_scenario(backwards_entry, forwards_entry,
@@ -2161,9 +2163,9 @@ check_prev_add(struct task_struct *curr,
 	       struct held_lock *next, int distance)
 {
 	struct lock_list *uninitialized_var(target_entry);
-	struct stack_trace trace;
 	struct lock_list *entry;
 	struct lock_list this;
+	struct lock_trace trace;
 	int ret;
 
 	if (!hlock_class(prev)->key || !hlock_class(next)->key) {
@@ -2705,6 +2707,10 @@ static inline int validate_chain(struct
 {
 	return 1;
 }
+
+static void print_lock_trace(struct lock_trace *trace, unsigned int spaces)
+{
+}
 #endif
 
 /*
@@ -2801,7 +2807,7 @@ print_usage_bug(struct task_struct *curr
 	print_lock(this);
 
 	pr_warn("{%s} state was registered at:\n", usage_str[prev_bit]);
-	print_stack_trace(hlock_class(this)->usage_traces + prev_bit, 1);
+	print_lock_trace(hlock_class(this)->usage_traces + prev_bit, 1);
 
 	print_irqtrace_events(curr);
 	pr_warn("\nother info that might help us debug this:\n");




* [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (18 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 19/29] lockdep: Simplify stack trace handling Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 13:40   ` Steven Rostedt
  2019-04-18  8:41 ` [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
                   ` (8 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

The indirection through struct stack_trace is not necessary at all. Use the
storage array based interface.
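
Note that the memset() survives the conversion: the entries buffer doubles
as the fixed size histogram key, so the tail behind the saved entries
presumably has to stay zeroed for key comparisons to be stable. The
resulting sequence is just:

    memset(entries, 0, HIST_STACKTRACE_SIZE);  /* key compare covers the whole buffer */
    stack_trace_save(entries, HIST_STACKTRACE_DEPTH, HIST_STACKTRACE_SKIP);
    key = entries;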

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_events_hist.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5186,7 +5186,6 @@ static void event_hist_trigger(struct ev
 	u64 var_ref_vals[TRACING_MAP_VARS_MAX];
 	char compound_key[HIST_KEY_SIZE_MAX];
 	struct tracing_map_elt *elt = NULL;
-	struct stack_trace stacktrace;
 	struct hist_field *key_field;
 	u64 field_contents;
 	void *key = NULL;
@@ -5198,14 +5197,9 @@ static void event_hist_trigger(struct ev
 		key_field = hist_data->fields[i];
 
 		if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
-			stacktrace.max_entries = HIST_STACKTRACE_DEPTH;
-			stacktrace.entries = entries;
-			stacktrace.nr_entries = 0;
-			stacktrace.skip = HIST_STACKTRACE_SKIP;
-
-			memset(stacktrace.entries, 0, HIST_STACKTRACE_SIZE);
-			save_stack_trace(&stacktrace);
-
+			memset(entries, 0, HIST_STACKTRACE_SIZE);
+			stack_trace_save(entries, HIST_STACKTRACE_DEPTH,
+					 HIST_STACKTRACE_SKIP);
 			key = entries;
 		} else {
 			field_contents = key_field->fn(key_field, elt, rbe, rec);




* [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (19 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 14:53   ` Steven Rostedt
  2019-04-18  8:41 ` [patch V2 22/29] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
                   ` (7 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

The per cpu stack trace buffer usage pattern is odd at best. The buffer has
room for 512 stack trace entries on 64-bit and 1024 on 32-bit. When
interrupts or exceptions nest after the per cpu buffer was acquired, the
stacktrace length is hardcoded to 8 entries. 512/1024 stack trace entries
are unrealistic for kernel stacks, so most of the buffer is a complete
waste.

Split the buffer into chunks of 64 stack entries, which is plenty. This
allows nesting contexts (interrupts, exceptions) to utilize the per cpu
buffer for stack retrieval and avoids the fixed length allocation along
with the conditional execution paths.
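
For scale: on 64-bit with 4k pages one struct ftrace_stack is 64 * 8 = 512
bytes, so the per cpu page holds 4096 / 512 = 8 chunks, one per nesting
level. The reserve/select pattern from the patch then boils down to this
sketch (error unwinding trimmed):

    preempt_disable_notrace();
    stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
    if (!WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING)) {
            /* stackidx is 1-based while the reservation is held */
            fstack = this_cpu_ptr(ftrace_stacks.stacks) + stackidx - 1;
            /* ... save the trace into fstack->calls ... */
    }
    __this_cpu_dec(ftrace_stack_reserve);
    preempt_enable_notrace();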

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   77 +++++++++++++++++++++++++--------------------------
 1 file changed, 39 insertions(+), 38 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2749,12 +2749,21 @@ trace_function(struct trace_array *tr,
 
 #ifdef CONFIG_STACKTRACE
 
-#define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
+/* 64 entries for kernel stacks are plenty */
+#define FTRACE_KSTACK_ENTRIES	64
+
 struct ftrace_stack {
-	unsigned long		calls[FTRACE_STACK_MAX_ENTRIES];
+	unsigned long		calls[FTRACE_KSTACK_ENTRIES];
 };
 
-static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
+/* This allows 8 level nesting which is plenty */
+#define FTRACE_KSTACK_NESTING	(PAGE_SIZE / sizeof(struct ftrace_stack))
+
+struct ftrace_stacks {
+	struct ftrace_stack	stacks[FTRACE_KSTACK_NESTING];
+};
+
+static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
 static DEFINE_PER_CPU(int, ftrace_stack_reserve);
 
 static void __ftrace_trace_stack(struct ring_buffer *buffer,
@@ -2763,10 +2772,11 @@ static void __ftrace_trace_stack(struct
 {
 	struct trace_event_call *call = &event_kernel_stack;
 	struct ring_buffer_event *event;
+	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	struct stack_trace trace;
-	int use_stack;
-	int size = FTRACE_STACK_ENTRIES;
+	int size = FTRACE_KSTACK_ENTRIES;
+	int stackidx;
 
 	trace.nr_entries	= 0;
 	trace.skip		= skip;
@@ -2788,29 +2798,32 @@ static void __ftrace_trace_stack(struct
 	 */
 	preempt_disable_notrace();
 
-	use_stack = __this_cpu_inc_return(ftrace_stack_reserve);
+	stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
+
+	/* This should never happen. If it does, yell once and skip */
+	if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
+		goto out;
+
 	/*
-	 * We don't need any atomic variables, just a barrier.
-	 * If an interrupt comes in, we don't care, because it would
-	 * have exited and put the counter back to what we want.
-	 * We just need a barrier to keep gcc from moving things
-	 * around.
+	 * The above __this_cpu_inc_return() is 'atomic' cpu local. An
+	 * interrupt will either see the value pre increment or post
+	 * increment. If the interrupt happens pre increment it will have
+	 * restored the counter when it returns.  We just need a barrier to
+	 * keep gcc from moving things around.
 	 */
 	barrier();
-	if (use_stack == 1) {
-		trace.entries		= this_cpu_ptr(ftrace_stack.calls);
-		trace.max_entries	= FTRACE_STACK_MAX_ENTRIES;
-
-		if (regs)
-			save_stack_trace_regs(regs, &trace);
-		else
-			save_stack_trace(&trace);
-
-		if (trace.nr_entries > size)
-			size = trace.nr_entries;
-	} else
-		/* From now on, use_stack is a boolean */
-		use_stack = 0;
+
+	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
+	trace.entries		= fstack->calls;
+	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
+
+	if (regs)
+		save_stack_trace_regs(regs, &trace);
+	else
+		save_stack_trace(&trace);
+
+	if (trace.nr_entries > size)
+		size = trace.nr_entries;
 
 	size *= sizeof(unsigned long);
 
@@ -2820,19 +2833,7 @@ static void __ftrace_trace_stack(struct
 		goto out;
 	entry = ring_buffer_event_data(event);
 
-	memset(&entry->caller, 0, size);
-
-	if (use_stack)
-		memcpy(&entry->caller, trace.entries,
-		       trace.nr_entries * sizeof(unsigned long));
-	else {
-		trace.max_entries	= FTRACE_STACK_ENTRIES;
-		trace.entries		= entry->caller;
-		if (regs)
-			save_stack_trace_regs(regs, &trace);
-		else
-			save_stack_trace(&trace);
-	}
+	memcpy(&entry->caller, trace.entries, size);
 
 	entry->size = trace.nr_entries;
 




* [patch V2 22/29] tracing: Make ftrace_trace_userstack() static and conditional
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (20 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-19 13:28   ` Steven Rostedt
  2019-04-18  8:41 ` [patch V2 23/29] tracing: Simplify stack trace retrieval Thomas Gleixner
                   ` (6 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

It's only used in trace.c and there is absolutely no point in compiling it
in when user space stack traces are not supported.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   14 ++++++++------
 kernel/trace/trace.h |    8 --------
 2 files changed, 8 insertions(+), 14 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -159,6 +159,8 @@ static union trace_eval_map_item *trace_
 #endif /* CONFIG_TRACE_EVAL_MAP_FILE */
 
 static int tracing_set_tracer(struct trace_array *tr, const char *buf);
+static void ftrace_trace_userstack(struct ring_buffer *buffer,
+				   unsigned long flags, int pc);
 
 #define MAX_TRACER_SIZE		100
 static char bootup_tracer_buf[MAX_TRACER_SIZE] __initdata;
@@ -2905,9 +2907,10 @@ void trace_dump_stack(int skip)
 }
 EXPORT_SYMBOL_GPL(trace_dump_stack);
 
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
 static DEFINE_PER_CPU(int, user_stack_count);
 
-void
+static void
 ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
 {
 	struct trace_event_call *call = &event_user_stack;
@@ -2958,13 +2961,12 @@ ftrace_trace_userstack(struct ring_buffe
  out:
 	preempt_enable();
 }
-
-#ifdef UNUSED
-static void __trace_userstack(struct trace_array *tr, unsigned long flags)
+#else /* CONFIG_USER_STACKTRACE_SUPPORT */
+static void ftrace_trace_userstack(struct ring_buffer *buffer,
+				   unsigned long flags, int pc)
 {
-	ftrace_trace_userstack(tr, flags, preempt_count());
 }
-#endif /* UNUSED */
+#endif /* !CONFIG_USER_STACKTRACE_SUPPORT */
 
 #endif /* CONFIG_STACKTRACE */
 
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -782,17 +782,9 @@ void update_max_tr_single(struct trace_a
 #endif /* CONFIG_TRACER_MAX_TRACE */
 
 #ifdef CONFIG_STACKTRACE
-void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
-			    int pc);
-
 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
 		   int pc);
 #else
-static inline void ftrace_trace_userstack(struct ring_buffer *buffer,
-					  unsigned long flags, int pc)
-{
-}
-
 static inline void __trace_stack(struct trace_array *tr, unsigned long flags,
 				 int skip, int pc)
 {




* [patch V2 23/29] tracing: Simplify stack trace retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (21 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 22/29] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-19 20:11   ` Steven Rostedt
  2019-04-18  8:41 ` [patch V2 24/29] tracing: Remove the last struct stack_trace usage Thomas Gleixner
                   ` (5 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.
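
Both variants share the array based contract and return the number of
entries actually stored; the prototypes (from <linux/stacktrace.h>) are:

    unsigned int stack_trace_save(unsigned long *store, unsigned int size,
                                  unsigned int skipnr);
    unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
                                       unsigned int size, unsigned int skipnr);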

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   40 +++++++++++++---------------------------
 1 file changed, 13 insertions(+), 27 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2774,22 +2774,18 @@ static void __ftrace_trace_stack(struct
 {
 	struct trace_event_call *call = &event_kernel_stack;
 	struct ring_buffer_event *event;
+	unsigned int size, nr_entries;
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
-	struct stack_trace trace;
-	int size = FTRACE_KSTACK_ENTRIES;
 	int stackidx;
 
-	trace.nr_entries	= 0;
-	trace.skip		= skip;
-
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
 	 * If regs is set, then these functions will not be in the way.
 	 */
 #ifndef CONFIG_UNWINDER_ORC
 	if (!regs)
-		trace.skip++;
+		skip++;
 #endif
 
 	/*
@@ -2816,28 +2812,24 @@ static void __ftrace_trace_stack(struct
 	barrier();
 
 	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
-	trace.entries		= fstack->calls;
-	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
-
-	if (regs)
-		save_stack_trace_regs(regs, &trace);
-	else
-		save_stack_trace(&trace);
-
-	if (trace.nr_entries > size)
-		size = trace.nr_entries;
+	size = ARRAY_SIZE(fstack->calls);
 
-	size *= sizeof(unsigned long);
+	if (regs) {
+		nr_entries = stack_trace_save_regs(regs, fstack->calls,
+						   size, skip);
+	} else {
+		nr_entries = stack_trace_save(fstack->calls, size, skip);
+	}
 
+	size = nr_entries * sizeof(unsigned long);
 	event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
 					    sizeof(*entry) + size, flags, pc);
 	if (!event)
 		goto out;
 	entry = ring_buffer_event_data(event);
 
-	memcpy(&entry->caller, trace.entries, size);
-
-	entry->size = trace.nr_entries;
+	memcpy(&entry->caller, fstack->calls, size);
+	entry->size = nr_entries;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
 		__buffer_unlock_commit(buffer, event);
@@ -2916,7 +2908,6 @@ ftrace_trace_userstack(struct ring_buffe
 	struct trace_event_call *call = &event_user_stack;
 	struct ring_buffer_event *event;
 	struct userstack_entry *entry;
-	struct stack_trace trace;
 
 	if (!(global_trace.trace_flags & TRACE_ITER_USERSTACKTRACE))
 		return;
@@ -2947,12 +2938,7 @@ ftrace_trace_userstack(struct ring_buffe
 	entry->tgid		= current->tgid;
 	memset(&entry->caller, 0, sizeof(entry->caller));
 
-	trace.nr_entries	= 0;
-	trace.max_entries	= FTRACE_STACK_ENTRIES;
-	trace.skip		= 0;
-	trace.entries		= entry->caller;
-
-	save_stack_trace_user(&trace);
+	stack_trace_save_user(entry->caller, FTRACE_STACK_ENTRIES);
 	if (!call_filter_check_discard(call, entry, buffer, event))
 		__buffer_unlock_commit(buffer, event);
 




* [patch V2 24/29] tracing: Remove the last struct stack_trace usage
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (22 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 23/29] tracing: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-19 20:11   ` Steven Rostedt
  2019-04-18  8:41 ` [patch V2 25/29] livepatch: Simplify stack trace retrieval Thomas Gleixner
                   ` (4 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Simplify the stack retrieval code by using the storage array based
interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/trace/trace_stack.c |   42 ++++++++++++++++--------------------------
 1 file changed, 16 insertions(+), 26 deletions(-)

--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -23,16 +23,7 @@
 static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES];
 static unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
-/*
- * Reserve one entry for the passed in ip. This will allow
- * us to remove most or all of the stack size overhead
- * added by the stack tracer itself.
- */
-struct stack_trace stack_trace_max = {
-	.max_entries		= STACK_TRACE_ENTRIES - 1,
-	.entries		= &stack_dump_trace[0],
-};
-
+static unsigned int stack_trace_entries;
 static unsigned long stack_trace_max_size;
 static arch_spinlock_t stack_trace_max_lock =
 	(arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED;
@@ -49,10 +40,10 @@ static void print_max_stack(void)
 
 	pr_emerg("        Depth    Size   Location    (%d entries)\n"
 			   "        -----    ----   --------\n",
-			   stack_trace_max.nr_entries);
+			   stack_trace_entries);
 
-	for (i = 0; i < stack_trace_max.nr_entries; i++) {
-		if (i + 1 == stack_trace_max.nr_entries)
+	for (i = 0; i < stack_trace_entries; i++) {
+		if (i + 1 == stack_trace_entries)
 			size = stack_trace_index[i];
 		else
 			size = stack_trace_index[i] - stack_trace_index[i+1];
@@ -98,13 +89,12 @@ static void check_stack(unsigned long ip
 
 	stack_trace_max_size = this_size;
 
-	stack_trace_max.nr_entries = 0;
-	stack_trace_max.skip = 0;
-
-	save_stack_trace(&stack_trace_max);
+	stack_trace_entries = stack_trace_save(stack_dump_trace,
+					       ARRAY_SIZE(stack_dump_trace) - 1,
+					       0);
 
 	/* Skip over the overhead of the stack tracer itself */
-	for (i = 0; i < stack_trace_max.nr_entries; i++) {
+	for (i = 0; i < stack_trace_entries; i++) {
 		if (stack_dump_trace[i] == ip)
 			break;
 	}
@@ -113,7 +103,7 @@ static void check_stack(unsigned long ip
 	 * Some archs may not have the passed in ip in the dump.
 	 * If that happens, we need to show everything.
 	 */
-	if (i == stack_trace_max.nr_entries)
+	if (i == stack_trace_entries)
 		i = 0;
 
 	/*
@@ -131,13 +121,13 @@ static void check_stack(unsigned long ip
 	 * loop will only happen once. This code only takes place
 	 * on a new max, so it is far from a fast path.
 	 */
-	while (i < stack_trace_max.nr_entries) {
+	while (i < stack_trace_entries) {
 		int found = 0;
 
 		stack_trace_index[x] = this_size;
 		p = start;
 
-		for (; p < top && i < stack_trace_max.nr_entries; p++) {
+		for (; p < top && i < stack_trace_entries; p++) {
 			/*
 			 * The READ_ONCE_NOCHECK is used to let KASAN know that
 			 * this is not a stack-out-of-bounds error.
@@ -168,7 +158,7 @@ static void check_stack(unsigned long ip
 			i++;
 	}
 
-	stack_trace_max.nr_entries = x;
+	stack_trace_entries = x;
 
 	if (task_stack_end_corrupted(current)) {
 		print_max_stack();
@@ -270,7 +260,7 @@ static void *
 {
 	long n = *pos - 1;
 
-	if (n >= stack_trace_max.nr_entries)
+	if (n >= stack_trace_entries)
 		return NULL;
 
 	m->private = (void *)n;
@@ -334,7 +324,7 @@ static int t_show(struct seq_file *m, vo
 		seq_printf(m, "        Depth    Size   Location"
 			   "    (%d entries)\n"
 			   "        -----    ----   --------\n",
-			   stack_trace_max.nr_entries);
+			   stack_trace_entries);
 
 		if (!stack_tracer_enabled && !stack_trace_max_size)
 			print_disabled(m);
@@ -344,10 +334,10 @@ static int t_show(struct seq_file *m, vo
 
 	i = *(long *)v;
 
-	if (i >= stack_trace_max.nr_entries)
+	if (i >= stack_trace_entries)
 		return 0;
 
-	if (i + 1 == stack_trace_max.nr_entries)
+	if (i + 1 == stack_trace_entries)
 		size = stack_trace_index[i];
 	else
 		size = stack_trace_index[i] - stack_trace_index[i+1];




* [patch V2 25/29] livepatch: Simplify stack trace retrieval
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (23 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 24/29] tracing: Remove the last struct stack_trace usage Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-23  8:18   ` Miroslav Benes
  2019-04-18  8:41 ` [patch V2 26/29] stacktrace: Remove obsolete functions Thomas Gleixner
                   ` (3 subsequent siblings)
  28 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Replace the indirection through struct stack_trace by using the storage
array based interfaces.
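
Note the changed error check in the hunk below: the new interface folds
status and count into a single return value. A sketch of the convention, as
used by the patch:

    ret = stack_trace_save_tsk_reliable(task, entries, ARRAY_SIZE(entries));
    if (ret < 0)            /* -ENOSYS, -EINVAL, ...: not reliable */
            return ret;
    nr_entries = ret;       /* otherwise: number of entries stored */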

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/livepatch/transition.c |   22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

--- a/kernel/livepatch/transition.c
+++ b/kernel/livepatch/transition.c
@@ -202,15 +202,15 @@ void klp_update_patch_state(struct task_
  * Determine whether the given stack trace includes any references to a
  * to-be-patched or to-be-unpatched function.
  */
-static int klp_check_stack_func(struct klp_func *func,
-				struct stack_trace *trace)
+static int klp_check_stack_func(struct klp_func *func, unsigned long *entries,
+				unsigned int nr_entries)
 {
 	unsigned long func_addr, func_size, address;
 	struct klp_ops *ops;
 	int i;
 
-	for (i = 0; i < trace->nr_entries; i++) {
-		address = trace->entries[i];
+	for (i = 0; i < nr_entries; i++) {
+		address = entries[i];
 
 		if (klp_target_state == KLP_UNPATCHED) {
 			 /*
@@ -254,29 +254,25 @@ static int klp_check_stack_func(struct k
 static int klp_check_stack(struct task_struct *task, char *err_buf)
 {
 	static unsigned long entries[MAX_STACK_ENTRIES];
-	struct stack_trace trace;
 	struct klp_object *obj;
 	struct klp_func *func;
-	int ret;
+	int ret, nr_entries;
 
-	trace.skip = 0;
-	trace.nr_entries = 0;
-	trace.max_entries = MAX_STACK_ENTRIES;
-	trace.entries = entries;
-	ret = save_stack_trace_tsk_reliable(task, &trace);
+	ret = stack_trace_save_tsk_reliable(task, entries, ARRAY_SIZE(entries));
 	WARN_ON_ONCE(ret == -ENOSYS);
-	if (ret) {
+	if (ret < 0) {
 		snprintf(err_buf, STACK_ERR_BUF_SIZE,
 			 "%s: %s:%d has an unreliable stack\n",
 			 __func__, task->comm, task->pid);
 		return ret;
 	}
+	nr_entries = ret;
 
 	klp_for_each_object(klp_transition_patch, obj) {
 		if (!obj->patched)
 			continue;
 		klp_for_each_func(obj, func) {
-			ret = klp_check_stack_func(func, &trace);
+			ret = klp_check_stack_func(func, entries, nr_entries);
 			if (ret) {
 				snprintf(err_buf, STACK_ERR_BUF_SIZE,
 					 "%s: %s:%d is sleeping on function %s\n",




* [patch V2 26/29] stacktrace: Remove obsolete functions
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (24 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 25/29] livepatch: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 27/29] lib/stackdepot: " Thomas Gleixner
                   ` (2 subsequent siblings)
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

No more users of the struct stack_trace based interfaces remain. Remove
them.

Remove the macro stubs for !CONFIG_STACKTRACE as well; they are pointless
because the storage at the call sites is already conditional on
CONFIG_STACKTRACE. There is no point in being 'smart' here.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/stacktrace.h |   17 -----------------
 kernel/stacktrace.c        |   14 --------------
 2 files changed, 31 deletions(-)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -36,24 +36,7 @@ extern void save_stack_trace_tsk(struct
 				struct stack_trace *trace);
 extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
 					 struct stack_trace *trace);
-
-extern void print_stack_trace(struct stack_trace *trace, int spaces);
-extern int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces);
-
-#ifdef CONFIG_USER_STACKTRACE_SUPPORT
 extern void save_stack_trace_user(struct stack_trace *trace);
-#else
-# define save_stack_trace_user(trace)              do { } while (0)
-#endif
-
-#else /* !CONFIG_STACKTRACE */
-# define save_stack_trace(trace)			do { } while (0)
-# define save_stack_trace_tsk(tsk, trace)		do { } while (0)
-# define save_stack_trace_user(trace)			do { } while (0)
-# define print_stack_trace(trace, spaces)		do { } while (0)
-# define snprint_stack_trace(buf, size, trace, spaces)	do { } while (0)
-# define save_stack_trace_tsk_reliable(tsk, trace)	({ -ENOSYS; })
 #endif /* CONFIG_STACKTRACE */
 
 #if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -30,12 +30,6 @@ void stack_trace_print(unsigned long *en
 }
 EXPORT_SYMBOL_GPL(stack_trace_print);
 
-void print_stack_trace(struct stack_trace *trace, int spaces)
-{
-	stack_trace_print(trace->entries, trace->nr_entries, spaces);
-}
-EXPORT_SYMBOL_GPL(print_stack_trace);
-
 /**
  * stack_trace_snprint - Print the entries in the stack trace into a buffer
  * @buf:	Pointer to the print buffer
@@ -70,14 +64,6 @@ int stack_trace_snprint(char *buf, size_
 }
 EXPORT_SYMBOL_GPL(stack_trace_snprint);
 
-int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces)
-{
-	return stack_trace_snprint(buf, size, trace->entries,
-				   trace->nr_entries, spaces);
-}
-EXPORT_SYMBOL_GPL(snprint_stack_trace);
-
 /*
  * Architectures that do not implement save_stack_trace_*()
  * get these weak aliases and once-per-bootup warnings




* [patch V2 27/29] lib/stackdepot: Remove obsolete functions
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (25 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 26/29] stacktrace: Remove obsolete functions Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 28/29] stacktrace: Provide common infrastructure Thomas Gleixner
  2019-04-18  8:41 ` [patch V2 29/29] x86/stacktrace: Use " Thomas Gleixner
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

No more users of the struct stack_trace based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Alexander Potapenko <glider@google.com>
---
 include/linux/stackdepot.h |    4 ----
 lib/stackdepot.c           |   20 --------------------
 2 files changed, 24 deletions(-)

--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -23,13 +23,9 @@
 
 typedef u32 depot_stack_handle_t;
 
-struct stack_trace;
-
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries, gfp_t gfp_flags);
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -212,14 +212,6 @@ unsigned int stack_depot_fetch(depot_sta
 }
 EXPORT_SYMBOL_GPL(stack_depot_fetch);
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
-{
-	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
-
-	trace->max_entries = trace->nr_entries = nent;
-}
-EXPORT_SYMBOL_GPL(depot_fetch_stack);
-
 /**
  * stack_depot_save - Save a stack trace from an array
  *
@@ -314,15 +306,3 @@ depot_stack_handle_t stack_depot_save(un
 	return retval;
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
-
-/**
- * depot_save_stack - save stack in a stack depot.
- * @trace - the stacktrace to save.
- * @alloc_flags - flags for allocating additional memory if required.
- */
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
-				      gfp_t alloc_flags)
-{
-	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
-}
-EXPORT_SYMBOL_GPL(depot_save_stack);




* [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (26 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 27/29] lib/stackdepot: " Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  2019-04-18 11:52   ` Mike Rapoport
                     ` (2 more replies)
  2019-04-18  8:41 ` [patch V2 29/29] x86/stacktrace: Use " Thomas Gleixner
  28 siblings, 3 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

All architectures which support stacktrace carry duplicated code and
do the stack storage and filtering on the architecture side.

Provide a consolidated interface with a callback function for consuming the
stack entries provided by the architecture specific stack walker. This
removes lots of duplicated code and allows implementing better filtering
than 'skip number of entries' in the future without touching any
architecture specific code.
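
For illustration, a consumer collecting entries into a fixed size array
looks roughly like this (sketch only; struct my_buf and collect_entry()
are made-up names, while the callback type and arch_stack_walk() are
introduced below):

	struct my_buf {
		unsigned long	*entries;
		unsigned int	len, size;
	};

	static bool collect_entry(void *cookie, unsigned long addr, bool reliable)
	{
		struct my_buf *buf = cookie;

		if (buf->len >= buf->size)
			return false;		/* No space left, stop the walk */
		buf->entries[buf->len++] = addr;
		return true;			/* Keep walking */
	}

	/* Walk the current task's stack into a caller supplied array */
	unsigned long storage[64];
	struct my_buf buf = { .entries = storage, .size = ARRAY_SIZE(storage) };

	arch_stack_walk(collect_entry, &buf, current, NULL);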

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
---
 include/linux/stacktrace.h |   38 +++++++++
 kernel/stacktrace.c        |  173 +++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig                |    4 +
 3 files changed, 215 insertions(+)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -23,6 +23,43 @@ unsigned int stack_trace_save_regs(struc
 unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
 
 /* Internal interfaces. Do not use in generic code */
+#ifdef CONFIG_ARCH_STACKWALK
+
+/**
+ * stack_trace_consume_fn - Callback for arch_stack_walk()
+ * @cookie:	Caller supplied pointer handed back by arch_stack_walk()
+ * @addr:	The stack entry address to consume
+ * @reliable:	True when the stack entry is reliable. Required by
+ *		some printk based consumers.
+ *
+ * Returns:	True, if the entry was consumed or skipped
+ *		False, if there is no space left to store
+ */
+typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
+				       bool reliable);
+/**
+ * arch_stack_walk - Architecture specific function to walk the stack
+
+ * @consume_entry:	Callback which is invoked by the architecture code for
+ *			each entry.
+ * @cookie:		Caller supplied pointer which is handed back to
+ *			@consume_entry
+ * @task:		Pointer to a task struct, can be NULL
+ * @regs:		Pointer to registers, can be NULL
+ *
+ * @task	@regs:
+ * NULL		NULL	Stack trace from current
+ * task		NULL	Stack trace from task (can be current)
+ * NULL		regs	Stack trace starting on regs->stackpointer
+ */
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs);
+int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
+			     struct task_struct *task);
+void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+			  const struct pt_regs *regs);
+
+#else /* CONFIG_ARCH_STACKWALK */
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
@@ -37,6 +74,7 @@ extern void save_stack_trace_tsk(struct
 extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
 					 struct stack_trace *trace);
 extern void save_stack_trace_user(struct stack_trace *trace);
+#endif /* !CONFIG_ARCH_STACKWALK */
 #endif /* CONFIG_STACKTRACE */
 
 #if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -5,6 +5,8 @@
  *
  *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
  */
+#include <linux/sched/task_stack.h>
+#include <linux/sched/debug.h>
 #include <linux/sched.h>
 #include <linux/kernel.h>
 #include <linux/export.h>
@@ -64,6 +66,175 @@ int stack_trace_snprint(char *buf, size_
 }
 EXPORT_SYMBOL_GPL(stack_trace_snprint);
 
+#ifdef CONFIG_ARCH_STACKWALK
+
+struct stacktrace_cookie {
+	unsigned long	*store;
+	unsigned int	size;
+	unsigned int	skip;
+	unsigned int	len;
+};
+
+static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
+				      bool reliable)
+{
+	struct stacktrace_cookie *c = cookie;
+
+	if (c->len >= c->size)
+		return false;
+
+	if (c->skip > 0) {
+		c->skip--;
+		return true;
+	}
+	c->store[c->len++] = addr;
+	return c->len < c->size;
+}
+
+static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr,
+					      bool reliable)
+{
+	if (in_sched_functions(addr))
+		return true;
+	return stack_trace_consume_entry(cookie, addr, reliable);
+}
+
+/**
+ * stack_trace_save - Save a stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Returns number of entries stored.
+ */
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr + 1,
+	};
+
+	arch_stack_walk(consume_entry, &c, current, NULL);
+	return c.len;
+}
+EXPORT_SYMBOL_GPL(stack_trace_save);
+
+/**
+ * stack_trace_save_tsk - Save a task stack trace into a storage array
+ * @task:	The task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Returns number of entries stored.
+ */
+unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size, unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry_nosched;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr + 1,
+	};
+
+	if (!try_get_task_stack(tsk))
+		return 0;
+
+	arch_stack_walk(consume_entry, &c, tsk, NULL);
+	put_task_stack(tsk);
+	return c.len;
+}
+
+/**
+ * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * @regs:	Pointer to pt_regs to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ *
+ * Returns number of entries stored.
+ */
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+		.skip	= skipnr,
+	};
+
+	arch_stack_walk(consume_entry, &c, current, regs);
+	return c.len;
+}
+
+#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
+/**
+ * stack_trace_save_tsk_reliable - Save task stack with verification
+ * @tsk:	Pointer to the task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Returns:	An error if it detects any unreliable features of the
+ *		stack. Otherwise it guarantees that the stack trace is
+ *		reliable and returns the number of entries stored.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
+ */
+int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
+				  unsigned int size)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+	};
+	int ret;
+
+	/*
+	 * If the task doesn't have a stack (e.g., a zombie), the stack is
+	 * "reliably" empty.
+	 */
+	if (!try_get_task_stack(tsk))
+		return 0;
+
+	ret = arch_stack_walk_reliable(consume_entry, &c, tsk);
+	put_task_stack(tsk);
+	return ret;
+}
+#endif
+
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+/**
+ * stack_trace_save_user - Save a user space stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ *
+ * Returns number of entries stored.
+ */
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
+{
+	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
+	struct stacktrace_cookie c = {
+		.store	= store,
+		.size	= size,
+	};
+
+	/* Trace user stack if not a kernel thread */
+	if (!current->mm)
+		return 0;
+
+	arch_stack_walk_user(consume_entry, &c, task_pt_regs(current));
+	return c.len;
+}
+#endif
+
+#else /* CONFIG_ARCH_STACKWALK */
+
 /*
  * Architectures that do not implement save_stack_trace_*()
  * get these weak aliases and once-per-bootup warnings
@@ -193,3 +364,5 @@ unsigned int stack_trace_save_user(unsig
 	return trace.nr_entries;
 }
 #endif /* CONFIG_USER_STACKTRACE_SUPPORT */
+
+#endif /* !CONFIG_ARCH_STACKWALK */
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -597,6 +597,10 @@ config ARCH_HAS_UACCESS_FLUSHCACHE
 config ARCH_HAS_UACCESS_MCSAFE
 	bool
 
+# Temporary. Goes away when all archs are cleaned up
+config ARCH_STACKWALK
+       bool
+
 config STACKDEPOT
 	bool
 	select STACKTRACE




* [patch V2 29/29] x86/stacktrace: Use common infrastructure
  2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
                   ` (27 preceding siblings ...)
  2019-04-18  8:41 ` [patch V2 28/29] stacktrace: Provide common infrastructure Thomas Gleixner
@ 2019-04-18  8:41 ` Thomas Gleixner
  28 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18  8:41 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

Replace the save_stack_trace*() implementations with the new arch_stack_walk()
interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
---
 arch/x86/Kconfig             |    1 
 arch/x86/kernel/stacktrace.c |  116 +++++++------------------------------------
 2 files changed, 20 insertions(+), 97 deletions(-)

--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -74,6 +74,7 @@ config X86
 	select ARCH_MIGHT_HAVE_ACPI_PDC		if ACPI
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
+	select ARCH_STACKWALK
 	select ARCH_SUPPORTS_ACPI
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_NUMA_BALANCING	if X86_64
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -12,75 +12,31 @@
 #include <asm/stacktrace.h>
 #include <asm/unwind.h>
 
-static int save_stack_address(struct stack_trace *trace, unsigned long addr,
-			      bool nosched)
-{
-	if (nosched && in_sched_functions(addr))
-		return 0;
-
-	if (trace->skip > 0) {
-		trace->skip--;
-		return 0;
-	}
-
-	if (trace->nr_entries >= trace->max_entries)
-		return -1;
-
-	trace->entries[trace->nr_entries++] = addr;
-	return 0;
-}
-
-static void noinline __save_stack_trace(struct stack_trace *trace,
-			       struct task_struct *task, struct pt_regs *regs,
-			       bool nosched)
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs)
 {
 	struct unwind_state state;
 	unsigned long addr;
 
-	if (regs)
-		save_stack_address(trace, regs->ip, nosched);
+	if (regs && !consume_entry(cookie, regs->ip, false))
+		return;
 
 	for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
 	     unwind_next_frame(&state)) {
 		addr = unwind_get_return_address(&state);
-		if (!addr || save_stack_address(trace, addr, nosched))
+		if (!addr || !consume_entry(cookie, addr, false))
 			break;
 	}
 }
 
 /*
- * Save stack-backtrace addresses into a stack_trace buffer.
+ * This function returns an error if it detects any unreliable features of the
+ * stack.  Otherwise it guarantees that the stack trace is reliable.
+ *
+ * If the task is not 'current', the caller *must* ensure the task is inactive.
  */
-void save_stack_trace(struct stack_trace *trace)
-{
-	trace->skip++;
-	__save_stack_trace(trace, current, NULL, false);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace);
-
-void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
-{
-	__save_stack_trace(trace, current, regs, false);
-}
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
-{
-	if (!try_get_task_stack(tsk))
-		return;
-
-	if (tsk == current)
-		trace->skip++;
-	__save_stack_trace(trace, tsk, NULL, true);
-
-	put_task_stack(tsk);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
-
-#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
-
-static int __always_inline
-__save_stack_trace_reliable(struct stack_trace *trace,
-			    struct task_struct *task)
+int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
+			     void *cookie, struct task_struct *task)
 {
 	struct unwind_state state;
 	struct pt_regs *regs;
@@ -117,7 +73,7 @@ static int __always_inline
 		if (!addr)
 			return -EINVAL;
 
-		if (save_stack_address(trace, addr, false))
+		if (!consume_entry(cookie, addr, false))
 			return -EINVAL;
 	}
 
@@ -132,32 +88,6 @@ static int __always_inline
 	return 0;
 }
 
-/*
- * This function returns an error if it detects any unreliable features of the
- * stack.  Otherwise it guarantees that the stack trace is reliable.
- *
- * If the task is not 'current', the caller *must* ensure the task is inactive.
- */
-int save_stack_trace_tsk_reliable(struct task_struct *tsk,
-				  struct stack_trace *trace)
-{
-	int ret;
-
-	/*
-	 * If the task doesn't have a stack (e.g., a zombie), the stack is
-	 * "reliably" empty.
-	 */
-	if (!try_get_task_stack(tsk))
-		return 0;
-
-	ret = __save_stack_trace_reliable(trace, tsk);
-
-	put_task_stack(tsk);
-
-	return ret;
-}
-#endif /* CONFIG_HAVE_RELIABLE_STACKTRACE */
-
 /* Userspace stacktrace - based on kernel/trace/trace_sysprof.c */
 
 struct stack_frame_user {
@@ -182,15 +112,15 @@ copy_stack_frame(const void __user *fp,
 	return ret;
 }
 
-static inline void __save_stack_trace_user(struct stack_trace *trace)
+void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
+			  const struct pt_regs *regs)
 {
-	const struct pt_regs *regs = task_pt_regs(current);
 	const void __user *fp = (const void __user *)regs->bp;
 
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = regs->ip;
+	if (!consume_entry(cookie, regs->ip, false))
+		return;
 
-	while (trace->nr_entries < trace->max_entries) {
+	while (1) {
 		struct stack_frame_user frame;
 
 		frame.next_fp = NULL;
@@ -200,8 +130,8 @@ static inline void __save_stack_trace_us
 		if ((unsigned long)fp < regs->sp)
 			break;
 		if (frame.ret_addr) {
-			trace->entries[trace->nr_entries++] =
-				frame.ret_addr;
+			if (!consume_entry(cookie, frame.ret_addr, false))
+				return;
 		}
 		if (fp == frame.next_fp)
 			break;
@@ -209,11 +139,3 @@ static inline void __save_stack_trace_us
 	}
 }
 
-void save_stack_trace_user(struct stack_trace *trace)
-{
-	/*
-	 * Trace user stack if we are not a kernel thread
-	 */
-	if (current->mm)
-		__save_stack_trace_user(trace);
-}




* Re: [patch V2 09/29] mm/kasan: Simplify stacktrace handling
  2019-04-18  8:41 ` [patch V2 09/29] mm/kasan: " Thomas Gleixner
@ 2019-04-18 10:39   ` Andrey Ryabinin
  2019-04-18 11:53     ` Thomas Gleixner
  0 siblings, 1 reply; 72+ messages in thread
From: Andrey Ryabinin @ 2019-04-18 10:39 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Dmitry Vyukov, kasan-dev, linux-mm,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, David Rientjes,
	Christoph Lameter, Catalin Marinas, Mike Rapoport, Akinobu Mita,
	iommu, Robin Murphy, Christoph Hellwig, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, dm-devel, Mike Snitzer, Alasdair Kergon, intel-gfx,
	Joonas Lahtinen, Maarten Lankhorst, dri-devel, David Airlie,
	Jani Nikula, Daniel Vetter, Rodrigo Vivi, linux-arch

On 4/18/19 11:41 AM, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Acked-by: Dmitry Vyukov <dvyukov@google.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org

Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>

>  
>  static inline depot_stack_handle_t save_stack(gfp_t flags)
>  {
>  	unsigned long entries[KASAN_STACK_DEPTH];
> -	struct stack_trace trace = {
> -		.nr_entries = 0,
> -		.entries = entries,
> -		.max_entries = KASAN_STACK_DEPTH,
> -		.skip = 0
> -	};
> +	unsigned int nr_entries;
>  
> -	save_stack_trace(&trace);
> -	filter_irq_stacks(&trace);
> -
> -	return depot_save_stack(&trace, flags);
> +	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> +	nr_entries = filter_irq_stacks(entries, nr_entries);
> +	return stack_depot_save(entries, nr_entries, flags);

Suggestion for further improvement:

stack_trace_save() shouldn't unwind beyond the irq entry point, so we wouldn't need filter_irq_stacks().
Probably no call site cares about the random stack above the irq entry point, so it doesn't
make sense to spend resources on unwinding the non-irq stack from an interrupt first and filtering it out later.

It would improve the performance of stack_trace_save() when called from an interrupt, and fix page_owner,
which feeds an unfiltered stack to stack_depot_save(). The random non-irq part kills the benefit of using stack_depot_save().
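
For illustration, the idea is roughly the following; is_irq_entry() is a
made-up name for whatever check the unwinder would use:

	static bool consume_until_irq_entry(void *cookie, unsigned long addr,
					    bool reliable)
	{
		/* Stop the walk instead of filtering the entries afterwards */
		if (is_irq_entry(addr))
			return false;
		return stack_trace_consume_entry(cookie, addr, reliable);
	}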


* Re: [patch V2 14/29] dm bufio: Simplify stack trace retrieval
  2019-04-18  8:41 ` [patch V2 14/29] dm bufio: " Thomas Gleixner
@ 2019-04-18 10:44   ` Alexander Potapenko
  2019-04-18 11:54     ` Thomas Gleixner
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Potapenko @ 2019-04-18 10:44 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	dm-devel, Mike Snitzer, Alasdair Kergon, Alexey Dobriyan,
	Andrew Morton, Pekka Enberg, Linux Memory Management List,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, Apr 18, 2019 at 11:06 AM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Replace the indirection through struct stack_trace with an invocation of
> the storage array based interface.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: dm-devel@redhat.com
> Cc: Mike Snitzer <snitzer@redhat.com>
> Cc: Alasdair Kergon <agk@redhat.com>
> ---
>  drivers/md/dm-bufio.c |   15 ++++++---------
>  1 file changed, 6 insertions(+), 9 deletions(-)
>
> --- a/drivers/md/dm-bufio.c
> +++ b/drivers/md/dm-bufio.c
> @@ -150,7 +150,7 @@ struct dm_buffer {
>         void (*end_io)(struct dm_buffer *, blk_status_t);
>  #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
>  #define MAX_STACK 10
> -       struct stack_trace stack_trace;
> +       unsigned int stack_len;
>         unsigned long stack_entries[MAX_STACK];
>  #endif
>  };
> @@ -232,11 +232,7 @@ static DEFINE_MUTEX(dm_bufio_clients_loc
>  #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
>  static void buffer_record_stack(struct dm_buffer *b)
>  {
> -       b->stack_trace.nr_entries = 0;
> -       b->stack_trace.max_entries = MAX_STACK;
> -       b->stack_trace.entries = b->stack_entries;
> -       b->stack_trace.skip = 2;
> -       save_stack_trace(&b->stack_trace);
> +       b->stack_len = stack_trace_save(b->stack_entries, MAX_STACK, 2);
As noted on one of the similar patches before, can we have an inline
comment indicating what this "2" stands for?
>  }
>  #endif
>
> @@ -438,7 +434,7 @@ static struct dm_buffer *alloc_buffer(st
>         adjust_total_allocated(b->data_mode, (long)c->block_size);
>
>  #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
> -       memset(&b->stack_trace, 0, sizeof(b->stack_trace));
> +       b->stack_len = 0;
>  #endif
>         return b;
>  }
> @@ -1520,8 +1516,9 @@ static void drop_buffers(struct dm_bufio
>                         DMERR("leaked buffer %llx, hold count %u, list %d",
>                               (unsigned long long)b->block, b->hold_count, i);
>  #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
> -                       print_stack_trace(&b->stack_trace, 1);
> -                       b->hold_count = 0; /* mark unclaimed to avoid BUG_ON below */
> +                       stack_trace_print(b->stack_entries, b->stack_len, 1);
> +                       /* mark unclaimed to avoid BUG_ON below */
> +                       b->hold_count = 0;
>  #endif
>                 }
>
>
>


-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg


* Re: [patch V2 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays
  2019-04-18  8:41 ` [patch V2 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
@ 2019-04-18 11:51   ` Mike Rapoport
  2019-04-18 11:58     ` Thomas Gleixner
  0 siblings, 1 reply; 72+ messages in thread
From: Mike Rapoport @ 2019-04-18 11:51 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, Apr 18, 2019 at 10:41:22AM +0200, Thomas Gleixner wrote:
> The struct stack_trace indirection in the stack depot functions is a truly
> pointless exercise which requires horrible code at the callsites.
> 
> Provide interfaces based on plain storage arrays.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Acked-by: Alexander Potapenko <glider@google.com>
> ---
>  include/linux/stackdepot.h |    4 ++
>  lib/stackdepot.c           |   66 ++++++++++++++++++++++++++++++++-------------
>  2 files changed, 51 insertions(+), 19 deletions(-)
> 
> --- a/include/linux/stackdepot.h
> +++ b/include/linux/stackdepot.h
> @@ -26,7 +26,11 @@ typedef u32 depot_stack_handle_t;
>  struct stack_trace;
> 
>  depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
> +depot_stack_handle_t stack_depot_save(unsigned long *entries,
> +				      unsigned int nr_entries, gfp_t gfp_flags);
> 
>  void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
> +unsigned int stack_depot_fetch(depot_stack_handle_t handle,
> +			       unsigned long **entries);
> 
>  #endif
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -194,40 +194,56 @@ static inline struct stack_record *find_
>  	return NULL;
>  }
> 
> -void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
> +/**
> + * stack_depot_fetch - Fetch stack entries from a depot
> + *

Nit: kernel-doc will complain about missing description of @handle.

> + * @entries:		Pointer to store the entries address
> + */
> +unsigned int stack_depot_fetch(depot_stack_handle_t handle,
> +			       unsigned long **entries)
>  {
>  	union handle_parts parts = { .handle = handle };
>  	void *slab = stack_slabs[parts.slabindex];
>  	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
>  	struct stack_record *stack = slab + offset;
> 
> -	trace->nr_entries = trace->max_entries = stack->size;
> -	trace->entries = stack->entries;
> -	trace->skip = 0;
> +	*entries = stack->entries;
> +	return stack->size;
> +}
> +EXPORT_SYMBOL_GPL(stack_depot_fetch);
> +
> +void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
> +{
> +	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
> +
> +	trace->max_entries = trace->nr_entries = nent;
>  }
>  EXPORT_SYMBOL_GPL(depot_fetch_stack);
> 
>  /**
> - * depot_save_stack - save stack in a stack depot.
> - * @trace - the stacktrace to save.
> - * @alloc_flags - flags for allocating additional memory if required.
> + * stack_depot_save - Save a stack trace from an array
>   *
> - * Returns the handle of the stack struct stored in depot.
> + * @entries:		Pointer to storage array
> + * @nr_entries:		Size of the storage array
> + * @alloc_flags:	Allocation gfp flags
> + *
> + * Returns the handle of the stack struct stored in depot

Can you please s/Returns/Return:/ so that kernel-doc will recognize this as
a return section.
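
I.e., the tag would read:

 * Return: The handle of the stack struct stored in depot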

>   */
> -depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
> -				    gfp_t alloc_flags)
> +depot_stack_handle_t stack_depot_save(unsigned long *entries,
> +				      unsigned int nr_entries,
> +				      gfp_t alloc_flags)
>  {
> -	u32 hash;
> -	depot_stack_handle_t retval = 0;
>  	struct stack_record *found = NULL, **bucket;
> -	unsigned long flags;
> +	depot_stack_handle_t retval = 0;
>  	struct page *page = NULL;
>  	void *prealloc = NULL;
> +	unsigned long flags;
> +	u32 hash;
> 
> -	if (unlikely(trace->nr_entries == 0))
> +	if (unlikely(nr_entries == 0))
>  		goto fast_exit;
> 
> -	hash = hash_stack(trace->entries, trace->nr_entries);
> +	hash = hash_stack(entries, nr_entries);
>  	bucket = &stack_table[hash & STACK_HASH_MASK];
> 
>  	/*
> @@ -235,8 +251,8 @@ depot_stack_handle_t depot_save_stack(st
>  	 * The smp_load_acquire() here pairs with smp_store_release() to
>  	 * |bucket| below.
>  	 */
> -	found = find_stack(smp_load_acquire(bucket), trace->entries,
> -			   trace->nr_entries, hash);
> +	found = find_stack(smp_load_acquire(bucket), entries,
> +			   nr_entries, hash);
>  	if (found)
>  		goto exit;
> 
> @@ -264,10 +280,10 @@ depot_stack_handle_t depot_save_stack(st
> 
>  	spin_lock_irqsave(&depot_lock, flags);
> 
> -	found = find_stack(*bucket, trace->entries, trace->nr_entries, hash);
> +	found = find_stack(*bucket, entries, nr_entries, hash);
>  	if (!found) {
>  		struct stack_record *new =
> -			depot_alloc_stack(trace->entries, trace->nr_entries,
> +			depot_alloc_stack(entries, nr_entries,
>  					  hash, &prealloc, alloc_flags);
>  		if (new) {
>  			new->next = *bucket;
> @@ -297,4 +313,16 @@ depot_stack_handle_t depot_save_stack(st
>  fast_exit:
>  	return retval;
>  }
> +EXPORT_SYMBOL_GPL(stack_depot_save);
> +
> +/**
> + * depot_save_stack - save stack in a stack depot.
> + * @trace - the stacktrace to save.
> + * @alloc_flags - flags for allocating additional memory if required.
> + */
> +depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
> +				      gfp_t alloc_flags)
> +{
> +	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
> +}
>  EXPORT_SYMBOL_GPL(depot_save_stack);
> 
> 

-- 
Sincerely yours,
Mike.



* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-18  8:41 ` [patch V2 28/29] stacktrace: Provide common infrastructure Thomas Gleixner
@ 2019-04-18 11:52   ` Mike Rapoport
  2019-04-18 11:57     ` Thomas Gleixner
  2019-04-18 14:52   ` Josh Poimboeuf
  2019-04-19  7:18   ` Peter Zijlstra
  2 siblings, 1 reply; 72+ messages in thread
From: Mike Rapoport @ 2019-04-18 11:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> All architectures which support stacktrace carry duplicated code and
> do the stack storage and filtering on the architecture side.
> 
> Provide a consolidated interface with a callback function for consuming the
> stack entries provided by the architecture specific stack walker. This
> removes lots of duplicated code and allows implementing better filtering
> than 'skip number of entries' in the future without touching any
> architecture specific code.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: linux-arch@vger.kernel.org
> ---
>  include/linux/stacktrace.h |   38 +++++++++
>  kernel/stacktrace.c        |  173 +++++++++++++++++++++++++++++++++++++++++++++
>  lib/Kconfig                |    4 +
>  3 files changed, 215 insertions(+)
> 
> --- a/include/linux/stacktrace.h
> +++ b/include/linux/stacktrace.h
> @@ -23,6 +23,43 @@ unsigned int stack_trace_save_regs(struc
>  unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
> 
>  /* Internal interfaces. Do not use in generic code */
> +#ifdef CONFIG_ARCH_STACKWALK
> +
> +/**
> + * stack_trace_consume_fn - Callback for arch_stack_walk()
> + * @cookie:	Caller supplied pointer handed back by arch_stack_walk()
> + * @addr:	The stack entry address to consume
> + * @reliable:	True when the stack entry is reliable. Required by
> + *		some printk based consumers.
> + *
> + * Returns:	True, if the entry was consumed or skipped
> + *		False, if there is no space left to store
> + */
> +typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
> +				       bool reliable);
> +/**
> + * arch_stack_walk - Architecture specific function to walk the stack
> +

Nit: the missing '*' at the beginning of the line makes kernel-doc unhappy

> + * @consume_entry:	Callback which is invoked by the architecture code for
> + *			each entry.
> + * @cookie:		Caller supplied pointer which is handed back to
> + *			@consume_entry
> + * @task:		Pointer to a task struct, can be NULL
> + * @regs:		Pointer to registers, can be NULL
> + *
> + * @task	@regs:
> + * NULL		NULL	Stack trace from current
> + * task		NULL	Stack trace from task (can be current)
> + * NULL		regs	Stack trace starting on regs->stackpointer

This will render as a single line with 'make *docs'.
Adding line separators actually turns this into a table in the generated docs:

 * ============ ======= ============================================
 * task		regs
 * ============ ======= ============================================
 * NULL		NULL	Stack trace from current
 * task		NULL	Stack trace from task (can be current)
 * NULL		regs	Stack trace starting on regs->stackpointer
 * ============ ======= ============================================


> + */
> +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> +		     struct task_struct *task, struct pt_regs *regs);
> +int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> +			     struct task_struct *task);
> +void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
> +			  const struct pt_regs *regs);
> +
> +#else /* CONFIG_ARCH_STACKWALK */
>  struct stack_trace {
>  	unsigned int nr_entries, max_entries;
>  	unsigned long *entries;
> @@ -37,6 +74,7 @@ extern void save_stack_trace_tsk(struct
>  extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
>  					 struct stack_trace *trace);
>  extern void save_stack_trace_user(struct stack_trace *trace);
> +#endif /* !CONFIG_ARCH_STACKWALK */
>  #endif /* CONFIG_STACKTRACE */
> 
>  #if defined(CONFIG_STACKTRACE) && defined(CONFIG_HAVE_RELIABLE_STACKTRACE)
> --- a/kernel/stacktrace.c
> +++ b/kernel/stacktrace.c
> @@ -5,6 +5,8 @@
>   *
>   *  Copyright (C) 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
>   */
> +#include <linux/sched/task_stack.h>
> +#include <linux/sched/debug.h>
>  #include <linux/sched.h>
>  #include <linux/kernel.h>
>  #include <linux/export.h>
> @@ -64,6 +66,175 @@ int stack_trace_snprint(char *buf, size_
>  }
>  EXPORT_SYMBOL_GPL(stack_trace_snprint);
> 
> +#ifdef CONFIG_ARCH_STACKWALK
> +
> +struct stacktrace_cookie {
> +	unsigned long	*store;
> +	unsigned int	size;
> +	unsigned int	skip;
> +	unsigned int	len;
> +};
> +
> +static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
> +				      bool reliable)
> +{
> +	struct stacktrace_cookie *c = cookie;
> +
> +	if (c->len >= c->size)
> +		return false;
> +
> +	if (c->skip > 0) {
> +		c->skip--;
> +		return true;
> +	}
> +	c->store[c->len++] = addr;
> +	return c->len < c->size;
> +}
> +
> +static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr,
> +					      bool reliable)
> +{
> +	if (in_sched_functions(addr))
> +		return true;
> +	return stack_trace_consume_entry(cookie, addr, reliable);
> +}
> +
> +/**
> + * stack_trace_save - Save a stack trace into a storage array
> + * @store:	Pointer to storage array
> + * @size:	Size of the storage array
> + * @skipnr:	Number of entries to skip at the start of the stack trace
> + *
> + * Returns number of entries stored.

Can you please s/Returns/Return:/ so that kernel-doc will recognize this as
a return section.

This is relevant for other comments below as well.

> + */
> +unsigned int stack_trace_save(unsigned long *store, unsigned int size,
> +			      unsigned int skipnr)
> +{
> +	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
> +	struct stacktrace_cookie c = {
> +		.store	= store,
> +		.size	= size,
> +		.skip	= skipnr + 1,
> +	};
> +
> +	arch_stack_walk(consume_entry, &c, current, NULL);
> +	return c.len;
> +}
> +EXPORT_SYMBOL_GPL(stack_trace_save);
> +
> +/**
> + * stack_trace_save_tsk - Save a task stack trace into a storage array
> + * @task:	The task to examine
> + * @store:	Pointer to storage array
> + * @size:	Size of the storage array
> + * @skipnr:	Number of entries to skip at the start of the stack trace
> + *
> + * Returns number of entries stored.
> + */
> +unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
> +				  unsigned int size, unsigned int skipnr)
> +{
> +	stack_trace_consume_fn consume_entry = stack_trace_consume_entry_nosched;
> +	struct stacktrace_cookie c = {
> +		.store	= store,
> +		.size	= size,
> +		.skip	= skipnr + 1,
> +	};
> +
> +	if (!try_get_task_stack(tsk))
> +		return 0;
> +
> +	arch_stack_walk(consume_entry, &c, tsk, NULL);
> +	put_task_stack(tsk);
> +	return c.len;
> +}
> +
> +/**
> + * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
> + * @regs:	Pointer to pt_regs to examine
> + * @store:	Pointer to storage array
> + * @size:	Size of the storage array
> + * @skipnr:	Number of entries to skip at the start of the stack trace
> + *
> + * Returns number of entries stored.
> + */
> +unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
> +				   unsigned int size, unsigned int skipnr)
> +{
> +	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
> +	struct stacktrace_cookie c = {
> +		.store	= store,
> +		.size	= size,
> +		.skip	= skipnr,
> +	};
> +
> +	arch_stack_walk(consume_entry, &c, current, regs);
> +	return c.len;
> +}
> +
> +#ifdef CONFIG_HAVE_RELIABLE_STACKTRACE
> +/**
> + * stack_trace_save_tsk_reliable - Save task stack with verification
> + * @tsk:	Pointer to the task to examine
> + * @store:	Pointer to storage array
> + * @size:	Size of the storage array
> + *
> + * Returns:	An error if it detects any unreliable features of the
> + *		stack. Otherwise it guarantees that the stack trace is
> + *		reliable and returns the number of entries stored.
> + *
> + * If the task is not 'current', the caller *must* ensure the task is inactive.
> + */
> +int stack_trace_save_tsk_reliable(struct task_struct *tsk, unsigned long *store,
> +				  unsigned int size)
> +{
> +	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
> +	struct stacktrace_cookie c = {
> +		.store	= store,
> +		.size	= size,
> +	};
> +	int ret;
> +
> +	/*
> +	 * If the task doesn't have a stack (e.g., a zombie), the stack is
> +	 * "reliably" empty.
> +	 */
> +	if (!try_get_task_stack(tsk))
> +		return 0;
> +
> +	ret = arch_stack_walk_reliable(consume_entry, &c, tsk);
> +	put_task_stack(tsk);
> +	return ret;
> +}
> +#endif
> +
> +#ifdef CONFIG_USER_STACKTRACE_SUPPORT
> +/**
> + * stack_trace_save_user - Save a user space stack trace into a storage array
> + * @store:	Pointer to storage array
> + * @size:	Size of the storage array
> + *
> + * Returns number of entries stored.
> + */
> +unsigned int stack_trace_save_user(unsigned long *store, unsigned int size)
> +{
> +	stack_trace_consume_fn consume_entry = stack_trace_consume_entry;
> +	struct stacktrace_cookie c = {
> +		.store	= store,
> +		.size	= size,
> +	};
> +
> +	/* Trace user stack if not a kernel thread */
> +	if (!current->mm)
> +		return 0;
> +
> +	arch_stack_walk_user(consume_entry, &c, task_pt_regs(current));
> +	return c.len;
> +}
> +#endif
> +
> +#else /* CONFIG_ARCH_STACKWALK */
> +
>  /*
>   * Architectures that do not implement save_stack_trace_*()
>   * get these weak aliases and once-per-bootup warnings
> @@ -193,3 +364,5 @@ unsigned int stack_trace_save_user(unsig
>  	return trace.nr_entries;
>  }
>  #endif /* CONFIG_USER_STACKTRACE_SUPPORT */
> +
> +#endif /* !CONFIG_ARCH_STACKWALK */
> --- a/lib/Kconfig
> +++ b/lib/Kconfig
> @@ -597,6 +597,10 @@ config ARCH_HAS_UACCESS_FLUSHCACHE
>  config ARCH_HAS_UACCESS_MCSAFE
>  	bool
> 
> +# Temporary. Goes away when all archs are cleaned up
> +config ARCH_STACKWALK
> +       bool
> +
>  config STACKDEPOT
>  	bool
>  	select STACKTRACE
> 
> 

-- 
Sincerely yours,
Mike.



* Re: [patch V2 09/29] mm/kasan: Simplify stacktrace handling
  2019-04-18 10:39   ` Andrey Ryabinin
@ 2019-04-18 11:53     ` Thomas Gleixner
  0 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 11:53 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Dmitry Vyukov, kasan-dev, linux-mm,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, David Rientjes,
	Christoph Lameter, Catalin Marinas, Mike Rapoport, Akinobu Mita,
	iommu, Robin Murphy, Christoph Hellwig, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, dm-devel, Mike Snitzer, Alasdair Kergon, intel-gfx,
	Joonas Lahtinen, Maarten Lankhorst, dri-devel, David Airlie,
	Jani Nikula, Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019, Andrey Ryabinin wrote:
> On 4/18/19 11:41 AM, Thomas Gleixner wrote:
> > Replace the indirection through struct stack_trace by using the storage
> > array based interfaces.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Acked-by: Dmitry Vyukov <dvyukov@google.com>
> > Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> > Cc: Alexander Potapenko <glider@google.com>
> > Cc: kasan-dev@googlegroups.com
> > Cc: linux-mm@kvack.org
> 
> Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
> 
> >  
> >  static inline depot_stack_handle_t save_stack(gfp_t flags)
> >  {
> >  	unsigned long entries[KASAN_STACK_DEPTH];
> > -	struct stack_trace trace = {
> > -		.nr_entries = 0,
> > -		.entries = entries,
> > -		.max_entries = KASAN_STACK_DEPTH,
> > -		.skip = 0
> > -	};
> > +	unsigned int nr_entries;
> >  
> > -	save_stack_trace(&trace);
> > -	filter_irq_stacks(&trace);
> > -
> > -	return depot_save_stack(&trace, flags);
> > +	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> > +	nr_entries = filter_irq_stacks(entries, nr_entries);
> > +	return stack_depot_save(entries, nr_entries, flags);
> 
> Suggestion for further improvement:
> 
> stack_trace_save() shouldn't unwind beyond the irq entry point, so we
> wouldn't need filter_irq_stacks().  Probably no call site cares about the
> random stack above the irq entry point, so it doesn't make sense to spend
> resources on unwinding the non-irq stack from an interrupt first and
> filtering it out later.

There are users which care about the full trace.

Once we have cleaned up the whole architecture side, we can add core side
filtering which makes it possible to:

   1) replace the 'skip number of entries at the beginning' approach

   2) stop the trace when it reaches a certain point

Right now, I don't want to change any of this until the whole mess is
consolidated.
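
For illustration only, the 'stop at a certain point' part could then live
entirely in the core code, e.g. as an end marker in the cookie. The
stop_ip field below is hypothetical:

	struct stacktrace_cookie {
		unsigned long	*store;
		unsigned int	size;
		unsigned int	len;
		unsigned long	stop_ip;	/* Hypothetical end marker */
	};

	static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
					      bool reliable)
	{
		struct stacktrace_cookie *c = cookie;

		if (c->len >= c->size)
			return false;
		/* Terminate the trace when the end marker is reached */
		if (c->stop_ip && addr == c->stop_ip)
			return false;
		c->store[c->len++] = addr;
		return c->len < c->size;
	}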

Thanks,

	tglx





* Re: [patch V2 14/29] dm bufio: Simplify stack trace retrieval
  2019-04-18 10:44   ` Alexander Potapenko
@ 2019-04-18 11:54     ` Thomas Gleixner
  2019-04-18 12:11       ` Alexander Potapenko
  0 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 11:54 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	dm-devel, Mike Snitzer, Alasdair Kergon, Alexey Dobriyan,
	Andrew Morton, Pekka Enberg, Linux Memory Management List,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019, Alexander Potapenko wrote:
> On Thu, Apr 18, 2019 at 11:06 AM Thomas Gleixner <tglx@linutronix.de> wrote:
> > -       save_stack_trace(&b->stack_trace);
> > +       b->stack_len = stack_trace_save(b->stack_entries, MAX_STACK, 2);
> As noted on one of the similar patches before, can we have an inline
> comment indicating what this "2" stands for?

Come on. We have a gazillion functions which take numerical constant
arguments. Should we add comments to all of them?

Thanks,

	tglx


* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-18 11:52   ` Mike Rapoport
@ 2019-04-18 11:57     ` Thomas Gleixner
  0 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 11:57 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Thu, 18 Apr 2019, Mike Rapoport wrote:
> On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> > +/**
> > + * arch_stack_walk - Architecture specific function to walk the stack
> > +
> 
> Nit: the missing '*' at the beginning of the line makes kernel-doc unhappy

Oops.

> > + * @consume_entry:	Callback which is invoked by the architecture code for
> > + *			each entry.
> > + * @cookie:		Caller supplied pointer which is handed back to
> > + *			@consume_entry
> > + * @task:		Pointer to a task struct, can be NULL
> > + * @regs:		Pointer to registers, can be NULL
> > + *
> > + * @task	@regs:
> > + * NULL		NULL	Stack trace from current
> > + * task		NULL	Stack trace from task (can be current)
> > + * NULL		regs	Stack trace starting on regs->stackpointer
> 
> This will render as a single line with 'make *docs'.
> Adding line separators actually turns this into a table in the generated docs:
> 
>  * ============ ======= ============================================
>  * task		regs
>  * ============ ======= ============================================
>  * NULL		NULL	Stack trace from current
>  * task		NULL	Stack trace from task (can be current)
>  * NULL		regs	Stack trace starting on regs->stackpointer
>  * ============ ======= ============================================

Cute.

> > + * Returns number of entries stored.
> 
> Can you please s/Returns/Return:/ so that kernel-doc will recognize this as
> a return section.
> 
> This is relevant for other comments below as well.

Sure.

Thanks for the free kernel doc course! :)

	tglx


* Re: [patch V2 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays
  2019-04-18 11:51   ` Mike Rapoport
@ 2019-04-18 11:58     ` Thomas Gleixner
  0 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 11:58 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019, Mike Rapoport wrote:
> On Thu, Apr 18, 2019 at 10:41:22AM +0200, Thomas Gleixner wrote:
> > 
> > -void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
> > +/**
> > + * stack_depot_fetch - Fetch stack entries from a depot
> > + *
> 
> Nit: kernel-doc will complain about missing description of @handle.

Duh.

> > + * @entries:		Pointer to store the entries address
> > + */
> 
> Can you please s/Returns/Return:/ so that kernel-doc will recognize this as
> a return section.

Sure thing.

Thanks,

	tglx


* Re: [patch V2 14/29] dm bufio: Simplify stack trace retrieval
  2019-04-18 11:54     ` Thomas Gleixner
@ 2019-04-18 12:11       ` Alexander Potapenko
  2019-04-18 13:33         ` Steven Rostedt
  0 siblings, 1 reply; 72+ messages in thread
From: Alexander Potapenko @ 2019-04-18 12:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	dm-devel, Mike Snitzer, Alasdair Kergon, Alexey Dobriyan,
	Andrew Morton, Pekka Enberg, Linux Memory Management List,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, Apr 18, 2019 at 1:54 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> On Thu, 18 Apr 2019, Alexander Potapenko wrote:
> > On Thu, Apr 18, 2019 at 11:06 AM Thomas Gleixner <tglx@linutronix.de> wrote:
> > > -       save_stack_trace(&b->stack_trace);
> > > +       b->stack_len = stack_trace_save(b->stack_entries, MAX_STACK, 2);
> > As noted on one of the similar patches before, can we have an inline
> > comment indicating what this "2" stands for?
>
> Come on. We have a gazillion functions which take numerical constant
> arguments. Should we add comments to all of them?
Ok, sorry. I might not be familiar enough with the kernel style guide.
> Thanks,
>
>         tglx



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg


* Re: [patch V2 14/29] dm bufio: Simplify stack trace retrieval
  2019-04-18 12:11       ` Alexander Potapenko
@ 2019-04-18 13:33         ` Steven Rostedt
  0 siblings, 0 replies; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 13:33 UTC (permalink / raw)
  To: Alexander Potapenko
  Cc: Thomas Gleixner, LKML, Josh Poimboeuf, x86, Andy Lutomirski,
	dm-devel, Mike Snitzer, Alasdair Kergon, Alexey Dobriyan,
	Andrew Morton, Pekka Enberg, Linux Memory Management List,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 14:11:44 +0200
Alexander Potapenko <glider@google.com> wrote:

> On Thu, Apr 18, 2019 at 1:54 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >
> > On Thu, 18 Apr 2019, Alexander Potapenko wrote:  
> > > On Thu, Apr 18, 2019 at 11:06 AM Thomas Gleixner <tglx@linutronix.de> wrote:  
> > > > -       save_stack_trace(&b->stack_trace);
> > > > +       b->stack_len = stack_trace_save(b->stack_entries, MAX_STACK, 2);  
> > > As noted on one of the similar patches before, can we have an inline
> > > comment indicating what this "2" stands for?  
> >
> > Come on. We have a gazillion functions which take numerical constant
> > arguments. Should we add comments to all of them?  
> Ok, sorry. I might not be familiar enough with the kernel style guide.

It is a legitimate complaint, but not for this series. I only complain
about hard coded constants when they are added. That "2" was not
added by this series. This patch set is a cleanup of the stack tracing
code, not a cleanup to remove or comment hard coded constants.

The hard coded "2" was there without a comment before this patch series,
and Thomas is correct to leave it as is for these changes. Modifying what
was already there is out of scope for the purpose of these changes.

A separate clean up patch to the maintainer of the subsystem (dm bufio
in this case) is fine. But it's not Thomas's responsibility.

-- Steve


* Re: [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms
  2019-04-18  8:41 ` [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
@ 2019-04-18 13:40   ` Steven Rostedt
  2019-04-18 19:58     ` Tom Zanussi
  0 siblings, 1 reply; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 13:40 UTC (permalink / raw)
  To: Thomas Gleixner, Tom Zanussi
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch


[ Added Tom Zanussi ]

On Thu, 18 Apr 2019 10:41:39 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> The indirection through struct stack_trace is not necessary at all. Use the
> storage array based interface.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>

Looks fine to me

Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

 But...

Tom,

Can you review this too?

Patch series starts here:

  http://lkml.kernel.org/r/20190418084119.056416939@linutronix.de

Thanks,

-- Steve

> ---
>  kernel/trace/trace_events_hist.c |   12 +++---------
>  1 file changed, 3 insertions(+), 9 deletions(-)
> 
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -5186,7 +5186,6 @@ static void event_hist_trigger(struct ev
>  	u64 var_ref_vals[TRACING_MAP_VARS_MAX];
>  	char compound_key[HIST_KEY_SIZE_MAX];
>  	struct tracing_map_elt *elt = NULL;
> -	struct stack_trace stacktrace;
>  	struct hist_field *key_field;
>  	u64 field_contents;
>  	void *key = NULL;
> @@ -5198,14 +5197,9 @@ static void event_hist_trigger(struct ev
>  		key_field = hist_data->fields[i];
>  
>  		if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
> -			stacktrace.max_entries = HIST_STACKTRACE_DEPTH;
> -			stacktrace.entries = entries;
> -			stacktrace.nr_entries = 0;
> -			stacktrace.skip = HIST_STACKTRACE_SKIP;
> -
> -			memset(stacktrace.entries, 0, HIST_STACKTRACE_SIZE);
> -			save_stack_trace(&stacktrace);
> -
> +			memset(entries, 0, HIST_STACKTRACE_SIZE);
> +			stack_trace_save(entries, HIST_STACKTRACE_DEPTH,
> +					 HIST_STACKTRACE_SKIP);
>  			key = entries;
>  		} else {
>  			field_contents = key_field->fn(key_field, elt, rbe, rec);
> 



* Re: [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18  8:41 ` [patch V2 01/29] tracing: Cleanup stack trace code Thomas Gleixner
@ 2019-04-18 13:57   ` Josh Poimboeuf
  2019-04-18 21:14     ` Thomas Gleixner
  2019-04-18 22:19   ` Steven Rostedt
  1 sibling, 1 reply; 72+ messages in thread
From: Josh Poimboeuf @ 2019-04-18 13:57 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, Apr 18, 2019 at 10:41:20AM +0200, Thomas Gleixner wrote:
> - Remove the extra array member of stack_dump_trace[]. It's not required as
>   the stack tracer stores at most 'array size - 1' entries so there is still
>   an empty slot.

What is the empty slot used for?

-- 
Josh


* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-18  8:41 ` [patch V2 28/29] stacktrace: Provide common infrastructure Thomas Gleixner
  2019-04-18 11:52   ` Mike Rapoport
@ 2019-04-18 14:52   ` Josh Poimboeuf
  2019-04-18 15:42     ` Thomas Gleixner
  2019-04-19  7:18   ` Peter Zijlstra
  2 siblings, 1 reply; 72+ messages in thread
From: Josh Poimboeuf @ 2019-04-18 14:52 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	linux-arch, Alexey Dobriyan, Andrew Morton, Pekka Enberg,
	linux-mm, David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi

On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> All architectures which support stacktrace carry duplicated code and
> do the stack storage and filtering on the architecture side.
> 
> Provide a consolidated interface with a callback function for consuming the
> stack entries provided by the architecture specific stack walker. This
> removes lots of duplicated code and allows implementing better filtering
> than 'skip number of entries' in the future without touching any
> architecture specific code.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: linux-arch@vger.kernel.org

This is a step in the right direction, especially if it allows us to get
rid of the 'skip' stuff.  But I'm not crazy about the callbacks.

Another idea I had (but never got a chance to work on) was to extend the
x86 unwind interface to all arches.  So instead of the callbacks, each
arch would implement something like this API:


struct unwind_state state;

void unwind_start(struct unwind_state *state, struct task_struct *task,
		  struct pt_regs *regs, unsigned long *first_frame);

bool unwind_next_frame(struct unwind_state *state);

inline bool unwind_done(struct unwind_state *state);
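
A caller like stack_trace_save() could then open-code the walk over its
store/size arguments, roughly like this (sketch only;
unwind_get_return_address() is taken from the existing x86 interface):

	unsigned int nr = 0;
	struct unwind_state state;

	for (unwind_start(&state, current, NULL, NULL); !unwind_done(&state);
	     unwind_next_frame(&state)) {
		unsigned long addr = unwind_get_return_address(&state);

		if (!addr || nr >= size)
			break;
		store[nr++] = addr;
	}
	return nr;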


Not only would it avoid the callbacks (which is a nice benefit already),
it would also allow the unwinder to be used outside of the
stack_trace_*() interfaces.  That would come in handy in cases like the
ftrace stack tracer code, which needs more than the stack_trace_*() API
can give.
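
For illustration, the caller side would then be a plain loop along these
lines (just a sketch -- unwind_get_return_address() is assumed as the
accessor, like on x86 today, and 'store'/'max_entries' are whatever the
caller has at hand):

	struct unwind_state state;
	unsigned int nr = 0;

	for (unwind_start(&state, task, regs, NULL);
	     !unwind_done(&state) && nr < max_entries;
	     unwind_next_frame(&state)) {
		unsigned long addr = unwind_get_return_address(&state);

		if (!addr)
			break;
		store[nr++] = addr;
	}

No callbacks, and the caller keeps full control of the walk.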

Of course, this may be more work than what you thought you signed up for
;-)

-- 
Josh

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently
  2019-04-18  8:41 ` [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
@ 2019-04-18 14:53   ` Steven Rostedt
  2019-04-18 15:43     ` Thomas Gleixner
  0 siblings, 1 reply; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 14:53 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 10:41:40 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> The per cpu stack trace buffer usage pattern is odd at best. The buffer has
> room for 512 stack trace entries on 64-bit and 1024 on 32-bit. When
> interrupts or exceptions nest after the per cpu buffer was acquired the
> stacktrace length is hardcoded to 8 entries. 512/1024 stack trace entries
> in kernel stacks are unrealistic so the buffer is a complete waste.
> 
> Split the buffer into chunks of 64 stack entries which is plenty. This
> allows nesting contexts (interrupts, exceptions) to utilize the cpu buffer
> for stack retrieval and avoids the fixed length allocation along with the
> conditional execution paths.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/trace/trace.c |   77 +++++++++++++++++++++++++--------------------------
>  1 file changed, 39 insertions(+), 38 deletions(-)
> 
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -2749,12 +2749,21 @@ trace_function(struct trace_array *tr,
>  
>  #ifdef CONFIG_STACKTRACE
>  
> -#define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
> +/* 64 entries for kernel stacks are plenty */
> +#define FTRACE_KSTACK_ENTRIES	64
> +
>  struct ftrace_stack {
> -	unsigned long		calls[FTRACE_STACK_MAX_ENTRIES];
> +	unsigned long		calls[FTRACE_KSTACK_ENTRIES];
>  };
>  
> -static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
> +/* This allows 8 level nesting which is plenty */

Can we make this 4 level nesting and increase the size? (I can see us
going more than 64 deep, kernel developers never cease to amaze me ;-)
That's all we need:

 Context: Normal, softirq, irq, NMI

Is there any other way to nest?

-- Steve

> +#define FTRACE_KSTACK_NESTING	(PAGE_SIZE / sizeof(struct ftrace_stack))
> +
> +struct ftrace_stacks {
> +	struct ftrace_stack	stacks[FTRACE_KSTACK_NESTING];
> +};
> +
> +static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
>  static DEFINE_PER_CPU(int, ftrace_stack_reserve);
>  
> 

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 08/29] mm/kmemleak: Simplify stacktrace handling
  2019-04-18  8:41 ` [patch V2 08/29] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
@ 2019-04-18 15:17   ` Catalin Marinas
  0 siblings, 0 replies; 72+ messages in thread
From: Catalin Marinas @ 2019-04-18 15:17 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-mm, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, David Rientjes, Christoph Lameter, Dmitry Vyukov,
	Andrey Ryabinin, kasan-dev, Mike Rapoport, Akinobu Mita, iommu,
	Robin Murphy, Christoph Hellwig, Marek Szyprowski,
	Johannes Thumshirn, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs, dm-devel, Mike Snitzer, Alasdair Kergon, intel-gfx,
	Joonas Lahtinen, Maarten Lankhorst, dri-devel, David Airlie,
	Jani Nikula, Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, Apr 18, 2019 at 10:41:27AM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: linux-mm@kvack.org

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-18 14:52   ` Josh Poimboeuf
@ 2019-04-18 15:42     ` Thomas Gleixner
  2019-04-19  7:02       ` Peter Zijlstra
  0 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 15:42 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	linux-arch, Alexey Dobriyan, Andrew Morton, Pekka Enberg,
	linux-mm, David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi

On Thu, 18 Apr 2019, Josh Poimboeuf wrote:

> On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> > All architectures which support stacktrace carry duplicated code and
> > do the stack storage and filtering at the architecture side.
> > 
> > Provide a consolidated interface with a callback function for consuming the
> > stack entries provided by the architecture specific stack walker. This
> > removes lots of duplicated code and allows implementing better filtering
> > than 'skip number of entries' in the future without touching any
> > architecture specific code.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: linux-arch@vger.kernel.org
> 
> This is a step in the right direction, especially if it allows us to get
> rid of the 'skip' stuff.  But I'm not crazy about the callbacks.
> 
> Another idea I had (but never got a chance to work on) was to extend the
> x86 unwind interface to all arches.  So instead of the callbacks, each
> arch would implement something like this API:
> 
> 
> struct unwind_state state;
> 
> void unwind_start(struct unwind_state *state, struct task_struct *task,
> 		  struct pt_regs *regs, unsigned long *first_frame);
> 
> bool unwind_next_frame(struct unwind_state *state);
> 
> inline bool unwind_done(struct unwind_state *state);
> 
> 
> Not only would it avoid the callbacks (which is a nice benefit already),
> it would also allow the interfaces to be used outside of the
> stack_trace_*() interfaces.  That would come in handy in cases like the
> ftrace stack tracer code, which needs more than the stack_trace_*() API
> can give.

I surely thought about that, but after staring at all incarnations of
arch/*/stacktrace.c I just gave up.

Aside from that, quite a few archs already have callback based unwinders
because they use them for more than stacktracing and just have a single
implementation of that loop.

I'm fine either way. We can start with x86 and then let archs convert over
their stuff, but I wouldn't hold my breath that this will be completed in
the foreseeable future.

> Of course, this may be more work than what you thought you signed up for
> ;-)

I did not sign up for anything. I tripped over that mess by accident and me
being me hated it strongly enough to give it at least an initial steam blast.

Thanks,

	tglx
	

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently
  2019-04-18 14:53   ` Steven Rostedt
@ 2019-04-18 15:43     ` Thomas Gleixner
  2019-04-18 15:46       ` Steven Rostedt
  0 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 15:43 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019, Steven Rostedt wrote:
> On Thu, 18 Apr 2019 10:41:40 +0200
> Thomas Gleixner <tglx@linutronix.de> wrote:
> > -static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
> > +/* This allows 8 level nesting which is plenty */
> 
> Can we make this 4 level nesting and increase the size? (I can see us
> going more than 64 deep, kernel developers never cease to amaze me ;-)
> That's all we need:
> 
>  Context: Normal, softirq, irq, NMI
> 
> Is there any other way to nest?

Not that I know, but you are the tracer dude :)

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently
  2019-04-18 15:43     ` Thomas Gleixner
@ 2019-04-18 15:46       ` Steven Rostedt
  0 siblings, 0 replies; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 15:46 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 17:43:59 +0200 (CEST)
Thomas Gleixner <tglx@linutronix.de> wrote:

> On Thu, 18 Apr 2019, Steven Rostedt wrote:
> > On Thu, 18 Apr 2019 10:41:40 +0200
> > Thomas Gleixner <tglx@linutronix.de> wrote:  
> > > -static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
> > > +/* This allows 8 level nesting which is plenty */  
> > 
> > Can we make this 4 level nesting and increase the size? (I can see us
> > going more than 64 deep, kernel developers never cease to amaze me ;-)
> > That's all we need:
> > 
> >  Context: Normal, softirq, irq, NMI
> > 
> > Is there any other way to nest?  
> 
> Not that I know, but you are the tracer dude :)
>

There are other places where I only test 4 deep, so it should be fine to limit
it to 4 then.
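
Something like the below would keep it at one page per CPU and quadruple
the per-context depth to 128 entries on 64-bit (just a sketch on top of
your patch, not tested):

	/* Normal, softirq, irq, NMI */
	#define FTRACE_KSTACK_NESTING	4

	#define FTRACE_KSTACK_ENTRIES	\
		(PAGE_SIZE / FTRACE_KSTACK_NESTING / sizeof(unsigned long))

	struct ftrace_stack {
		unsigned long		calls[FTRACE_KSTACK_ENTRIES];
	};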

Thanks!

-- Steve

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms
  2019-04-18 13:40   ` Steven Rostedt
@ 2019-04-18 19:58     ` Tom Zanussi
  2019-04-18 20:13       ` Steven Rostedt
  0 siblings, 1 reply; 72+ messages in thread
From: Tom Zanussi @ 2019-04-18 19:58 UTC (permalink / raw)
  To: Steven Rostedt, Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 2019-04-18 at 09:40 -0400, Steven Rostedt wrote:
> [ Added Tom Zanussi ]
> 
> On Thu, 18 Apr 2019 10:41:39 +0200
> Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> > The indirection through struct stack_trace is not necessary at all.
> > Use the
> > storage array based interface.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> 
> Looks fine to me
> 
> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> 
>  But...
> 
> Tom,
> 
> Can you review this too?

Looks good to me too!

Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com>


> 
> Patch series starts here:
> 
>   http://lkml.kernel.org/r/20190418084119.056416939@linutronix.de
> 
> Thanks,
> 
> -- Steve
> 
> > ---
> >  kernel/trace/trace_events_hist.c |   12 +++---------
> >  1 file changed, 3 insertions(+), 9 deletions(-)
> > 
> > --- a/kernel/trace/trace_events_hist.c
> > +++ b/kernel/trace/trace_events_hist.c
> > @@ -5186,7 +5186,6 @@ static void event_hist_trigger(struct ev
> >  	u64 var_ref_vals[TRACING_MAP_VARS_MAX];
> >  	char compound_key[HIST_KEY_SIZE_MAX];
> >  	struct tracing_map_elt *elt = NULL;
> > -	struct stack_trace stacktrace;
> >  	struct hist_field *key_field;
> >  	u64 field_contents;
> >  	void *key = NULL;
> > @@ -5198,14 +5197,9 @@ static void event_hist_trigger(struct ev
> >  		key_field = hist_data->fields[i];
> >  
> >  		if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
> > -			stacktrace.max_entries = HIST_STACKTRACE_DEPTH;
> > -			stacktrace.entries = entries;
> > -			stacktrace.nr_entries = 0;
> > -			stacktrace.skip = HIST_STACKTRACE_SKIP;
> > -
> > -			memset(stacktrace.entries, 0,
> > HIST_STACKTRACE_SIZE);
> > -			save_stack_trace(&stacktrace);
> > -
> > +			memset(entries, 0, HIST_STACKTRACE_SIZE);
> > +			stack_trace_save(entries,
> > HIST_STACKTRACE_DEPTH,
> > +					 HIST_STACKTRACE_SKIP);
> >  			key = entries;
> >  		} else {
> >  			field_contents = key_field->fn(key_field, elt,
> > rbe, rec);
> > 
> 
> 


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms
  2019-04-18 19:58     ` Tom Zanussi
@ 2019-04-18 20:13       ` Steven Rostedt
  2019-04-18 20:22         ` Tom Zanussi
  0 siblings, 1 reply; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 20:13 UTC (permalink / raw)
  To: Tom Zanussi
  Cc: Thomas Gleixner, LKML, Josh Poimboeuf, x86, Andy Lutomirski,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 14:58:55 -0500
Tom Zanussi <tom.zanussi@linux.intel.com> wrote:

> > Tom,
> > 
> > Can you review this too?  
> 
> Looks good to me too!
> 
> Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com>
> 

Would you be OK to upgrade this to a Reviewed-by tag?

Thanks!

-- Steve

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms
  2019-04-18 20:13       ` Steven Rostedt
@ 2019-04-18 20:22         ` Tom Zanussi
  0 siblings, 0 replies; 72+ messages in thread
From: Tom Zanussi @ 2019-04-18 20:22 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Thomas Gleixner, LKML, Josh Poimboeuf, x86, Andy Lutomirski,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

Hi Steve,

On Thu, 2019-04-18 at 16:13 -0400, Steven Rostedt wrote:
> On Thu, 18 Apr 2019 14:58:55 -0500
> Tom Zanussi <tom.zanussi@linux.intel.com> wrote:
> 
> > > Tom,
> > > 
> > > Can you review this too?  
> > 
> > Looks good to me too!
> > 
> > Acked-by: Tom Zanussi <tom.zanussi@linux.intel.com>
> > 
> 
> Would you be OK to upgrade this to a Reviewed-by tag?
> 

Yeah, I did review it and even tested it, so:

Reviewed-by: Tom Zanussi <tom.zanussi@linux.intel.com>
Tested-by: Tom Zanussi <tom.zanussi@linux.intel.com>

Tom


> Thanks!
> 
> -- Steve


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18 13:57   ` Josh Poimboeuf
@ 2019-04-18 21:14     ` Thomas Gleixner
  2019-04-18 21:24       ` Steven Rostedt
  0 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 21:14 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019, Josh Poimboeuf wrote:

> On Thu, Apr 18, 2019 at 10:41:20AM +0200, Thomas Gleixner wrote:
> > - Remove the extra array member of stack_dump_trace[]. It's not required as
> >   the stack tracer stores at most array size - 1 entries so there is still
> >   an empty slot.
> 
> What is the empty slot used for?

I was trying to find an answer but failed. Maybe it's just historical
leftovers or Steven knows where the magic is in this maze.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18 21:14     ` Thomas Gleixner
@ 2019-04-18 21:24       ` Steven Rostedt
  2019-04-18 21:50         ` Steven Rostedt
  0 siblings, 1 reply; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 21:24 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Josh Poimboeuf, LKML, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 23:14:45 +0200 (CEST)
Thomas Gleixner <tglx@linutronix.de> wrote:

> On Thu, 18 Apr 2019, Josh Poimboeuf wrote:
> 
> > On Thu, Apr 18, 2019 at 10:41:20AM +0200, Thomas Gleixner wrote:  
> > > - Remove the extra array member of stack_dump_trace[]. It's not required as
> > >   the stack tracer stores at most array size - 1 entries so there is still  
> > >   an empty slot.  
> > 
> > What is the empty slot used for?  
> 
> I was trying to find an answer but failed. Maybe it's just historical
> leftovers or Steven knows where the magic is in this maze.
>

I believe it was for historical leftovers (there was a time it was
required), and left there for "paranoia's" sake. But let me apply the
patch and see if it is really needed.

-- Steve

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18 21:24       ` Steven Rostedt
@ 2019-04-18 21:50         ` Steven Rostedt
  0 siblings, 0 replies; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 21:50 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Josh Poimboeuf, LKML, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 17:24:43 -0400
Steven Rostedt <rostedt@goodmis.org> wrote:

> I believe it was for historical leftovers (there was a time it was
> required), and left there for "paranoid" sake. But let me apply the
> patch and see if it is really needed.

I removed the +1 on the max_entries and set STACK_TRACE_ENTRIES to 5 (a
bit extreme). Then I ran the stack tracing with KASAN enabled and it
never complained.

As stated, it was there for historical reasons and I felt 500 was way
more than enough and left the buffer there just out of laziness and
paranoia.
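
IOW the spare slot can go away completely; roughly (a sketch, modulo the
exact names in kernel/trace/trace_stack.c after your patch):

	-	max_entries = STACK_TRACE_ENTRIES - 1;
	+	max_entries = STACK_TRACE_ENTRIES;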

Feel free to remove that if you want.

-- Steve

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18  8:41 ` [patch V2 01/29] tracing: Cleanup stack trace code Thomas Gleixner
  2019-04-18 13:57   ` Josh Poimboeuf
@ 2019-04-18 22:19   ` Steven Rostedt
  2019-04-18 22:44     ` Thomas Gleixner
  1 sibling, 1 reply; 72+ messages in thread
From: Steven Rostedt @ 2019-04-18 22:19 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 10:41:20 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:


> @@ -412,23 +404,20 @@ stack_trace_sysctl(struct ctl_table *tab
>  		   void __user *buffer, size_t *lenp,
>  		   loff_t *ppos)
>  {
> -	int ret;
> +	int ret, was_enabled;

One small nit. Could this be:

	int was_enabled;
	int ret;

I prefer only joining variables that are related on the same line.
Makes it look cleaner IMO.

>  
>  	mutex_lock(&stack_sysctl_mutex);
> +	was_enabled = !!stack_tracer_enabled;
>  

Bah, not sure why I didn't do it this way to begin with. I think I
copied something else that couldn't do it this way for some reason and
didn't put any brain power behind the copy. :-/ But that was back in
2008 so I blame it on being "young and stupid" ;-)

Other than the above nit and removing the unneeded +1 in max_entries:

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve


>  	ret = proc_dointvec(table, write, buffer, lenp, ppos);
>  
> -	if (ret || !write ||
> -	    (last_stack_tracer_enabled == !!stack_tracer_enabled))
> +	if (ret || !write || (was_enabled == !!stack_tracer_enabled))
>  		goto out;
>  
> -	last_stack_tracer_enabled = !!stack_tracer_enabled;
> -
>  	if (stack_tracer_enabled)
>  		register_ftrace_function(&trace_ops);
>  	else
>  		unregister_ftrace_function(&trace_ops);
> -
>   out:
>  	mutex_unlock(&stack_sysctl_mutex);
>  	return ret;
> @@ -444,7 +433,6 @@ static __init int enable_stacktrace(char
>  		strncpy(stack_trace_filter_buf, str + len, COMMAND_LINE_SIZE);
>  
>  	stack_tracer_enabled = 1;
> -	last_stack_tracer_enabled = 1;
>  	return 1;
>  }
>  __setup("stacktrace", enable_stacktrace);
> 


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18 22:19   ` Steven Rostedt
@ 2019-04-18 22:44     ` Thomas Gleixner
  2019-04-19  0:39       ` Steven Rostedt
  0 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-18 22:44 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019, Steven Rostedt wrote:
> On Thu, 18 Apr 2019 10:41:20 +0200
> Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> > @@ -412,23 +404,20 @@ stack_trace_sysctl(struct ctl_table *tab
> >  		   void __user *buffer, size_t *lenp,
> >  		   loff_t *ppos)
> >  {
> > -	int ret;
> > +	int ret, was_enabled;
> 
> One small nit. Could this be:
> 
> 	int was_enabled;
> 	int ret;
> 
> I prefer only joining variables that are related on the same line.
> Makes it look cleaner IMO.

If you wish. To me it's a waste of screen space :)

> >  
> >  	mutex_lock(&stack_sysctl_mutex);
> > +	was_enabled = !!stack_tracer_enabled;
> >  
> 
> Bah, not sure why I didn't do it this way to begin with. I think I
> copied something else that couldn't do it this way for some reason and
> didn't put any brain power behind the copy. :-/ But that was back in
> 2008 so I blame it on being "young and stupid" ;-)

The young part is gone for sure :)

> Other than the above nit and removing the unneeded +1 in max_entries:

s/+1/-1/

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 01/29] tracing: Cleanup stack trace code
  2019-04-18 22:44     ` Thomas Gleixner
@ 2019-04-19  0:39       ` Steven Rostedt
  0 siblings, 0 replies; 72+ messages in thread
From: Steven Rostedt @ 2019-04-19  0:39 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Fri, 19 Apr 2019 00:44:17 +0200 (CEST)
Thomas Gleixner <tglx@linutronix.de> wrote:

> On Thu, 18 Apr 2019, Steven Rostedt wrote:
> > On Thu, 18 Apr 2019 10:41:20 +0200
> > Thomas Gleixner <tglx@linutronix.de> wrote:
> >   
> > > @@ -412,23 +404,20 @@ stack_trace_sysctl(struct ctl_table *tab
> > >  		   void __user *buffer, size_t *lenp,
> > >  		   loff_t *ppos)
> > >  {
> > > -	int ret;
> > > +	int ret, was_enabled;  
> > 
> > One small nit. Could this be:
> > 
> > 	int was_enabled;
> > 	int ret;
> > 
> > I prefer only joining variables that are related on the same line.
> > Makes it look cleaner IMO.  
> 
> If you wish. To me it's a waste of screen space :)

At least you didn't say it helps the compiler ;-)

> 
> > >  
> > >  	mutex_lock(&stack_sysctl_mutex);
> > > +	was_enabled = !!stack_tracer_enabled;
> > >    
> > 
> > Bah, not sure why I didn't do it this way to begin with. I think I
> > copied something else that couldn't do it this way for some reason and
> > didn't put any brain power behind the copy. :-/ But that was back in
> > 2008 so I blame it on being "young and stupid" ;-)  
> 
> The young part is gone for sure :)

I purposely set you up for that response.

> 
> > Other than the above nit and removing the unneeded +1 in max_entries:  
> 
> s/+1/-1/

That was an ode to G+

-- Steve

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-18 15:42     ` Thomas Gleixner
@ 2019-04-19  7:02       ` Peter Zijlstra
  2019-04-19 15:50         ` Josh Poimboeuf
  0 siblings, 1 reply; 72+ messages in thread
From: Peter Zijlstra @ 2019-04-19  7:02 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Josh Poimboeuf, LKML, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Thu, Apr 18, 2019 at 05:42:55PM +0200, Thomas Gleixner wrote:
> On Thu, 18 Apr 2019, Josh Poimboeuf wrote:

> > Another idea I had (but never got a chance to work on) was to extend the
> > x86 unwind interface to all arches.  So instead of the callbacks, each
> > arch would implement something like this API:

> I surely thought about that, but after staring at all incarnations of
> arch/*/stacktrace.c I just gave up.
> 
> Aside from that, quite a few archs already have callback based unwinders
> because they use them for more than stacktracing and just have a single
> implementation of that loop.
> 
> I'm fine either way. We can start with x86 and then let archs convert over
> their stuff, but I wouldn't hold my breath that this will be completed in
> the foreseeable future.

I suggested the same to Thomas early on, and I even spent the time to
convert some $random arch to the iterator interface, and while it is
indeed entirely feasible, it is _far_ more work.

The callback thing OTOH is flexible enough to do what we want to do now,
and allows converting most archs to it without too much pain (as Thomas
said, many archs are already in this form and only need minor API
adjustments), which gets us in a far better place than we are now.
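
FWIW, with the callback scheme the arch side boils down to a loop of this
shape (a sketch along the lines of the x86 conversion in this series):

	void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
			     struct task_struct *task, struct pt_regs *regs)
	{
		struct unwind_state state;
		unsigned long addr;

		for (unwind_start(&state, task, regs, NULL);
		     !unwind_done(&state); unwind_next_frame(&state)) {
			addr = unwind_get_return_address(&state);
			if (!addr || !consume_entry(cookie, addr, false))
				break;
		}
	}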

And we can always go to iterators later on. But I think getting the
generic unwinder improved across all archs is a really important first
step here.

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 12/29] dma/debug: Simplify stacktrace retrieval
  2019-04-18  8:41 ` [patch V2 12/29] dma/debug: Simplify stacktrace retrieval Thomas Gleixner
@ 2019-04-19  7:05   ` Christoph Hellwig
  0 siblings, 0 replies; 72+ messages in thread
From: Christoph Hellwig @ 2019-04-19  7:05 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Alexey Dobriyan, Andrew Morton, Pekka Enberg,
	linux-mm, David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

Please fix up the > 80 char line.  Otherwise:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-18  8:41 ` [patch V2 28/29] stacktrace: Provide common infrastructure Thomas Gleixner
  2019-04-18 11:52   ` Mike Rapoport
  2019-04-18 14:52   ` Josh Poimboeuf
@ 2019-04-19  7:18   ` Peter Zijlstra
  2019-04-19  8:32     ` Thomas Gleixner
  2 siblings, 1 reply; 72+ messages in thread
From: Peter Zijlstra @ 2019-04-19  7:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:

> +typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
> +                                      bool reliable);

> +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> +		     struct task_struct *task, struct pt_regs *regs);
> +int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> +			     struct task_struct *task);

This bugs me a little; ideally the _reliable() thing would not exists.

Thomas said that the existing __save_stack_trace_reliable() is different
enough for the unification to be non-trivial, but maybe Josh can help
out?

From what I can see the biggest significant differences are:

 - it looks at the regs sets on the stack and for FP bails early
 - bails for kthreads and idle (after it does all the hard work!?!)

The first (FP checking for exceptions) should probably be reflected in
consume_fn(.reliable) anyway -- although that would mean a lot of extra
'?' entries where there are none today.
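
On the consumer side that would look something like this (a sketch; the
cookie layout is made up for illustration):

	struct trace_cookie {
		unsigned long	*store;
		unsigned int	size;
		unsigned int	len;
	};

	static bool consume_entry_reliable(void *cookie, unsigned long addr,
					   bool reliable)
	{
		struct trace_cookie *c = cookie;

		if (!reliable)
			return false;	/* abort the walk right here */
		if (c->len >= c->size)
			return false;
		c->store[c->len++] = addr;
		return true;
	}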

And the second (KTHREAD/IDLE) is something that the generic code can
easily do before calling into the arch unwinder.

Hmm?


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-19  7:18   ` Peter Zijlstra
@ 2019-04-19  8:32     ` Thomas Gleixner
  2019-04-19  9:07       ` Peter Zijlstra
  0 siblings, 1 reply; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-19  8:32 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Fri, 19 Apr 2019, Peter Zijlstra wrote:
> On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> 
> > +typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
> > +                                      bool reliable);
> 
> > +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> > +		     struct task_struct *task, struct pt_regs *regs);
> > +int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> > +			     struct task_struct *task);
> 
> This bugs me a little; ideally the _reliable() thing would not exists.
> 
> Thomas said that the existing __save_stack_trace_reliable() is different
> enough for the unification to be non-trivial, but maybe Josh can help
> out?
> 
> From what I can see the biggest significant differences are:
> 
>  - it looks at the regs sets on the stack and for FP bails early
>  - bails for kthreads and idle (after it does all the hard work!?!)
> 
> The first (FP checking for exceptions) should probably be reflected in
> consume_fn(.reliable) anyway -- although that would mean a lot of extra
> '?' entries where there are none today.
> 
> And the second (KTHREAD/IDLE) is something that the generic code can
> easily do before calling into the arch unwinder.

And looking at the powerpc version of it, that has even more interesting
extra checks in that function.

Thanks,

	tglx

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-19  8:32     ` Thomas Gleixner
@ 2019-04-19  9:07       ` Peter Zijlstra
  2019-04-19 16:17         ` Josh Poimboeuf
  0 siblings, 1 reply; 72+ messages in thread
From: Peter Zijlstra @ 2019-04-19  9:07 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Fri, Apr 19, 2019 at 10:32:30AM +0200, Thomas Gleixner wrote:
> On Fri, 19 Apr 2019, Peter Zijlstra wrote:
> > On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> > 
> > > +typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
> > > +                                      bool reliable);
> > 
> > > +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> > > +		     struct task_struct *task, struct pt_regs *regs);
> > > +int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> > > +			     struct task_struct *task);
> > 
> > This bugs me a little; ideally the _reliable() thing would not exists.
> > 
> > Thomas said that the existing __save_stack_trace_reliable() is different
> > enough for the unification to be non-trivial, but maybe Josh can help
> > out?
> > 
> > From what I can see the biggest significant differences are:
> > 
> >  - it looks at the regs sets on the stack and for FP bails early
> >  - bails for kthreads and idle (after it does all the hard work!?!)
> > 
> > The first (FP checking for exceptions) should probably be reflected in
> > consume_fn(.reliable) anyway -- although that would mean a lot of extra
> > '?' entries where there are none today.
> > 
> > And the second (KTHREAD/IDLE) is something that the generic code can
> > easily do before calling into the arch unwinder.
> 
> And looking at the powerpc version of it, that has even more interesting
> extra checks in that function.

Right, but not fundamentally different from determining @reliable I
think.

Anyway, it would be good if someone knowledgable could have a look at
this.

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 22/29] tracing: Make ftrace_trace_userstack() static and conditional
  2019-04-18  8:41 ` [patch V2 22/29] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
@ 2019-04-19 13:28   ` Steven Rostedt
  0 siblings, 0 replies; 72+ messages in thread
From: Steven Rostedt @ 2019-04-19 13:28 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 10:41:41 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> It's only used in trace.c and there is absolutely no point in compiling it
> in when user space stack traces are not supported.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>

Funny, these were moved out to global functions along with the
ftrace_trace_stack() but I guess they were never used.

This basically just does a partial revert of:

 c0a0d0d3f6528 ("tracing/core: Make the stack entry helpers global")


> ---
>  kernel/trace/trace.c |   14 ++++++++------
>  kernel/trace/trace.h |    8 --------
>  2 files changed, 8 insertions(+), 14 deletions(-)
> 
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -159,6 +159,8 @@ static union trace_eval_map_item *trace_
>  #endif /* CONFIG_TRACE_EVAL_MAP_FILE */
>  
>  static int tracing_set_tracer(struct trace_array *tr, const char *buf);
> +static void ftrace_trace_userstack(struct ring_buffer *buffer,
> +				   unsigned long flags, int pc);
>  
>  #define MAX_TRACER_SIZE		100
>  static char bootup_tracer_buf[MAX_TRACER_SIZE] __initdata;
> @@ -2905,9 +2907,10 @@ void trace_dump_stack(int skip)
>  }
>  EXPORT_SYMBOL_GPL(trace_dump_stack);
>  
> +#ifdef CONFIG_USER_STACKTRACE_SUPPORT
>  static DEFINE_PER_CPU(int, user_stack_count);
>  
> -void
> +static void
>  ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
>  {
>  	struct trace_event_call *call = &event_user_stack;
> @@ -2958,13 +2961,12 @@ ftrace_trace_userstack(struct ring_buffe
>   out:
>  	preempt_enable();
>  }
> -
> -#ifdef UNUSED

Strange, I never knew about this ifdef. I would have nuked it when I
saw it.

Anyway,

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve


> -static void __trace_userstack(struct trace_array *tr, unsigned long flags)
> +#else /* CONFIG_USER_STACKTRACE_SUPPORT */
> +static void ftrace_trace_userstack(struct ring_buffer *buffer,
> +				   unsigned long flags, int pc)
>  {
> -	ftrace_trace_userstack(tr, flags, preempt_count());
>  }
> -#endif /* UNUSED */
> +#endif /* !CONFIG_USER_STACKTRACE_SUPPORT */
>  
>  #endif /* CONFIG_STACKTRACE */
>  
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -782,17 +782,9 @@ void update_max_tr_single(struct trace_a
>  #endif /* CONFIG_TRACER_MAX_TRACE */
>  
>  #ifdef CONFIG_STACKTRACE
> -void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
> -			    int pc);
> -
>  void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
>  		   int pc);
>  #else
> -static inline void ftrace_trace_userstack(struct ring_buffer *buffer,
> -					  unsigned long flags, int pc)
> -{
> -}
> -
>  static inline void __trace_stack(struct trace_array *tr, unsigned long flags,
>  				 int skip, int pc)
>  {
> 


^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-19  7:02       ` Peter Zijlstra
@ 2019-04-19 15:50         ` Josh Poimboeuf
  0 siblings, 0 replies; 72+ messages in thread
From: Josh Poimboeuf @ 2019-04-19 15:50 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, LKML, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Fri, Apr 19, 2019 at 09:02:11AM +0200, Peter Zijlstra wrote:
> On Thu, Apr 18, 2019 at 05:42:55PM +0200, Thomas Gleixner wrote:
> > On Thu, 18 Apr 2019, Josh Poimboeuf wrote:
> 
> > > Another idea I had (but never got a chance to work on) was to extend the
> > > x86 unwind interface to all arches.  So instead of the callbacks, each
> > > arch would implement something like this API:
> 
> > I surely thought about that, but after staring at all incarnations of
> > arch/*/stacktrace.c I just gave up.
> > 
> > Aside from that, quite a few archs already have callback based unwinders
> > because they use them for more than stacktracing and just have a single
> > implementation of that loop.
> > 
> > I'm fine either way. We can start with x86 and then let archs convert over
> > their stuff, but I wouldn't hold my breath that this will be completed in
> > the foreseeable future.
> 
> I suggested the same to Thomas early on, and I even spent the time to
> convert some $random arch to the iterator interface, and while it is
> indeed entirely feasible, it is _far_ more work.
> 
> The callback thing OTOH is flexible enough to do what we want to do now,
> and allows converting most archs to it without too much pain (as Thomas
> said, many archs are already in this form and only need minor API
> adjustments), which gets us in a far better place than we are now.
> 
> And we can always go to iterators later on. But I think getting the
> generic unwinder improved across all archs is a really important first
> step here.

Fair enough.

-- 
Josh

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 28/29] stacktrace: Provide common infrastructure
  2019-04-19  9:07       ` Peter Zijlstra
@ 2019-04-19 16:17         ` Josh Poimboeuf
  0 siblings, 0 replies; 72+ messages in thread
From: Josh Poimboeuf @ 2019-04-19 16:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, LKML, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-arch, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

On Fri, Apr 19, 2019 at 11:07:17AM +0200, Peter Zijlstra wrote:
> On Fri, Apr 19, 2019 at 10:32:30AM +0200, Thomas Gleixner wrote:
> > On Fri, 19 Apr 2019, Peter Zijlstra wrote:
> > > On Thu, Apr 18, 2019 at 10:41:47AM +0200, Thomas Gleixner wrote:
> > > 
> > > > +typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
> > > > +                                      bool reliable);
> > > 
> > > > +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> > > > +		     struct task_struct *task, struct pt_regs *regs);
> > > > +int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
> > > > +			     struct task_struct *task);
> > > 
> > > This bugs me a little; ideally the _reliable() thing would not exists.
> > > 
> > > Thomas said that the existing __save_stack_trace_reliable() is different
> > > enough for the unification to be non-trivial, but maybe Josh can help
> > > out?
> > > 
> > > From what I can see the biggest significant differences are:
> > > 
> > >  - it looks at the regs sets on the stack and for FP bails early
> > >  - bails for kthreads and idle (after it does all the hard work!?!)

That's done for a reason, see the "Success path" comments.

> > > The first (FP checking for exceptions) should probably be reflected in
> > > consume_fn(.reliable) anyway -- although that would mean a lot of extra
> > > '?' entries where there are none today.
> > > 
> > > And the second (KTHREAD/IDLE) is something that the generic code can
> > > easily do before calling into the arch unwinder.
> > 
> > And looking at the powerpc version of it, that has even more interesting
> > extra checks in that function.
> 
> Right, but not fundamentally different from determining @reliable I
> think.
> 
> Anyway, it would be good if someone knowledgable could have a look at
> this.

Yeah, we could probably do that.

The flow would need to be changed a bit -- some of the errors are soft
errors which most users don't care about because they just want a best
effort.  The soft errors can be remembered without breaking out of the
loop, and just returned at the end.  Most users could just ignore the
return code.
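
Roughly like this (a sketch; the soft_err field in the cookie is made up):

	static bool consume_entry(void *cookie, unsigned long addr, bool reliable)
	{
		struct trace_cookie *c = cookie;

		if (!reliable)
			c->soft_err = -EINVAL;	/* remember it, keep walking */
		if (c->len < c->size)
			c->store[c->len++] = addr;
		return true;
	}

and then the walker hands back c->soft_err at the end instead of aborting.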

The only thing I'd be worried about is performance for the non-livepatch
users, but I guess those checks don't look very expensive.  And the x86
unwinders are already pretty slow anyway...

-- 
Josh

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 23/29] tracing: Simplify stack trace retrieval
  2019-04-18  8:41 ` [patch V2 23/29] tracing: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-19 20:11   ` Steven Rostedt
  0 siblings, 0 replies; 72+ messages in thread
From: Steven Rostedt @ 2019-04-19 20:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 10:41:42 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 24/29] tracing: Remove the last struct stack_trace usage
  2019-04-18  8:41 ` [patch V2 24/29] tracing: Remove the last struct stack_trace usage Thomas Gleixner
@ 2019-04-19 20:11   ` Steven Rostedt
  0 siblings, 0 replies; 72+ messages in thread
From: Steven Rostedt @ 2019-04-19 20:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	Alexey Dobriyan, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter, Catalin Marinas,
	Dmitry Vyukov, Andrey Ryabinin, kasan-dev, Mike Rapoport,
	Akinobu Mita, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski, Johannes Thumshirn, David Sterba, Chris Mason,
	Josef Bacik, linux-btrfs, dm-devel, Mike Snitzer,
	Alasdair Kergon, intel-gfx, Joonas Lahtinen, Maarten Lankhorst,
	dri-devel, David Airlie, Jani Nikula, Daniel Vetter,
	Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019 10:41:43 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> Simplify the stack retrieval code by using the storage array based
> interface.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

-- Steve

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 16/29] drm: Simplify stacktrace handling
  2019-04-18  8:41 ` [patch V2 16/29] drm: Simplify stacktrace handling Thomas Gleixner
@ 2019-04-23  7:36   ` Daniel Vetter
  0 siblings, 0 replies; 72+ messages in thread
From: Daniel Vetter @ 2019-04-23  7:36 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, linux-arch

On Thu, Apr 18, 2019 at 10:41:35AM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> The original code in all printing functions is really wrong. It allocates a
> storage array on stack which is unused because depot_fetch_stack() does not
> store anything in it. It overwrites the entries pointer in the stack_trace
> struct so it points to the depot storage.

Thanks for cleaning this up for us!

> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: intel-gfx@lists.freedesktop.org
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
> Cc: dri-devel@lists.freedesktop.org
> Cc: David Airlie <airlied@linux.ie>
> Cc: Jani Nikula <jani.nikula@linux.intel.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

for merging through whatever tree is convenient for you (or tell me I
should pick it up into drm-next once the prep work has landed).

Cheers, Daniel

> ---
>  drivers/gpu/drm/drm_mm.c                |   22 +++++++---------------
>  drivers/gpu/drm/i915/i915_vma.c         |   11 ++++-------
>  drivers/gpu/drm/i915/intel_runtime_pm.c |   21 +++++++--------------
>  3 files changed, 18 insertions(+), 36 deletions(-)
> 
> --- a/drivers/gpu/drm/drm_mm.c
> +++ b/drivers/gpu/drm/drm_mm.c
> @@ -106,22 +106,19 @@
>  static noinline void save_stack(struct drm_mm_node *node)
>  {
>  	unsigned long entries[STACKDEPTH];
> -	struct stack_trace trace = {
> -		.entries = entries,
> -		.max_entries = STACKDEPTH,
> -		.skip = 1
> -	};
> +	unsigned int n;
>  
> -	save_stack_trace(&trace);
> +	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
>  
>  	/* May be called under spinlock, so avoid sleeping */
> -	node->stack = depot_save_stack(&trace, GFP_NOWAIT);
> +	node->stack = stack_depot_save(entries, n, GFP_NOWAIT);
>  }
>  
>  static void show_leaks(struct drm_mm *mm)
>  {
>  	struct drm_mm_node *node;
> -	unsigned long entries[STACKDEPTH];
> +	unsigned long *entries;
> +	unsigned int nr_entries;
>  	char *buf;
>  
>  	buf = kmalloc(BUFSZ, GFP_KERNEL);
> @@ -129,19 +126,14 @@ static void show_leaks(struct drm_mm *mm
>  		return;
>  
>  	list_for_each_entry(node, drm_mm_nodes(mm), node_list) {
> -		struct stack_trace trace = {
> -			.entries = entries,
> -			.max_entries = STACKDEPTH
> -		};
> -
>  		if (!node->stack) {
>  			DRM_ERROR("node [%08llx + %08llx]: unknown owner\n",
>  				  node->start, node->size);
>  			continue;
>  		}
>  
> -		depot_fetch_stack(node->stack, &trace);
> -		snprint_stack_trace(buf, BUFSZ, &trace, 0);
> +		nr_entries = stack_depot_fetch(node->stack, &entries);
> +		stack_trace_snprint(buf, BUFSZ, entries, nr_entries, 0);
>  		DRM_ERROR("node [%08llx + %08llx]: inserted at\n%s",
>  			  node->start, node->size, buf);
>  	}
> --- a/drivers/gpu/drm/i915/i915_vma.c
> +++ b/drivers/gpu/drm/i915/i915_vma.c
> @@ -36,11 +36,8 @@
>  
>  static void vma_print_allocator(struct i915_vma *vma, const char *reason)
>  {
> -	unsigned long entries[12];
> -	struct stack_trace trace = {
> -		.entries = entries,
> -		.max_entries = ARRAY_SIZE(entries),
> -	};
> +	unsigned long *entries;
> +	unsigned int nr_entries;
>  	char buf[512];
>  
>  	if (!vma->node.stack) {
> @@ -49,8 +46,8 @@ static void vma_print_allocator(struct i
>  		return;
>  	}
>  
> -	depot_fetch_stack(vma->node.stack, &trace);
> -	snprint_stack_trace(buf, sizeof(buf), &trace, 0);
> +	nr_entries = stack_depot_fetch(vma->node.stack, &entries);
> +	stack_trace_snprint(buf, sizeof(buf), entries, nr_entries, 0);
>  	DRM_DEBUG_DRIVER("vma.node [%08llx + %08llx] %s: inserted at %s\n",
>  			 vma->node.start, vma->node.size, reason, buf);
>  }
> --- a/drivers/gpu/drm/i915/intel_runtime_pm.c
> +++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
> @@ -60,27 +60,20 @@
>  static noinline depot_stack_handle_t __save_depot_stack(void)
>  {
>  	unsigned long entries[STACKDEPTH];
> -	struct stack_trace trace = {
> -		.entries = entries,
> -		.max_entries = ARRAY_SIZE(entries),
> -		.skip = 1,
> -	};
> +	unsigned int n;
>  
> -	save_stack_trace(&trace);
> -	return depot_save_stack(&trace, GFP_NOWAIT | __GFP_NOWARN);
> +	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
> +	return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN);
>  }
>  
>  static void __print_depot_stack(depot_stack_handle_t stack,
>  				char *buf, int sz, int indent)
>  {
> -	unsigned long entries[STACKDEPTH];
> -	struct stack_trace trace = {
> -		.entries = entries,
> -		.max_entries = ARRAY_SIZE(entries),
> -	};
> +	unsigned long *entries;
> +	unsigned int nr_entries;
>  
> -	depot_fetch_stack(stack, &trace);
> -	snprint_stack_trace(buf, sz, &trace, indent);
> +	nr_entries = stack_depot_fetch(stack, &entries);
> +	stack_trace_snprint(buf, sz, entries, nr_entries, indent);
>  }
>  
>  static void init_intel_runtime_pm_wakeref(struct drm_i915_private *i915)
> 
> 
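
For reference, the converted call pattern in these hunks reduces to the
following shape (a minimal sketch with illustrative helper names;
STACKDEPTH stands for whatever depth the driver defines, and GFP_NOWAIT
matches the call sites because capture may run under a spinlock):

#include <linux/stackdepot.h>
#include <linux/stacktrace.h>

static noinline depot_stack_handle_t capture_stack(void)
{
	unsigned long entries[STACKDEPTH];
	unsigned int n;

	/* skipnr == 1 keeps the capture helper itself out of the trace */
	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
	return stack_depot_save(entries, n, GFP_NOWAIT);
}

static void format_stack(depot_stack_handle_t handle, char *buf, int sz)
{
	unsigned long *entries;
	unsigned int nr_entries;

	/* The depot owns the storage; fetch hands back a pointer into it */
	nr_entries = stack_depot_fetch(handle, &entries);
	stack_trace_snprint(buf, sz, entries, nr_entries, 0);
}

Note the storage win on the fetch side: the on-stack entries[] array in
show_leaks() and __print_depot_stack() disappears entirely, because the
depot's own copy is used in place.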

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 25/29] livepatch: Simplify stack trace retrieval
  2019-04-18  8:41 ` [patch V2 25/29] livepatch: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-23  8:18   ` Miroslav Benes
  0 siblings, 0 replies; 72+ messages in thread
From: Miroslav Benes @ 2019-04-23  8:18 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, 18 Apr 2019, Thomas Gleixner wrote:

> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Acked-by: Miroslav Benes <mbenes@suse.cz>

Feel free to take it through tip, or let us know and we'll pick it up.


Miroslav

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 18/29] lockdep: Move stack trace logic into check_prev_add()
  2019-04-18  8:41 ` [patch V2 18/29] lockdep: Move stack trace logic into check_prev_add() Thomas Gleixner
@ 2019-04-24 19:45   ` Peter Zijlstra
  2019-04-24 19:51     ` Thomas Gleixner
  0 siblings, 1 reply; 72+ messages in thread
From: Peter Zijlstra @ 2019-04-24 19:45 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, Apr 18, 2019 at 10:41:37AM +0200, Thomas Gleixner wrote:
> There is only one caller of check_prev_add() which hands in a zeroed struct
> stack_trace and a function pointer to save_stack(). Inside check_prev_add()
> the stack_trace struct is checked for being empty, which is always
> true. Based on that one code path stores a stack trace which is unused. The
> comment there does not make sense either. It's all leftovers from
> historical lockdep code (cross release).

I was more or less expecting a revert of:

ce07a9415f26 ("locking/lockdep: Make check_prev_add() able to handle external stack_trace")

And then I read the comment that went with the "static struct
stack_trace trace" that got removed (in the above commit) and realized
that your patch will consume more stack entries.

The problem is when the held lock stack in check_prevs_add() has multiple
trylock entries on top; in that case we call check_prev_add() multiple
times, and this patch will then save the exact same stack trace multiple
times, consuming static resources.

Possibly we should copy what stackdepot does (but we cannot use it
directly because stackdepot uses locks; possibly we can share bits),
but that is a patch for another day, I think.

So while convoluted, perhaps we should retain this code for now.
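
To make the duplication concrete, the guard that would be needed is
roughly the following (purely illustrative, not part of the patch;
lockdep keeps its entries in a static pool, so a comparison against the
most recently saved trace would be enough to catch the back-to-back
duplicates described above):

/* Hypothetical helper: true if two saved traces are identical */
static bool traces_identical(const unsigned long *a, unsigned int na,
			     const unsigned long *b, unsigned int nb)
{
	return na == nb && !memcmp(a, b, na * sizeof(*a));
}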

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 19/29] lockdep: Simplify stack trace handling
  2019-04-18  8:41 ` [patch V2 19/29] lockdep: Simplify stack trace handling Thomas Gleixner
@ 2019-04-24 19:45   ` Peter Zijlstra
  0 siblings, 0 replies; 72+ messages in thread
From: Peter Zijlstra @ 2019-04-24 19:45 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Thu, Apr 18, 2019 at 10:41:38AM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces and storing the information in a small lockdep
> specific data structure.
> 
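
The "small lockdep specific data structure" boils down to something of
this shape (a sketch; field names assumed rather than quoted from the
patch): instead of carrying a full struct stack_trace, each recorded
trace keeps only its length and its offset into lockdep's static
entries pool.

struct lock_trace {
	unsigned int nr_entries;
	unsigned int offset;	/* index into the static entries[] pool */
};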

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

^ permalink raw reply	[flat|nested] 72+ messages in thread

* Re: [patch V2 18/29] lockdep: Move stack trace logic into check_prev_add()
  2019-04-24 19:45   ` Peter Zijlstra
@ 2019-04-24 19:51     ` Thomas Gleixner
  0 siblings, 0 replies; 72+ messages in thread
From: Thomas Gleixner @ 2019-04-24 19:51 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Alexey Dobriyan, Andrew Morton,
	Pekka Enberg, linux-mm, David Rientjes, Christoph Lameter,
	Catalin Marinas, Dmitry Vyukov, Andrey Ryabinin, kasan-dev,
	Mike Rapoport, Akinobu Mita, iommu, Robin Murphy,
	Christoph Hellwig, Marek Szyprowski, Johannes Thumshirn,
	David Sterba, Chris Mason, Josef Bacik, linux-btrfs, dm-devel,
	Mike Snitzer, Alasdair Kergon, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, linux-arch

On Wed, 24 Apr 2019, Peter Zijlstra wrote:
> On Thu, Apr 18, 2019 at 10:41:37AM +0200, Thomas Gleixner wrote:
> > There is only one caller of check_prev_add() which hands in a zeroed struct
> > stack trace and a function pointer to save_stack(). Inside check_prev_add()
> > the stack_trace struct is checked for being empty, which is always
> > true. Based on that one code path stores a stack trace which is unused. The
> > comment there does not make sense either. It's all leftovers from
> > historical lockdep code (cross release).
> 
> I was more or less expecting a revert of:
> 
> ce07a9415f26 ("locking/lockdep: Make check_prev_add() able to handle external stack_trace")
> 
> And then I read the comment that went with the "static struct
> stack_trace trace" that got removed (in the above commit) and realized
> that your patch will consume more stack entries.
> 
> The problem is when the held lock stack in check_prevs_add() has multiple
> trylock entries on top; in that case we call check_prev_add() multiple
> times, and this patch will then save the exact same stack trace multiple
> times, consuming static resources.
> 
> Possibly we should copy what stackdepot does (but we cannot use it
> directly because stackdepot uses locks; possibly we can share bits),
> but that is a patch for another day, I think.
> 
> So while convoluted, perhaps we should retain this code for now.

Uurg, what a mess.


^ permalink raw reply	[flat|nested] 72+ messages in thread

end of thread, other threads:[~2019-04-24 19:51 UTC | newest]

Thread overview: 72+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-18  8:41 [patch V2 00/29] stacktrace: Consolidate stack trace usage Thomas Gleixner
2019-04-18  8:41 ` [patch V2 01/29] tracing: Cleanup stack trace code Thomas Gleixner
2019-04-18 13:57   ` Josh Poimboeuf
2019-04-18 21:14     ` Thomas Gleixner
2019-04-18 21:24       ` Steven Rostedt
2019-04-18 21:50         ` Steven Rostedt
2019-04-18 22:19   ` Steven Rostedt
2019-04-18 22:44     ` Thomas Gleixner
2019-04-19  0:39       ` Steven Rostedt
2019-04-18  8:41 ` [patch V2 02/29] stacktrace: Provide helpers for common stack trace operations Thomas Gleixner
2019-04-18  8:41 ` [patch V2 03/29] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
2019-04-18 11:51   ` Mike Rapoport
2019-04-18 11:58     ` Thomas Gleixner
2019-04-18  8:41 ` [patch V2 04/29] backtrace-test: Simplify stack trace handling Thomas Gleixner
2019-04-18  8:41 ` [patch V2 05/29] proc: Simplify task stack retrieval Thomas Gleixner
2019-04-18  8:41 ` [patch V2 06/29] latency_top: Simplify stack trace handling Thomas Gleixner
2019-04-18  8:41 ` [patch V2 07/29] mm/slub: Simplify stack trace retrieval Thomas Gleixner
2019-04-18  8:41 ` [patch V2 08/29] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
2019-04-18 15:17   ` Catalin Marinas
2019-04-18  8:41 ` [patch V2 09/29] mm/kasan: " Thomas Gleixner
2019-04-18 10:39   ` Andrey Ryabinin
2019-04-18 11:53     ` Thomas Gleixner
2019-04-18  8:41 ` [patch V2 10/29] mm/page_owner: Simplify stack trace handling Thomas Gleixner
2019-04-18  8:41 ` [patch V2 11/29] fault-inject: Simplify stacktrace retrieval Thomas Gleixner
2019-04-18  8:41 ` [patch V2 12/29] dma/debug: Simplify stracktrace retrieval Thomas Gleixner
2019-04-19  7:05   ` Christoph Hellwig
2019-04-18  8:41 ` [patch V2 13/29] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
2019-04-18  8:41 ` [patch V2 14/29] dm bufio: " Thomas Gleixner
2019-04-18 10:44   ` Alexander Potapenko
2019-04-18 11:54     ` Thomas Gleixner
2019-04-18 12:11       ` Alexander Potapenko
2019-04-18 13:33         ` Steven Rostedt
2019-04-18  8:41 ` [patch V2 15/29] dm persistent data: Simplify stack trace handling Thomas Gleixner
2019-04-18  8:41 ` [patch V2 16/29] drm: Simplify stacktrace handling Thomas Gleixner
2019-04-23  7:36   ` Daniel Vetter
2019-04-18  8:41 ` [patch V2 17/29] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
2019-04-18  8:41 ` [patch V2 18/29] lockdep: Move stack trace logic into check_prev_add() Thomas Gleixner
2019-04-24 19:45   ` Peter Zijlstra
2019-04-24 19:51     ` Thomas Gleixner
2019-04-18  8:41 ` [patch V2 19/29] lockdep: Simplify stack trace handling Thomas Gleixner
2019-04-24 19:45   ` Peter Zijlstra
2019-04-18  8:41 ` [patch V2 20/29] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
2019-04-18 13:40   ` Steven Rostedt
2019-04-18 19:58     ` Tom Zanussi
2019-04-18 20:13       ` Steven Rostedt
2019-04-18 20:22         ` Tom Zanussi
2019-04-18  8:41 ` [patch V2 21/29] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
2019-04-18 14:53   ` Steven Rostedt
2019-04-18 15:43     ` Thomas Gleixner
2019-04-18 15:46       ` Steven Rostedt
2019-04-18  8:41 ` [patch V2 22/29] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
2019-04-19 13:28   ` Steven Rostedt
2019-04-18  8:41 ` [patch V2 23/29] tracing: Simplify stack trace retrieval Thomas Gleixner
2019-04-19 20:11   ` Steven Rostedt
2019-04-18  8:41 ` [patch V2 24/29] tracing: Remove the last struct stack_trace usage Thomas Gleixner
2019-04-19 20:11   ` Steven Rostedt
2019-04-18  8:41 ` [patch V2 25/29] livepatch: Simplify stack trace retrieval Thomas Gleixner
2019-04-23  8:18   ` Miroslav Benes
2019-04-18  8:41 ` [patch V2 26/29] stacktrace: Remove obsolete functions Thomas Gleixner
2019-04-18  8:41 ` [patch V2 27/29] lib/stackdepot: " Thomas Gleixner
2019-04-18  8:41 ` [patch V2 28/29] stacktrace: Provide common infrastructure Thomas Gleixner
2019-04-18 11:52   ` Mike Rapoport
2019-04-18 11:57     ` Thomas Gleixner
2019-04-18 14:52   ` Josh Poimboeuf
2019-04-18 15:42     ` Thomas Gleixner
2019-04-19  7:02       ` Peter Zijlstra
2019-04-19 15:50         ` Josh Poimboeuf
2019-04-19  7:18   ` Peter Zijlstra
2019-04-19  8:32     ` Thomas Gleixner
2019-04-19  9:07       ` Peter Zijlstra
2019-04-19 16:17         ` Josh Poimboeuf
2019-04-18  8:41 ` [patch V2 29/29] x86/stacktrace: Use " Thomas Gleixner
