All of lore.kernel.org
* [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace
@ 2019-04-10 10:27 Thomas Gleixner
  2019-04-10 10:27 ` [RFC patch 01/41] um/stacktrace: Remove the pointless ULONG_MAX marker Thomas Gleixner
                   ` (41 more replies)
  0 siblings, 42 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:27 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

Struct stack_trace is a sinkhole for input and output parameters which is
largely pointless for most usage sites. In fact if embedded into other data
structures it creates indirections and extra storage overhead for no benefit.

Looking at all usage sites makes it clear that they just require an
interface which is based on a storage array. That array is either on stack,
global or embedded into some other data structure.

Some of the stack depot usage sites are outright wrong, but fortunately the
wrongness just causes more stack space to be used for nothing and has no
functional impact.

Another oddity is the inconsistent termination of the stack trace with
ULONG_MAX. It's pointless as the number of entries is what determines the
length of the stored trace. In fact quite a few call sites remove the
ULONG_MAX marker afterwards, with or without nasty comments about it. Not
all architectures add the marker, and those which do, do it inconsistently:
either conditional on nr_entries == 0 or unconditionally.
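In code, the pattern being removed looks roughly like this (a freestanding
userspace sketch, not the kernel sources; the struct and the fake addresses
are illustrative stand-ins):

```c
#include <assert.h>
#include <limits.h>

/* Illustrative stand-in for the kernel's struct stack_trace. */
struct stack_trace {
	unsigned int nr_entries, max_entries;
	unsigned long *entries;
};

/* Pattern in several arch save_stack_trace() implementations:
 * append ULONG_MAX as a terminator if there is room left. */
static void arch_save_trace(struct stack_trace *trace)
{
	/* pretend the unwinder recorded two return addresses */
	trace->entries[trace->nr_entries++] = 0x1000UL;
	trace->entries[trace->nr_entries++] = 0x2000UL;
	if (trace->nr_entries < trace->max_entries)
		trace->entries[trace->nr_entries++] = ULONG_MAX;
}

/* Fixup which call sites such as lockdep then had to apply. */
static void strip_marker(struct stack_trace *trace)
{
	if (trace->nr_entries != 0 &&
	    trace->entries[trace->nr_entries - 1] == ULONG_MAX)
		trace->nr_entries--;
}
```

The series deletes both halves of this dance: the architectures stop writing
the marker and the call sites stop stripping it, leaving nr_entries alone to
define the trace length.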

The following series cleans that up by:

    1) Removing the ULONG_MAX termination in the architecture code

    2) Removing the ULONG_MAX fixups at the call sites

    3) Providing plain storage array based interfaces for stacktrace and
       stackdepot.

    4) Cleaning up the mess at the callsites including some related
       cleanups.

    5) Removing the struct stack_trace based interfaces
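For item 3, the array-based replacement takes the storage array and its size
directly and returns the number of stored entries. The mock below only shows
the calling convention under that assumption (the real function walks the
kernel stack; the frame addresses here are fabricated):

```c
#include <assert.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Mock with the shape of an array-based save interface: store up to
 * @size entries into @store, skipping the first @skipnr frames, and
 * return the number of entries actually stored. */
static unsigned int stack_trace_save(unsigned long *store, unsigned int size,
				     unsigned int skipnr)
{
	/* stand-in return addresses; the real code unwinds the stack */
	static const unsigned long frames[] = { 0xa1, 0xa2, 0xa3, 0xa4 };
	unsigned int i, nr = 0;

	for (i = skipnr; i < ARRAY_SIZE(frames) && nr < size; i++)
		store[nr++] = frames[i];
	return nr;	/* no struct, no terminator: the count is the length */
}
```

A caller just declares a local `unsigned long entries[N];` and uses the
return value as the trace length, which is exactly what makes struct
stack_trace unnecessary at the call sites.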

This series does not change the struct stack_trace interfaces at the
architecture level, but it removes their exposure to the generic code.

It's only lightly tested as I'm traveling and access to my test boxes is
limited.

Thanks,

	tglx

8<-----------------
 arch/um/kernel/stacktrace.c                   |    2 
 b/arch/arm/kernel/stacktrace.c                |    6 -
 b/arch/arm64/kernel/stacktrace.c              |    4 
 b/arch/parisc/kernel/stacktrace.c             |    5 -
 b/arch/riscv/kernel/stacktrace.c              |    2 
 b/arch/s390/kernel/stacktrace.c               |    6 -
 b/arch/sh/kernel/stacktrace.c                 |    4 
 b/arch/unicore32/kernel/stacktrace.c          |    2 
 b/arch/x86/kernel/stacktrace.c                |   14 --
 drivers/gpu/drm/drm_mm.c                      |   27 +----
 drivers/gpu/drm/i915/i915_vma.c               |   11 --
 drivers/gpu/drm/i915/intel_runtime_pm.c       |   25 +----
 drivers/md/dm-bufio.c                         |   15 +--
 drivers/md/persistent-data/dm-block-manager.c |   19 +--
 fs/btrfs/ref-verify.c                         |   15 ---
 fs/proc/base.c                                |   18 +--
 include/linux/ftrace.h                        |    1 
 include/linux/lockdep.h                       |    9 +
 include/linux/stackdepot.h                    |    8 -
 include/linux/stacktrace.h                    |   40 ++++----
 kernel/backtracetest.c                        |   11 --
 kernel/dma/debug.c                            |   13 +-
 kernel/latencytop.c                           |   29 +----
 kernel/locking/lockdep.c                      |   87 ++++++-----------
 kernel/stacktrace.c                           |  127 ++++++++++++++++++++++----
 kernel/trace/trace.c                          |  103 +++++++++------------
 kernel/trace/trace.h                          |    8 -
 kernel/trace/trace_events_hist.c              |   14 --
 kernel/trace/trace_stack.c                    |   24 +---
 lib/fault-inject.c                            |   12 --
 lib/stackdepot.c                              |   50 +++++-----
 mm/kasan/common.c                             |   33 ++----
 mm/kasan/report.c                             |    7 -
 mm/kmemleak.c                                 |   24 ----
 mm/page_owner.c                               |   82 +++++-----------
 mm/slub.c                                     |   21 +---
 36 files changed, 375 insertions(+), 503 deletions(-)




^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 01/41] um/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
@ 2019-04-10 10:27 ` Thomas Gleixner
  2019-04-14 20:34   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:27 ` [RFC patch 02/41] x86/stacktrace: " Thomas Gleixner
                   ` (40 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:27 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Richard Weinberger, linux-um

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Richard Weinberger <richard@nod.at>
Cc: linux-um@lists.infradead.org
---
 arch/um/kernel/stacktrace.c |    2 --
 1 file changed, 2 deletions(-)

--- a/arch/um/kernel/stacktrace.c
+++ b/arch/um/kernel/stacktrace.c
@@ -63,8 +63,6 @@ static const struct stacktrace_ops dump_
 static void __save_stack_trace(struct task_struct *tsk, struct stack_trace *trace)
 {
 	dump_trace(tsk, &dump_ops, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace(struct stack_trace *trace)




* [RFC patch 02/41] x86/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
  2019-04-10 10:27 ` [RFC patch 01/41] um/stacktrace: Remove the pointless ULONG_MAX marker Thomas Gleixner
@ 2019-04-10 10:27 ` Thomas Gleixner
  2019-04-14 20:34   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:27   ` Thomas Gleixner
                   ` (39 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:27 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/kernel/stacktrace.c |   14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -46,9 +46,6 @@ static void noinline __save_stack_trace(
 		if (!addr || save_stack_address(trace, addr, nosched))
 			break;
 	}
-
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 /*
@@ -97,7 +94,7 @@ static int __always_inline
 		if (regs) {
 			/* Success path for user tasks */
 			if (user_mode(regs))
-				goto success;
+				return 0;
 
 			/*
 			 * Kernel mode registers on the stack indicate an
@@ -132,10 +129,6 @@ static int __always_inline
 	if (!(task->flags & (PF_KTHREAD | PF_IDLE)))
 		return -EINVAL;
 
-success:
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
-
 	return 0;
 }
 
@@ -221,9 +214,6 @@ void save_stack_trace_user(struct stack_
 	/*
 	 * Trace user stack if we are not a kernel thread
 	 */
-	if (current->mm) {
+	if (current->mm)
 		__save_stack_trace_user(trace);
-	}
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }




* [RFC patch 03/41] arm/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
@ 2019-04-10 10:27   ` Thomas Gleixner
  2019-04-10 10:27 ` [RFC patch 02/41] x86/stacktrace: " Thomas Gleixner
                     ` (40 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:27 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Russell King, linux-arm-kernel

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm/kernel/stacktrace.c |    6 ------
 1 file changed, 6 deletions(-)

--- a/arch/arm/kernel/stacktrace.c
+++ b/arch/arm/kernel/stacktrace.c
@@ -115,8 +115,6 @@ static noinline void __save_stack_trace(
 		 * running on another CPU?  For now, ignore it as we
 		 * can't guarantee we won't explode.
 		 */
-		if (trace->nr_entries < trace->max_entries)
-			trace->entries[trace->nr_entries++] = ULONG_MAX;
 		return;
 #else
 		frame.fp = thread_saved_fp(tsk);
@@ -134,8 +132,6 @@ static noinline void __save_stack_trace(
 	}
 
 	walk_stackframe(&frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
@@ -153,8 +149,6 @@ void save_stack_trace_regs(struct pt_reg
 	frame.pc = regs->ARM_pc;
 
 	walk_stackframe(&frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)




* [RFC patch 04/41] sh/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
@ 2019-04-10 10:27   ` Thomas Gleixner
  2019-04-10 10:27 ` [RFC patch 02/41] x86/stacktrace: " Thomas Gleixner
                     ` (40 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:27 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Rich Felker, Yoshinori Sato,
	Kuninori Morimoto, linux-sh, Simon Horman

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Rich Felker <dalias@libc.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Cc: linux-sh@vger.kernel.org
Cc: Simon Horman <horms+renesas@verge.net.au>
---
 arch/sh/kernel/stacktrace.c |    4 ----
 1 file changed, 4 deletions(-)

--- a/arch/sh/kernel/stacktrace.c
+++ b/arch/sh/kernel/stacktrace.c
@@ -49,8 +49,6 @@ void save_stack_trace(struct stack_trace
 	unsigned long *sp = (unsigned long *)current_stack_pointer;
 
 	unwind_stack(current, NULL, sp,  &save_stack_ops, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
@@ -84,7 +82,5 @@ void save_stack_trace_tsk(struct task_st
 	unsigned long *sp = (unsigned long *)tsk->thread.sp;
 
 	unwind_stack(current, NULL, sp,  &save_stack_ops_nosched, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);


* [RFC patch 05/41] unicore32/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (3 preceding siblings ...)
  2019-04-10 10:27   ` Thomas Gleixner
@ 2019-04-10 10:27 ` Thomas Gleixner
  2019-04-14 20:36   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28   ` Thomas Gleixner
                   ` (36 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:27 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Guan Xuetao

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Guan Xuetao <gxt@pku.edu.cn>
---
 arch/unicore32/kernel/stacktrace.c |    2 --
 1 file changed, 2 deletions(-)

--- a/arch/unicore32/kernel/stacktrace.c
+++ b/arch/unicore32/kernel/stacktrace.c
@@ -120,8 +120,6 @@ void save_stack_trace_tsk(struct task_st
 	}
 
 	walk_stackframe(&frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace(struct stack_trace *trace)




* [RFC patch 06/41] riscv/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
@ 2019-04-10 10:28   ` Thomas Gleixner
  2019-04-10 10:27 ` [RFC patch 02/41] x86/stacktrace: " Thomas Gleixner
                     ` (40 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-riscv, Palmer Dabbelt, Albert Ou

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-riscv@lists.infradead.org
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
---
 arch/riscv/kernel/stacktrace.c |    2 --
 1 file changed, 2 deletions(-)

--- a/arch/riscv/kernel/stacktrace.c
+++ b/arch/riscv/kernel/stacktrace.c
@@ -169,8 +169,6 @@ static bool save_trace(unsigned long pc,
 void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 {
 	walk_stackframe(tsk, NULL, save_trace, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 




* [RFC patch 07/41] arm64/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
@ 2019-04-10 10:28   ` Thomas Gleixner
  2019-04-10 10:27 ` [RFC patch 02/41] x86/stacktrace: " Thomas Gleixner
                     ` (40 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Catalin Marinas, Will Deacon,
	linux-arm-kernel

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/kernel/stacktrace.c |    4 ----
 1 file changed, 4 deletions(-)

--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -140,8 +140,6 @@ void save_stack_trace_regs(struct pt_reg
 #endif
 
 	walk_stackframe(current, &frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);
 
@@ -172,8 +170,6 @@ static noinline void __save_stack_trace(
 #endif
 
 	walk_stackframe(tsk, &frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 
 	put_task_stack(tsk);
 }




* [RFC patch 08/41] parisc/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (6 preceding siblings ...)
  2019-04-10 10:28   ` Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 20:38   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 09/41] s390/stacktrace: " Thomas Gleixner
                   ` (33 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, James E.J. Bottomley, Helge Deller,
	linux-parisc

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-parisc@vger.kernel.org
---
 arch/parisc/kernel/stacktrace.c |    5 -----
 1 file changed, 5 deletions(-)

--- a/arch/parisc/kernel/stacktrace.c
+++ b/arch/parisc/kernel/stacktrace.c
@@ -29,22 +29,17 @@ static void dump_trace(struct task_struc
 	}
 }
 
-
 /*
  * Save stack-backtrace addresses into a stack_trace buffer.
  */
 void save_stack_trace(struct stack_trace *trace)
 {
 	dump_trace(current, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
 void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 {
 	dump_trace(tsk, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);




* [RFC patch 09/41] s390/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (7 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 08/41] parisc/stacktrace: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 20:39   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 10/41] lockdep: Remove the ULONG_MAX stack trace hackery Thomas Gleixner
                   ` (32 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Martin Schwidefsky, linux-s390,
	Heiko Carstens

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
---
 arch/s390/kernel/stacktrace.c |    6 ------
 1 file changed, 6 deletions(-)

--- a/arch/s390/kernel/stacktrace.c
+++ b/arch/s390/kernel/stacktrace.c
@@ -45,8 +45,6 @@ void save_stack_trace(struct stack_trace
 
 	sp = current_stack_pointer();
 	dump_trace(save_address, trace, NULL, sp);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
@@ -58,8 +56,6 @@ void save_stack_trace_tsk(struct task_st
 	if (tsk == current)
 		sp = current_stack_pointer();
 	dump_trace(save_address_nosched, trace, tsk, sp);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 
@@ -69,7 +65,5 @@ void save_stack_trace_regs(struct pt_reg
 
 	sp = kernel_stack_pointer(regs);
 	dump_trace(save_address, trace, NULL, sp);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);




* [RFC patch 10/41] lockdep: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (8 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 09/41] s390/stacktrace: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 20:40   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 11/41] mm/slub: " Thomas Gleixner
                   ` (31 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Peter Zijlstra, Will Deacon

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/lockdep.c |   11 -----------
 1 file changed, 11 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -444,17 +444,6 @@ static int save_trace(struct stack_trace
 
 	save_stack_trace(trace);
 
-	/*
-	 * Some daft arches put -1 at the end to indicate its a full trace.
-	 *
-	 * <rant> this is buggy anyway, since it takes a whole extra entry so a
-	 * complete trace that maxes out the entries provided will be reported
-	 * as incomplete, friggin useless </rant>
-	 */
-	if (trace->nr_entries != 0 &&
-	    trace->entries[trace->nr_entries-1] == ULONG_MAX)
-		trace->nr_entries--;
-
 	trace->max_entries = trace->nr_entries;
 
 	nr_stack_trace_entries += trace->nr_entries;




* [RFC patch 11/41] mm/slub: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (9 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 10/41] lockdep: Remove the ULONG_MAX stack trace hackery Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 20:40   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 12/41] mm/page_owner: " Thomas Gleixner
                   ` (30 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

While at it, remove the pointless loop that cleared the stack array
completely. It's sufficient to clear the last entry, as the consumers break
out on the first zeroed entry anyway.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: linux-mm@kvack.org
Cc: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
---
 mm/slub.c |   13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -553,7 +553,6 @@ static void set_track(struct kmem_cache
 	if (addr) {
 #ifdef CONFIG_STACKTRACE
 		struct stack_trace trace;
-		int i;
 
 		trace.nr_entries = 0;
 		trace.max_entries = TRACK_ADDRS_COUNT;
@@ -563,20 +562,16 @@ static void set_track(struct kmem_cache
 		save_stack_trace(&trace);
 		metadata_access_disable();
 
-		/* See rant in lockdep.c */
-		if (trace.nr_entries != 0 &&
-		    trace.entries[trace.nr_entries - 1] == ULONG_MAX)
-			trace.nr_entries--;
-
-		for (i = trace.nr_entries; i < TRACK_ADDRS_COUNT; i++)
-			p->addrs[i] = 0;
+		if (trace.nr_entries < TRACK_ADDRS_COUNT)
+			p->addrs[trace.nr_entries] = 0;
 #endif
 		p->addr = addr;
 		p->cpu = smp_processor_id();
 		p->pid = current->pid;
 		p->when = jiffies;
-	} else
+	} else {
 		memset(p, 0, sizeof(struct track));
+	}
 }
 
 static void init_tracking(struct kmem_cache *s, void *object)




* [RFC patch 12/41] mm/page_owner: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (10 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 11/41] mm/slub: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 20:41   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 13/41] mm/kasan: " Thomas Gleixner
                   ` (29 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Michal Hocko, linux-mm, Mike Rapoport,
	Andrew Morton

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 mm/page_owner.c |    3 ---
 1 file changed, 3 deletions(-)

--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -148,9 +148,6 @@ static noinline depot_stack_handle_t sav
 	depot_stack_handle_t handle;
 
 	save_stack_trace(&trace);
-	if (trace.nr_entries != 0 &&
-	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
-		trace.nr_entries--;
 
 	/*
 	 * We need to check recursion here because our request to stackdepot




* [RFC patch 13/41] mm/kasan: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (11 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 12/41] mm/page_owner: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 11:31     ` Dmitry Vyukov
  2019-04-14 20:42   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 14/41] latency_top: " Thomas Gleixner
                   ` (28 subsequent siblings)
  41 siblings, 2 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Andrey Ryabinin, kasan-dev, Dmitry Vyukov,
	linux-mm

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: linux-mm@kvack.org
---
 mm/kasan/common.c |    3 ---
 1 file changed, 3 deletions(-)

--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -74,9 +74,6 @@ static inline depot_stack_handle_t save_
 
 	save_stack_trace(&trace);
 	filter_irq_stacks(&trace);
-	if (trace.nr_entries != 0 &&
-	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
-		trace.nr_entries--;
 
 	return depot_save_stack(&trace, flags);
 }




* [RFC patch 14/41] latency_top: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (12 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 13/41] mm/kasan: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 20:42   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 15/41] drm: " Thomas Gleixner
                   ` (27 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

No architecture terminates the stack trace with ULONG_MAX anymore. The
consumer terminates on the first zero entry or at the number of entries, so
no functional change.

Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 fs/proc/base.c      |    3 +--
 kernel/latencytop.c |   12 ++++++------
 2 files changed, 7 insertions(+), 8 deletions(-)

--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -489,10 +489,9 @@ static int lstats_show_proc(struct seq_f
 				   lr->count, lr->time, lr->max);
 			for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
 				unsigned long bt = lr->backtrace[q];
+
 				if (!bt)
 					break;
-				if (bt == ULONG_MAX)
-					break;
 				seq_printf(m, " %ps", (void *)bt);
 			}
 			seq_putc(m, '\n');
--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -120,8 +120,8 @@ account_global_scheduler_latency(struct
 				break;
 			}
 
-			/* 0 and ULONG_MAX entries mean end of backtrace: */
-			if (record == 0 || record == ULONG_MAX)
+			/* 0 entry marks end of backtrace: */
+			if (!record)
 				break;
 		}
 		if (same) {
@@ -210,8 +210,8 @@ void __sched
 				break;
 			}
 
-			/* 0 and ULONG_MAX entries mean end of backtrace: */
-			if (record == 0 || record == ULONG_MAX)
+			/* 0 entry is end of backtrace */
+			if (!record)
 				break;
 		}
 		if (same) {
@@ -252,10 +252,10 @@ static int lstats_show(struct seq_file *
 				   lr->count, lr->time, lr->max);
 			for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
 				unsigned long bt = lr->backtrace[q];
+
 				if (!bt)
 					break;
-				if (bt == ULONG_MAX)
-					break;
+
 				seq_printf(m, " %ps", (void *)bt);
 			}
 			seq_puts(m, "\n");




* [RFC patch 15/41] drm: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (13 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 14/41] latency_top: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 20:43   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 16/41] tracing: " Thomas Gleixner
                   ` (26 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: intel-gfx@lists.freedesktop.org
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: David Airlie <airlied@linux.ie>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/drm_mm.c                |    3 ---
 drivers/gpu/drm/i915/intel_runtime_pm.c |    4 ----
 2 files changed, 7 deletions(-)

--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -113,9 +113,6 @@ static noinline void save_stack(struct d
 	};
 
 	save_stack_trace(&trace);
-	if (trace.nr_entries != 0 &&
-	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
-		trace.nr_entries--;
 
 	/* May be called under spinlock, so avoid sleeping */
 	node->stack = depot_save_stack(&trace, GFP_NOWAIT);
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -67,10 +67,6 @@ static noinline depot_stack_handle_t __s
 	};
 
 	save_stack_trace(&trace);
-	if (trace.nr_entries &&
-	    trace.entries[trace.nr_entries - 1] == ULONG_MAX)
-		trace.nr_entries--;
-
 	return depot_save_stack(&trace, GFP_NOWAIT | __GFP_NOWARN);
 }
 




* [RFC patch 16/41] tracing: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (14 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 15/41] drm: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-11  2:34   ` Josh Poimboeuf
  2019-04-14 20:44   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 17/41] tracing: Make stack_trace_print() static and rename it Thomas Gleixner
                   ` (25 subsequent siblings)
  41 siblings, 2 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

No architecture terminates the stack trace with ULONG_MAX anymore. As the
code checks the number of stored entries anyway, there is no point in
keeping all that ULONG_MAX magic around.

The histogram code zeroes the storage before saving the stack, so if the
trace is shorter than the maximum number of entries it can terminate the
print loop if a zero entry is detected.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_events_hist.c |    2 +-
 kernel/trace/trace_stack.c       |   20 +++++---------------
 2 files changed, 6 insertions(+), 16 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5246,7 +5246,7 @@ static void hist_trigger_stacktrace_prin
 	unsigned int i;
 
 	for (i = 0; i < max_entries; i++) {
-		if (stacktrace_entries[i] == ULONG_MAX)
+		if (!stacktrace_entries[i])
 			return;
 
 		seq_printf(m, "%*c", 1 + spaces, ' ');
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -18,8 +18,7 @@
 
 #include "trace.h"
 
-static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES+1] =
-	 { [0 ... (STACK_TRACE_ENTRIES)] = ULONG_MAX };
+static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES + 1];
 unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
 /*
@@ -52,10 +51,7 @@ void stack_trace_print(void)
 			   stack_trace_max.nr_entries);
 
 	for (i = 0; i < stack_trace_max.nr_entries; i++) {
-		if (stack_dump_trace[i] == ULONG_MAX)
-			break;
-		if (i+1 == stack_trace_max.nr_entries ||
-				stack_dump_trace[i+1] == ULONG_MAX)
+		if (i + 1 == stack_trace_max.nr_entries)
 			size = stack_trace_index[i];
 		else
 			size = stack_trace_index[i] - stack_trace_index[i+1];
@@ -150,8 +146,6 @@ check_stack(unsigned long ip, unsigned l
 		p = start;
 
 		for (; p < top && i < stack_trace_max.nr_entries; p++) {
-			if (stack_dump_trace[i] == ULONG_MAX)
-				break;
 			/*
 			 * The READ_ONCE_NOCHECK is used to let KASAN know that
 			 * this is not a stack-out-of-bounds error.
@@ -183,8 +177,6 @@ check_stack(unsigned long ip, unsigned l
 	}
 
 	stack_trace_max.nr_entries = x;
-	for (; x < i; x++)
-		stack_dump_trace[x] = ULONG_MAX;
 
 	if (task_stack_end_corrupted(current)) {
 		stack_trace_print();
@@ -286,7 +278,7 @@ static void *
 {
 	long n = *pos - 1;
 
-	if (n >= stack_trace_max.nr_entries || stack_dump_trace[n] == ULONG_MAX)
+	if (n >= stack_trace_max.nr_entries)
 		return NULL;
 
 	m->private = (void *)n;
@@ -360,12 +352,10 @@ static int t_show(struct seq_file *m, vo
 
 	i = *(long *)v;
 
-	if (i >= stack_trace_max.nr_entries ||
-	    stack_dump_trace[i] == ULONG_MAX)
+	if (i >= stack_trace_max.nr_entries)
 		return 0;
 
-	if (i+1 == stack_trace_max.nr_entries ||
-	    stack_dump_trace[i+1] == ULONG_MAX)
+	if (i + 1 == stack_trace_max.nr_entries)
 		size = stack_trace_index[i];
 	else
 		size = stack_trace_index[i] - stack_trace_index[i+1];




* [RFC patch 17/41] tracing: Make stack_trace_print() static and rename it
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (15 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 16/41] tracing: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 12:47   ` Steven Rostedt
  2019-04-10 10:28 ` [RFC patch 18/41] stacktrace: Provide helpers for common stack trace operations Thomas Gleixner
                   ` (24 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

stack_trace_print() is only used in the source file where it is defined, and
it occupies the stack_trace_ namespace. Rename it to free that name up for
stack trace related functions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 include/linux/ftrace.h     |    1 -
 kernel/trace/trace_stack.c |    4 ++--
 2 files changed, 2 insertions(+), 3 deletions(-)

--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -251,7 +251,6 @@ extern unsigned long stack_trace_max_siz
 extern arch_spinlock_t stack_trace_max_lock;
 
 extern int stack_tracer_enabled;
-void stack_trace_print(void);
 int
 stack_trace_sysctl(struct ctl_table *table, int write,
 		   void __user *buffer, size_t *lenp,
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -41,7 +41,7 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
 int stack_tracer_enabled;
 static int last_stack_tracer_enabled;
 
-void stack_trace_print(void)
+static void trace_stack_trace_print(void)
 {
 	long i;
 	int size;
@@ -179,7 +179,7 @@ check_stack(unsigned long ip, unsigned l
 	stack_trace_max.nr_entries = x;
 
 	if (task_stack_end_corrupted(current)) {
-		stack_trace_print();
+		trace_stack_trace_print();
 		BUG();
 	}
 




* [RFC patch 18/41] stacktrace: Provide helpers for common stack trace operations
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (16 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 17/41] tracing: Make stack_trace_print() static and rename it Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 19/41] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
                   ` (23 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

All operations with stack traces are based on struct stack_trace. That's a
horrible construct as the struct is a kitchen sink for input and
output. Quite some usage sites embed it into their own data structures
which creates weird indirections.

There is absolutely no point in doing so. For all use cases a storage array
and the number of valid stack trace entries in the array is sufficient.

Provide helper functions which avoid the struct stack_trace indirection so
the usage sites can be cleaned up.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/stacktrace.h |   18 +++++
 kernel/stacktrace.c        |  137 ++++++++++++++++++++++++++++++++++++++++-----
 2 files changed, 140 insertions(+), 15 deletions(-)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -26,6 +26,24 @@ extern void print_stack_trace(struct sta
 extern int snprint_stack_trace(char *buf, size_t size,
 			struct stack_trace *trace, int spaces);
 
+extern void stack_trace_print(unsigned long *trace, unsigned int nr_entries,
+			      int spaces);
+extern int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			       unsigned int nr_entries, int spaces);
+extern unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+				     unsigned int skipnr);
+extern unsigned int stack_trace_save_tsk(struct task_struct *task,
+					 unsigned long *store,
+					 unsigned int size,
+					 unsigned int skipnr);
+extern unsigned int stack_trace_save_regs(struct pt_regs *regs,
+					  unsigned long *store,
+					  unsigned int size,
+					  unsigned int skipnr);
+extern unsigned int stack_trace_save_user(unsigned long *store,
+					  unsigned int size,
+					  unsigned int skipnr);
+
 #ifdef CONFIG_USER_STACKTRACE_SUPPORT
 extern void save_stack_trace_user(struct stack_trace *trace);
 #else
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -11,35 +11,52 @@
 #include <linux/kallsyms.h>
 #include <linux/stacktrace.h>
 
-void print_stack_trace(struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_print - Print the entries in the stack trace
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ */
+void stack_trace_print(unsigned long *entries, unsigned int nr_entries,
+		       int spaces)
 {
-	int i;
+	unsigned int i;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return;
 
-	for (i = 0; i < trace->nr_entries; i++)
-		printk("%*c%pS\n", 1 + spaces, ' ', (void *)trace->entries[i]);
+	for (i = 0; i < nr_entries; i++)
+		printk("%*c%pS\n", 1 + spaces, ' ', (void *)entries[i]);
+}
+EXPORT_SYMBOL_GPL(stack_trace_print);
+
+void print_stack_trace(struct stack_trace *trace, int spaces)
+{
+	stack_trace_print(trace->entries, trace->nr_entries, spaces);
 }
 EXPORT_SYMBOL_GPL(print_stack_trace);
 
-int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces)
+/**
+ * stack_trace_snprint - Print the entries in the stack trace into a buffer
+ * @buf:	Pointer to the print buffer
+ * @size:	Size of the print buffer
+ * @entries:	Pointer to storage array
+ * @nr_entries:	Number of entries in the storage array
+ * @spaces:	Number of leading spaces to print
+ */
+int stack_trace_snprint(char *buf, size_t size, unsigned long *entries,
+			unsigned int nr_entries, int spaces)
 {
-	int i;
-	int generated;
-	int total = 0;
+	unsigned int generated, i, total = 0;
 
-	if (WARN_ON(!trace->entries))
+	if (WARN_ON(!entries))
 		return 0;
 
-	for (i = 0; i < trace->nr_entries; i++) {
+	for (i = 0; i < nr_entries; i++) {
 		generated = snprintf(buf, size, "%*c%pS\n", 1 + spaces, ' ',
-				     (void *)trace->entries[i]);
+				     (void *)entries[i]);
 
 		total += generated;
-
-		/* Assume that generated isn't a negative number */
 		if (generated >= size) {
 			buf += size;
 			size = 0;
@@ -51,6 +68,14 @@ int snprint_stack_trace(char *buf, size_
 
 	return total;
 }
+EXPORT_SYMBOL_GPL(stack_trace_snprint);
+
+int snprint_stack_trace(char *buf, size_t size,
+			struct stack_trace *trace, int spaces)
+{
+	return stack_trace_snprint(buf, size, trace->entries,
+				   trace->nr_entries, spaces);
+}
 EXPORT_SYMBOL_GPL(snprint_stack_trace);
 
 /*
@@ -77,3 +102,85 @@ save_stack_trace_tsk_reliable(struct tas
 	WARN_ONCE(1, KERN_INFO "save_stack_tsk_reliable() not implemented yet.\n");
 	return -ENOSYS;
 }
+
+/**
+ * stack_trace_save - Save a stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ */
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace(&trace);
+	return trace.nr_entries;
+}
+EXPORT_SYMBOL_GPL(stack_trace_save);
+
+/**
+ * stack_trace_save_tsk - Save a task stack trace into a storage array
+ * @task:	The task to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ */
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace_tsk(task, &trace);
+	return trace.nr_entries;
+}
+
+/**
+ * stack_trace_save_regs - Save a stack trace based on pt_regs into a storage array
+ * @regs:	Pointer to pt_regs to examine
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ */
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace_regs(regs, &trace);
+	return trace.nr_entries;
+}
+
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+/**
+ * stack_trace_save_user - Save a user space stack trace into a storage array
+ * @store:	Pointer to storage array
+ * @size:	Size of the storage array
+ * @skipnr:	Number of entries to skip at the start of the stack trace
+ */
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size,
+				   unsigned int skipnr)
+{
+	struct stack_trace trace = {
+		.entries	= store,
+		.max_entries	= size,
+		.skip		= skipnr + 1,
+	};
+
+	save_stack_trace_user(&trace);
+	return trace.nr_entries;
+}
+#endif




* [RFC patch 19/41] lib/stackdepot: Provide functions which operate on plain storage arrays
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (17 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 18/41] stacktrace: Provide helpers for common stack trace operations Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 13:39   ` Alexander Potapenko
  2019-04-10 10:28 ` [RFC patch 20/41] backtrace-test: Simplify stack trace handling Thomas Gleixner
                   ` (22 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

The struct stack_trace indirection in the stack depot functions is a truly
pointless exercise which requires horrible code at the callsites.

Provide interfaces based on plain storage arrays.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/stackdepot.h |    4 ++
 lib/stackdepot.c           |   66 ++++++++++++++++++++++++++++++++-------------
 2 files changed, 51 insertions(+), 19 deletions(-)

--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -26,7 +26,11 @@ typedef u32 depot_stack_handle_t;
 struct stack_trace;
 
 depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries, gfp_t gfp_flags);
 
 void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries);
 
 #endif
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -194,40 +194,56 @@ static inline struct stack_record *find_
 	return NULL;
 }
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+/**
+ * stack_depot_fetch - Fetch stack entries from a depot
+ *
+ * @entries:		Pointer to store the entries address
+ */
+unsigned int stack_depot_fetch(depot_stack_handle_t handle,
+			       unsigned long **entries)
 {
 	union handle_parts parts = { .handle = handle };
 	void *slab = stack_slabs[parts.slabindex];
 	size_t offset = parts.offset << STACK_ALLOC_ALIGN;
 	struct stack_record *stack = slab + offset;
 
-	trace->nr_entries = trace->max_entries = stack->size;
-	trace->entries = stack->entries;
-	trace->skip = 0;
+	*entries = stack->entries;
+	return stack->size;
+}
+EXPORT_SYMBOL_GPL(stack_depot_fetch);
+
+void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
+{
+	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
+
+	trace->max_entries = trace->nr_entries = nent;
 }
 EXPORT_SYMBOL_GPL(depot_fetch_stack);
 
 /**
- * depot_save_stack - save stack in a stack depot.
- * @trace - the stacktrace to save.
- * @alloc_flags - flags for allocating additional memory if required.
+ * stack_depot_save - Save a stack trace from an array
  *
- * Returns the handle of the stack struct stored in depot.
+ * @entries:		Pointer to storage array
+ * @nr_entries:		Size of the storage array
+ * @alloc_flags:	Allocation gfp flags
+ *
+ * Returns the handle of the stack struct stored in depot
  */
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
-				    gfp_t alloc_flags)
+depot_stack_handle_t stack_depot_save(unsigned long *entries,
+				      unsigned int nr_entries,
+				      gfp_t alloc_flags)
 {
-	u32 hash;
-	depot_stack_handle_t retval = 0;
 	struct stack_record *found = NULL, **bucket;
-	unsigned long flags;
+	depot_stack_handle_t retval = 0;
 	struct page *page = NULL;
 	void *prealloc = NULL;
+	unsigned long flags;
+	u32 hash;
 
-	if (unlikely(trace->nr_entries == 0))
+	if (unlikely(nr_entries == 0))
 		goto fast_exit;
 
-	hash = hash_stack(trace->entries, trace->nr_entries);
+	hash = hash_stack(entries, nr_entries);
 	bucket = &stack_table[hash & STACK_HASH_MASK];
 
 	/*
@@ -235,8 +251,8 @@ depot_stack_handle_t depot_save_stack(st
 	 * The smp_load_acquire() here pairs with smp_store_release() to
 	 * |bucket| below.
 	 */
-	found = find_stack(smp_load_acquire(bucket), trace->entries,
-			   trace->nr_entries, hash);
+	found = find_stack(smp_load_acquire(bucket), entries,
+			   nr_entries, hash);
 	if (found)
 		goto exit;
 
@@ -264,10 +280,10 @@ depot_stack_handle_t depot_save_stack(st
 
 	spin_lock_irqsave(&depot_lock, flags);
 
-	found = find_stack(*bucket, trace->entries, trace->nr_entries, hash);
+	found = find_stack(*bucket, entries, nr_entries, hash);
 	if (!found) {
 		struct stack_record *new =
-			depot_alloc_stack(trace->entries, trace->nr_entries,
+			depot_alloc_stack(entries, nr_entries,
 					  hash, &prealloc, alloc_flags);
 		if (new) {
 			new->next = *bucket;
@@ -297,4 +313,16 @@ depot_stack_handle_t depot_save_stack(st
 fast_exit:
 	return retval;
 }
+EXPORT_SYMBOL_GPL(stack_depot_save);
+
+/**
+ * depot_save_stack - save stack in a stack depot.
+ * @trace - the stacktrace to save.
+ * @alloc_flags - flags for allocating additional memory if required.
+ */
+depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
+				      gfp_t alloc_flags)
+{
+	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
+}
 EXPORT_SYMBOL_GPL(depot_save_stack);




* [RFC patch 20/41] backtrace-test: Simplify stack trace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (18 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 19/41] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-11  2:47   ` Josh Poimboeuf
  2019-04-10 10:28 ` [RFC patch 21/41] proc: Simplify task stack retrieval Thomas Gleixner
                   ` (21 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/backtracetest.c |   11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

--- a/kernel/backtracetest.c
+++ b/kernel/backtracetest.c
@@ -48,19 +48,14 @@ static void backtrace_test_irq(void)
 #ifdef CONFIG_STACKTRACE
 static void backtrace_test_saved(void)
 {
-	struct stack_trace trace;
 	unsigned long entries[8];
+	unsigned int nent;
 
 	pr_info("Testing a saved backtrace.\n");
 	pr_info("The following trace is a kernel self test and not a bug!\n");
 
-	trace.nr_entries = 0;
-	trace.max_entries = ARRAY_SIZE(entries);
-	trace.entries = entries;
-	trace.skip = 0;
-
-	save_stack_trace(&trace);
-	print_stack_trace(&trace, 0);
+	nent = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	stack_trace_print(entries, nent, 0);
 }
 #else
 static void backtrace_test_saved(void)




* [RFC patch 21/41] proc: Simplify task stack retrieval
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (19 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 20/41] backtrace-test: Simplify stack trace handling Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-14 14:49   ` Alexey Dobriyan
  2019-04-10 10:28 ` [RFC patch 22/41] latency_top: Simplify stack trace handling Thomas Gleixner
                   ` (20 subsequent siblings)
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Andrew Morton, Alexey Dobriyan

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
---
 fs/proc/base.c |   15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -407,7 +407,6 @@ static void unlock_trace(struct task_str
 static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns,
 			  struct pid *pid, struct task_struct *task)
 {
-	struct stack_trace trace;
 	unsigned long *entries;
 	int err;
 
@@ -430,20 +429,16 @@ static int proc_pid_stack(struct seq_fil
 	if (!entries)
 		return -ENOMEM;
 
-	trace.nr_entries	= 0;
-	trace.max_entries	= MAX_STACK_TRACE_DEPTH;
-	trace.entries		= entries;
-	trace.skip		= 0;
-
 	err = lock_trace(task);
 	if (!err) {
-		unsigned int i;
+		unsigned int i, nent;
 
-		save_stack_trace_tsk(task, &trace);
+		nent = stack_trace_save_tsk(task, entries,
+					    MAX_STACK_TRACE_DEPTH, 0);
 
-		for (i = 0; i < trace.nr_entries; i++) {
+		for (i = 0; i < nent; i++)
 			seq_printf(m, "[<0>] %pB\n", (void *)entries[i]);
-		}
+
 		unlock_trace(task);
 	}
 	kfree(entries);




* [RFC patch 22/41] latency_top: Simplify stack trace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (20 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 21/41] proc: Simplify task stack retrieval Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 23/41] mm/slub: Simplify stack trace retrieval Thomas Gleixner
                   ` (19 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/latencytop.c |   17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -141,20 +141,6 @@ account_global_scheduler_latency(struct
 	memcpy(&latency_record[i], lat, sizeof(struct latency_record));
 }
 
-/*
- * Iterator to store a backtrace into a latency record entry
- */
-static inline void store_stacktrace(struct task_struct *tsk,
-					struct latency_record *lat)
-{
-	struct stack_trace trace;
-
-	memset(&trace, 0, sizeof(trace));
-	trace.max_entries = LT_BACKTRACEDEPTH;
-	trace.entries = &lat->backtrace[0];
-	save_stack_trace_tsk(tsk, &trace);
-}
-
 /**
  * __account_scheduler_latency - record an occurred latency
  * @tsk - the task struct of the task hitting the latency
@@ -191,7 +177,8 @@ void __sched
 	lat.count = 1;
 	lat.time = usecs;
 	lat.max = usecs;
-	store_stacktrace(tsk, &lat);
+
+	stack_trace_save_tsk(tsk, lat.backtrace, LT_BACKTRACEDEPTH, 0);
 
 	raw_spin_lock_irqsave(&latency_lock, flags);
 



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 23/41] mm/slub: Simplify stack trace retrieval
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (21 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 22/41] latency_top: Simplify stack trace handling Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 24/41] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
                   ` (18 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Andrew Morton, Pekka Enberg, linux-mm,
	David Rientjes, Christoph Lameter

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: linux-mm@kvack.org
Cc: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
---
 mm/slub.c |   12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

--- a/mm/slub.c
+++ b/mm/slub.c
@@ -552,18 +552,14 @@ static void set_track(struct kmem_cache
 
 	if (addr) {
 #ifdef CONFIG_STACKTRACE
-		struct stack_trace trace;
+		unsigned int nent;
 
-		trace.nr_entries = 0;
-		trace.max_entries = TRACK_ADDRS_COUNT;
-		trace.entries = p->addrs;
-		trace.skip = 3;
 		metadata_access_enable();
-		save_stack_trace(&trace);
+		nent = stack_trace_save(p->addrs, TRACK_ADDRS_COUNT, 3);
 		metadata_access_disable();
 
-		if (trace.nr_entries < TRACK_ADDRS_COUNT)
-			p->addrs[trace.nr_entries] = 0;
+		if (nent < TRACK_ADDRS_COUNT)
+			p->addrs[nent] = 0;
 #endif
 		p->addr = addr;
 		p->cpu = smp_processor_id();



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 24/41] mm/kmemleak: Simplify stacktrace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (22 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 23/41] mm/slub: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 25/41] mm/kasan: " Thomas Gleixner
                   ` (17 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Catalin Marinas, linux-mm

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-mm@kvack.org
---
 mm/kmemleak.c |   24 +++---------------------
 1 file changed, 3 insertions(+), 21 deletions(-)

--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -410,11 +410,6 @@ static void print_unreferenced(struct se
  */
 static void dump_object_info(struct kmemleak_object *object)
 {
-	struct stack_trace trace;
-
-	trace.nr_entries = object->trace_len;
-	trace.entries = object->trace;
-
 	pr_notice("Object 0x%08lx (size %zu):\n",
 		  object->pointer, object->size);
 	pr_notice("  comm \"%s\", pid %d, jiffies %lu\n",
@@ -424,7 +419,7 @@ static void dump_object_info(struct kmem
 	pr_notice("  flags = 0x%x\n", object->flags);
 	pr_notice("  checksum = %u\n", object->checksum);
 	pr_notice("  backtrace:\n");
-	print_stack_trace(&trace, 4);
+	stack_trace_print(object->trace, object->trace_len, 4);
 }
 
 /*
@@ -553,15 +548,7 @@ static struct kmemleak_object *find_and_
  */
 static int __save_stack_trace(unsigned long *trace)
 {
-	struct stack_trace stack_trace;
-
-	stack_trace.max_entries = MAX_TRACE;
-	stack_trace.nr_entries = 0;
-	stack_trace.entries = trace;
-	stack_trace.skip = 2;
-	save_stack_trace(&stack_trace);
-
-	return stack_trace.nr_entries;
+	return stack_trace_save(trace, MAX_TRACE, 2);
 }
 
 /*
@@ -2019,13 +2006,8 @@ early_param("kmemleak", kmemleak_boot_co
 
 static void __init print_log_trace(struct early_log *log)
 {
-	struct stack_trace trace;
-
-	trace.nr_entries = log->trace_len;
-	trace.entries = log->trace;
-
 	pr_notice("Early log backtrace:\n");
-	print_stack_trace(&trace, 2);
+	stack_trace_print(log->trace, log->trace_len, 2);
 }
 
 /*



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (23 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 24/41] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 11:33     ` Dmitry Vyukov
  2019-04-11  2:55   ` Josh Poimboeuf
  2019-04-10 10:28 ` [RFC patch 26/41] mm/page_owner: Simplify stack trace handling Thomas Gleixner
                   ` (16 subsequent siblings)
  41 siblings, 2 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Andrey Ryabinin, Dmitry Vyukov, kasan-dev,
	linux-mm

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
---
 mm/kasan/common.c |   30 ++++++++++++------------------
 mm/kasan/report.c |    7 ++++---
 2 files changed, 16 insertions(+), 21 deletions(-)

--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
 		 ptr < (unsigned long)&__softirqentry_text_end);
 }
 
-static inline void filter_irq_stacks(struct stack_trace *trace)
+static inline unsigned int filter_irq_stacks(unsigned long *entries,
+					     unsigned int nr_entries)
 {
-	int i;
+	unsigned int i;
 
-	if (!trace->nr_entries)
-		return;
-	for (i = 0; i < trace->nr_entries; i++)
-		if (in_irqentry_text(trace->entries[i])) {
+	for (i = 0; i < nr_entries; i++) {
+		if (in_irqentry_text(entries[i])) {
 			/* Include the irqentry function into the stack. */
-			trace->nr_entries = i + 1;
-			break;
+			return i + 1;
 		}
+	}
+	return nr_entries;
 }
 
 static inline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[KASAN_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = KASAN_STACK_DEPTH,
-		.skip = 0
-	};
+	unsigned int nent;
 
-	save_stack_trace(&trace);
-	filter_irq_stacks(&trace);
-
-	return depot_save_stack(&trace, flags);
+	nent = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	nent = filter_irq_stacks(entries, nent);
+	return stack_depot_save(entries, nent, flags);
 }
 
 static inline void set_track(struct kasan_track *track, gfp_t flags)
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -100,10 +100,11 @@ static void print_track(struct kasan_tra
 {
 	pr_err("%s by task %u:\n", prefix, track->pid);
 	if (track->stack) {
-		struct stack_trace trace;
+		unsigned long *entries;
+		unsigned int nent;
 
-		depot_fetch_stack(track->stack, &trace);
-		print_stack_trace(&trace, 0);
+		nent = stack_depot_fetch(track->stack, &entries);
+		stack_trace_print(entries, nent, 0);
 	} else {
 		pr_err("(stack is not available)\n");
 	}



^ permalink raw reply	[flat|nested] 105+ messages in thread
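The reworked kasan helper in the patch above returns a possibly-truncated entry count instead of mutating a struct stack_trace in place. A userspace sketch of that logic (the irq-entry address range and the sample addresses are invented; the kernel compares against the real `__irqentry_text`/`__softirqentry_text` section boundaries):

```c
#include <assert.h>

/* Invented irq-entry text range for this userspace sketch. */
#define IRQENTRY_START 0x2000UL
#define IRQENTRY_END   0x3000UL

static int in_irqentry_text(unsigned long ptr)
{
	return ptr >= IRQENTRY_START && ptr < IRQENTRY_END;
}

/* Same shape as the kasan helper after the patch: return the
 * number of entries up to and including the first irq-entry
 * frame, or nr_entries when none is found. */
static unsigned int filter_irq_stacks(unsigned long *entries,
				      unsigned int nr_entries)
{
	unsigned int i;

	for (i = 0; i < nr_entries; i++) {
		/* Include the irqentry function into the stack. */
		if (in_irqentry_text(entries[i]))
			return i + 1;
	}
	return nr_entries;
}
```

Note how the `!trace->nr_entries` early return in the old code becomes unnecessary: with `nr_entries == 0` the loop body never runs and 0 is returned.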

* [RFC patch 26/41] mm/page_owner: Simplify stack trace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (24 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 25/41] mm/kasan: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 27/41] fault-inject: Simplify stacktrace retrieval Thomas Gleixner
                   ` (15 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, linux-mm, Mike Rapoport, David Rientjes,
	Andrew Morton

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

The original code in all printing functions is really wrong. It allocates a
storage array on the stack which is unused, because depot_fetch_stack() does
not store anything in it. Instead it overwrites the entries pointer in the
stack_trace struct so that it points to the depot storage.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 mm/page_owner.c |   79 +++++++++++++++++++-------------------------------------
 1 file changed, 28 insertions(+), 51 deletions(-)

--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -58,15 +58,10 @@ static bool need_page_owner(void)
 static __always_inline depot_stack_handle_t create_dummy_stack(void)
 {
 	unsigned long entries[4];
-	struct stack_trace dummy;
+	unsigned int nent;
 
-	dummy.nr_entries = 0;
-	dummy.max_entries = ARRAY_SIZE(entries);
-	dummy.entries = &entries[0];
-	dummy.skip = 0;
-
-	save_stack_trace(&dummy);
-	return depot_save_stack(&dummy, GFP_KERNEL);
+	nent = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+	return stack_depot_save(entries, nent, GFP_KERNEL);
 }
 
 static noinline void register_dummy_stack(void)
@@ -120,46 +115,39 @@ void __reset_page_owner(struct page *pag
 	}
 }
 
-static inline bool check_recursive_alloc(struct stack_trace *trace,
-					unsigned long ip)
+static inline bool check_recursive_alloc(unsigned long *entries,
+					 unsigned int nr_entries,
+					 unsigned long ip)
 {
-	int i;
+	unsigned int i;
 
-	if (!trace->nr_entries)
-		return false;
-
-	for (i = 0; i < trace->nr_entries; i++) {
-		if (trace->entries[i] == ip)
+	for (i = 0; i < nr_entries; i++) {
+		if (entries[i] == ip)
 			return true;
 	}
-
 	return false;
 }
 
 static noinline depot_stack_handle_t save_stack(gfp_t flags)
 {
 	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 2
-	};
 	depot_stack_handle_t handle;
+	unsigned int nent;
 
-	save_stack_trace(&trace);
+	nent = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
 
 	/*
-	 * We need to check recursion here because our request to stackdepot
-	 * could trigger memory allocation to save new entry. New memory
-	 * allocation would reach here and call depot_save_stack() again
-	 * if we don't catch it. There is still not enough memory in stackdepot
-	 * so it would try to allocate memory again and loop forever.
+	 * We need to check recursion here because our request to
+	 * stackdepot could trigger memory allocation to save new
+	 * entry. New memory allocation would reach here and call
+	 * stack_depot_save() again if we don't catch it. There is
+	 * still not enough memory in stackdepot so it would try to
+	 * allocate memory again and loop forever.
 	 */
-	if (check_recursive_alloc(&trace, _RET_IP_))
+	if (check_recursive_alloc(entries, nent, _RET_IP_))
 		return dummy_handle;
 
-	handle = depot_save_stack(&trace, flags);
+	handle = stack_depot_save(entries, nent, flags);
 	if (!handle)
 		handle = failure_handle;
 
@@ -337,16 +325,10 @@ print_page_owner(char __user *buf, size_
 		struct page *page, struct page_owner *page_owner,
 		depot_stack_handle_t handle)
 {
-	int ret;
-	int pageblock_mt, page_mt;
+	int ret, pageblock_mt, page_mt;
+	unsigned long *entries;
+	unsigned int nent;
 	char *kbuf;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 
 	count = min_t(size_t, count, PAGE_SIZE);
 	kbuf = kmalloc(count, GFP_KERNEL);
@@ -375,8 +357,8 @@ print_page_owner(char __user *buf, size_
 	if (ret >= count)
 		goto err;
 
-	depot_fetch_stack(handle, &trace);
-	ret += snprint_stack_trace(kbuf + ret, count - ret, &trace, 0);
+	nent = stack_depot_fetch(handle, &entries);
+	ret += stack_trace_snprint(kbuf + ret, count - ret, entries, nent, 0);
 	if (ret >= count)
 		goto err;
 
@@ -407,14 +389,9 @@ void __dump_page_owner(struct page *page
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
 	struct page_owner *page_owner;
-	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.entries = entries,
-		.max_entries = PAGE_OWNER_STACK_DEPTH,
-		.skip = 0
-	};
 	depot_stack_handle_t handle;
+	unsigned long *entries;
+	unsigned int nent;
 	gfp_t gfp_mask;
 	int mt;
 
@@ -438,10 +415,10 @@ void __dump_page_owner(struct page *page
 		return;
 	}
 
-	depot_fetch_stack(handle, &trace);
+	nent = stack_depot_fetch(handle, &entries);
 	pr_alert("page allocated via order %u, migratetype %s, gfp_mask %#x(%pGg)\n",
 		 page_owner->order, migratetype_names[mt], gfp_mask, &gfp_mask);
-	print_stack_trace(&trace, 0);
+	stack_trace_print(entries, nent, 0);
 
 	if (page_owner->last_migrate_reason != -1)
 		pr_alert("page has been migrated, last migrate reason: %s\n",



^ permalink raw reply	[flat|nested] 105+ messages in thread
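The depot pattern the page_owner patch converts to can be sketched in userspace as well: `stack_depot_save()` copies the entries into depot storage and returns a handle, while `stack_depot_fetch()` hands back a pointer into that storage plus the count, so the caller needs no on-stack array at all. That is precisely the bug the commit message describes. In this toy mock the storage is a fixed table and the gfp flags argument of the real kernel API is dropped:

```c
#include <assert.h>
#include <string.h>

typedef unsigned int depot_stack_handle_t;

#define DEPOT_SLOTS 16
#define DEPOT_DEPTH 16

static unsigned long depot_store[DEPOT_SLOTS][DEPOT_DEPTH];
static unsigned int depot_len[DEPOT_SLOTS];
static depot_stack_handle_t depot_next = 1;	/* 0 means "no stack" */

/* Copy the trace into depot storage; return a handle for it. */
static depot_stack_handle_t stack_depot_save(const unsigned long *entries,
					     unsigned int nr_entries)
{
	depot_stack_handle_t h = depot_next++;

	if (nr_entries > DEPOT_DEPTH)
		nr_entries = DEPOT_DEPTH;
	memcpy(depot_store[h], entries, nr_entries * sizeof(*entries));
	depot_len[h] = nr_entries;
	return h;
}

/* Point *entries INTO depot storage; return the entry count. */
static unsigned int stack_depot_fetch(depot_stack_handle_t handle,
				      unsigned long **entries)
{
	*entries = depot_store[handle];
	return depot_len[handle];
}
```

Fetching therefore costs only a pointer and a count on the caller's side, which is why the converted printing functions above could drop their `PAGE_OWNER_STACK_DEPTH`-sized arrays.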

* [RFC patch 27/41] fault-inject: Simplify stacktrace retrieval
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (25 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 26/41] mm/page_owner: Simplify stack trace handling Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28   ` Thomas Gleixner
                   ` (14 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Akinobu Mita

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
---
 lib/fault-inject.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

--- a/lib/fault-inject.c
+++ b/lib/fault-inject.c
@@ -65,22 +65,16 @@ static bool fail_task(struct fault_attr
 
 static bool fail_stacktrace(struct fault_attr *attr)
 {
-	struct stack_trace trace;
 	int depth = attr->stacktrace_depth;
 	unsigned long entries[MAX_STACK_TRACE_DEPTH];
-	int n;
+	int n, nent;
 	bool found = (attr->require_start == 0 && attr->require_end == ULONG_MAX);
 
 	if (depth == 0)
 		return found;
 
-	trace.nr_entries = 0;
-	trace.entries = entries;
-	trace.max_entries = depth;
-	trace.skip = 1;
-
-	save_stack_trace(&trace);
-	for (n = 0; n < trace.nr_entries; n++) {
+	nent = stack_trace_save(entries, depth, 1);
+	for (n = 0; n < nent; n++) {
 		if (attr->reject_start <= entries[n] &&
 			       entries[n] < attr->reject_end)
 			return false;



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 28/41] dma/debug: Simplify stacktrace retrieval
@ 2019-04-10 10:28   ` Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: iommu@lists.linux-foundation.org
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
---
 kernel/dma/debug.c |   13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -89,8 +89,8 @@ struct dma_debug_entry {
 	int		 sg_mapped_ents;
 	enum map_err_types  map_err_type;
 #ifdef CONFIG_STACKTRACE
-	struct		 stack_trace stacktrace;
-	unsigned long	 st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
+	unsigned int	st_len;
+	unsigned long	st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
 #endif
 };
 
@@ -174,7 +174,7 @@ static inline void dump_entry_trace(stru
 #ifdef CONFIG_STACKTRACE
 	if (entry) {
 		pr_warning("Mapped at:\n");
-		print_stack_trace(&entry->stacktrace, 0);
+		stack_trace_print(entry->st_entries, entry->st_len, 0);
 	}
 #endif
 }
@@ -704,12 +704,9 @@ static struct dma_debug_entry *dma_entry
 	spin_unlock_irqrestore(&free_entries_lock, flags);
 
 #ifdef CONFIG_STACKTRACE
-	entry->stacktrace.max_entries = DMA_DEBUG_STACKTRACE_ENTRIES;
-	entry->stacktrace.entries = entry->st_entries;
-	entry->stacktrace.skip = 2;
-	save_stack_trace(&entry->stacktrace);
+	entry->st_len = stack_trace_save(entry->st_entries,
+					 ARRAY_SIZE(entry->st_entries), 2);
 #endif
-
 	return entry;
 }
 



^ permalink raw reply	[flat|nested] 105+ messages in thread


* [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (27 preceding siblings ...)
  2019-04-10 10:28   ` Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 11:31   ` Johannes Thumshirn
                     ` (2 more replies)
  2019-04-10 10:28 ` [RFC patch 30/41] dm bufio: " Thomas Gleixner
                   ` (12 subsequent siblings)
  41 siblings, 3 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: David Sterba <dsterba@suse.com>
Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: linux-btrfs@vger.kernel.org
---
 fs/btrfs/ref-verify.c |   15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

--- a/fs/btrfs/ref-verify.c
+++ b/fs/btrfs/ref-verify.c
@@ -205,28 +205,17 @@ static struct root_entry *lookup_root_en
 #ifdef CONFIG_STACKTRACE
 static void __save_stack_trace(struct ref_action *ra)
 {
-	struct stack_trace stack_trace;
-
-	stack_trace.max_entries = MAX_TRACE;
-	stack_trace.nr_entries = 0;
-	stack_trace.entries = ra->trace;
-	stack_trace.skip = 2;
-	save_stack_trace(&stack_trace);
-	ra->trace_len = stack_trace.nr_entries;
+	ra->trace_len = stack_trace_save(ra->trace, MAX_TRACE, 2);
 }
 
 static void __print_stack_trace(struct btrfs_fs_info *fs_info,
 				struct ref_action *ra)
 {
-	struct stack_trace trace;
-
 	if (ra->trace_len == 0) {
 		btrfs_err(fs_info, "  ref-verify: no stacktrace");
 		return;
 	}
-	trace.nr_entries = ra->trace_len;
-	trace.entries = ra->trace;
-	print_stack_trace(&trace, 2);
+	stack_trace_print(ra->trace, ra->trace_len, 2);
 }
 #else
 static void inline __save_stack_trace(struct ref_action *ra)



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 30/41] dm bufio: Simplify stack trace retrieval
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (28 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 31/41] dm persistent data: Simplify stack trace handling Thomas Gleixner
                   ` (11 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, dm-devel, Mike Snitzer, Alasdair Kergon

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: dm-devel@redhat.com
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Alasdair Kergon <agk@redhat.com>
---
 drivers/md/dm-bufio.c |   15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -150,7 +150,7 @@ struct dm_buffer {
 	void (*end_io)(struct dm_buffer *, blk_status_t);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 #define MAX_STACK 10
-	struct stack_trace stack_trace;
+	unsigned int stack_len;
 	unsigned long stack_entries[MAX_STACK];
 #endif
 };
@@ -232,11 +232,7 @@ static DEFINE_MUTEX(dm_bufio_clients_loc
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 static void buffer_record_stack(struct dm_buffer *b)
 {
-	b->stack_trace.nr_entries = 0;
-	b->stack_trace.max_entries = MAX_STACK;
-	b->stack_trace.entries = b->stack_entries;
-	b->stack_trace.skip = 2;
-	save_stack_trace(&b->stack_trace);
+	b->stack_len = stack_trace_save(b->stack_entries, MAX_STACK, 2);
 }
 #endif
 
@@ -438,7 +434,7 @@ static struct dm_buffer *alloc_buffer(st
 	adjust_total_allocated(b->data_mode, (long)c->block_size);
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	memset(&b->stack_trace, 0, sizeof(b->stack_trace));
+	b->stack_len = 0;
 #endif
 	return b;
 }
@@ -1520,8 +1516,9 @@ static void drop_buffers(struct dm_bufio
 			DMERR("leaked buffer %llx, hold count %u, list %d",
 			      (unsigned long long)b->block, b->hold_count, i);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-			print_stack_trace(&b->stack_trace, 1);
-			b->hold_count = 0; /* mark unclaimed to avoid BUG_ON below */
+			stack_trace_print(b->stack_entries, b->stack_len, 1);
+			/* mark unclaimed to avoid BUG_ON below */
+			b->hold_count = 0;
 #endif
 		}
 



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 31/41] dm persistent data: Simplify stack trace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (29 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 30/41] dm bufio: " Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 32/41] drm: Simplify stacktrace handling Thomas Gleixner
                   ` (10 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, dm-devel, Mike Snitzer, Alasdair Kergon

Replace the indirection through struct stack_trace with an invocation of
the storage array based interface. This reduces the storage overhead and
removes a level of indirection.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: dm-devel@redhat.com
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: Alasdair Kergon <agk@redhat.com>
---
 drivers/md/persistent-data/dm-block-manager.c |   19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

--- a/drivers/md/persistent-data/dm-block-manager.c
+++ b/drivers/md/persistent-data/dm-block-manager.c
@@ -35,7 +35,10 @@
 #define MAX_HOLDERS 4
 #define MAX_STACK 10
 
-typedef unsigned long stack_entries[MAX_STACK];
+struct stack_store {
+	unsigned int	nr_entries;
+	unsigned long	entries[MAX_STACK];
+};
 
 struct block_lock {
 	spinlock_t lock;
@@ -44,8 +47,7 @@ struct block_lock {
 	struct task_struct *holders[MAX_HOLDERS];
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	struct stack_trace traces[MAX_HOLDERS];
-	stack_entries entries[MAX_HOLDERS];
+	struct stack_store traces[MAX_HOLDERS];
 #endif
 };
 
@@ -73,7 +75,7 @@ static void __add_holder(struct block_lo
 {
 	unsigned h = __find_holder(lock, NULL);
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
-	struct stack_trace *t;
+	struct stack_store *t;
 #endif
 
 	get_task_struct(task);
@@ -81,11 +83,7 @@ static void __add_holder(struct block_lo
 
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 	t = lock->traces + h;
-	t->nr_entries = 0;
-	t->max_entries = MAX_STACK;
-	t->entries = lock->entries[h];
-	t->skip = 2;
-	save_stack_trace(t);
+	t->nr_entries = stack_trace_save(t->entries, MAX_STACK, 2);
 #endif
 }
 
@@ -106,7 +104,8 @@ static int __check_holder(struct block_l
 			DMERR("recursive lock detected in metadata");
 #ifdef CONFIG_DM_DEBUG_BLOCK_STACK_TRACING
 			DMERR("previously held here:");
-			print_stack_trace(lock->traces + i, 4);
+			stack_trace_print(lock->traces[i].entries,
+					  lock->traces[i].nr_entries, 4);
 
 			DMERR("subsequent acquisition attempted here:");
 			dump_stack();



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 32/41] drm: Simplify stacktrace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (30 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 31/41] dm persistent data: Simplify stack trace handling Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 33/41] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
                   ` (9 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, intel-gfx, Joonas Lahtinen,
	Maarten Lankhorst, dri-devel, David Airlie, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

The original code in all printing functions is really wrong. It allocates a
storage array on the stack which is unused, because depot_fetch_stack() does
not store anything in it. Instead it overwrites the entries pointer in the
stack_trace struct so that it points to the depot storage.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: intel-gfx@lists.freedesktop.org
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: David Airlie <airlied@linux.ie>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/drm_mm.c                |   22 +++++++---------------
 drivers/gpu/drm/i915/i915_vma.c         |   11 ++++-------
 drivers/gpu/drm/i915/intel_runtime_pm.c |   21 +++++++--------------
 3 files changed, 18 insertions(+), 36 deletions(-)

--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -106,22 +106,19 @@
 static noinline void save_stack(struct drm_mm_node *node)
 {
 	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = STACKDEPTH,
-		.skip = 1
-	};
+	unsigned int n;
 
-	save_stack_trace(&trace);
+	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
 
 	/* May be called under spinlock, so avoid sleeping */
-	node->stack = depot_save_stack(&trace, GFP_NOWAIT);
+	node->stack = stack_depot_save(entries, n, GFP_NOWAIT);
 }
 
 static void show_leaks(struct drm_mm *mm)
 {
 	struct drm_mm_node *node;
-	unsigned long entries[STACKDEPTH];
+	unsigned long *entries;
+	unsigned int nent;
 	char *buf;
 
 	buf = kmalloc(BUFSZ, GFP_KERNEL);
@@ -129,19 +126,14 @@ static void show_leaks(struct drm_mm *mm
 		return;
 
 	list_for_each_entry(node, drm_mm_nodes(mm), node_list) {
-		struct stack_trace trace = {
-			.entries = entries,
-			.max_entries = STACKDEPTH
-		};
-
 		if (!node->stack) {
 			DRM_ERROR("node [%08llx + %08llx]: unknown owner\n",
 				  node->start, node->size);
 			continue;
 		}
 
-		depot_fetch_stack(node->stack, &trace);
-		snprint_stack_trace(buf, BUFSZ, &trace, 0);
+		nent = stack_depot_fetch(node->stack, &entries);
+		stack_trace_snprint(buf, BUFSZ, entries, nent, 0);
 		DRM_ERROR("node [%08llx + %08llx]: inserted at\n%s",
 			  node->start, node->size, buf);
 	}
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -36,11 +36,8 @@
 
 static void vma_print_allocator(struct i915_vma *vma, const char *reason)
 {
-	unsigned long entries[12];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-	};
+	unsigned long *entries;
+	unsigned int nent;
 	char buf[512];
 
 	if (!vma->node.stack) {
@@ -49,8 +46,8 @@ static void vma_print_allocator(struct i
 		return;
 	}
 
-	depot_fetch_stack(vma->node.stack, &trace);
-	snprint_stack_trace(buf, sizeof(buf), &trace, 0);
+	nent = stack_depot_fetch(vma->node.stack, &entries);
+	stack_trace_snprint(buf, sizeof(buf), entries, nent, 0);
 	DRM_DEBUG_DRIVER("vma.node [%08llx + %08llx] %s: inserted at %s\n",
 			 vma->node.start, vma->node.size, reason, buf);
 }
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -60,27 +60,20 @@
 static noinline depot_stack_handle_t __save_depot_stack(void)
 {
 	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-		.skip = 1,
-	};
+	unsigned int n;
 
-	save_stack_trace(&trace);
-	return depot_save_stack(&trace, GFP_NOWAIT | __GFP_NOWARN);
+	n = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
+	return stack_depot_save(entries, n, GFP_NOWAIT | __GFP_NOWARN);
 }
 
 static void __print_depot_stack(depot_stack_handle_t stack,
 				char *buf, int sz, int indent)
 {
-	unsigned long entries[STACKDEPTH];
-	struct stack_trace trace = {
-		.entries = entries,
-		.max_entries = ARRAY_SIZE(entries),
-	};
+	unsigned long *entries;
+	unsigned int nent;
 
-	depot_fetch_stack(stack, &trace);
-	snprint_stack_trace(buf, sz, &trace, indent);
+	nent = stack_depot_fetch(stack, &entries);
+	stack_trace_snprint(buf, sz, entries, nent, indent);
 }
 
 static void init_intel_runtime_pm_wakeref(struct drm_i915_private *i915)



^ permalink raw reply	[flat|nested] 105+ messages in thread

* [RFC patch 33/41] lockdep: Remove unused trace argument from print_circular_bug()
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (31 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 32/41] drm: Simplify stacktrace handling Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 34/41] lockdep: Move stack trace logic into check_prev_add() Thomas Gleixner
                   ` (8 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/lockdep.c |    9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1522,10 +1522,9 @@ static inline int class_equal(struct loc
 }
 
 static noinline int print_circular_bug(struct lock_list *this,
-				struct lock_list *target,
-				struct held_lock *check_src,
-				struct held_lock *check_tgt,
-				struct stack_trace *trace)
+				       struct lock_list *target,
+				       struct held_lock *check_src,
+				       struct held_lock *check_tgt)
 {
 	struct task_struct *curr = current;
 	struct lock_list *parent;
@@ -2206,7 +2205,7 @@ check_prev_add(struct task_struct *curr,
 			 */
 			save(trace);
 		}
-		return print_circular_bug(&this, target_entry, next, prev, trace);
+		return print_circular_bug(&this, target_entry, next, prev);
 	}
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);




* [RFC patch 34/41] lockdep: Move stack trace logic into check_prev_add()
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (32 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 33/41] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 35/41] lockdep: Simplify stack trace handling Thomas Gleixner
                   ` (7 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

There is only one caller of check_prev_add(), which hands in a zeroed struct
stack_trace and a function pointer to save_trace(). Inside check_prev_add()
the stack_trace struct is checked for being empty, which is always
true. Based on that, one code path stores a stack trace which is never
used. The comment there does not make sense either. It's all leftovers from
historical lockdep code.

Move the variable into check_prev_add() itself and clean up the nonsensical
checks and the pointless stack trace recording.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/locking/lockdep.c |   30 ++++++++----------------------
 1 file changed, 8 insertions(+), 22 deletions(-)

--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2158,10 +2158,10 @@ check_deadlock(struct task_struct *curr,
  */
 static int
 check_prev_add(struct task_struct *curr, struct held_lock *prev,
-	       struct held_lock *next, int distance, struct stack_trace *trace,
-	       int (*save)(struct stack_trace *trace))
+	       struct held_lock *next, int distance)
 {
 	struct lock_list *uninitialized_var(target_entry);
+	struct stack_trace trace;
 	struct lock_list *entry;
 	struct lock_list this;
 	int ret;
@@ -2196,17 +2196,8 @@ check_prev_add(struct task_struct *curr,
 	this.class = hlock_class(next);
 	this.parent = NULL;
 	ret = check_noncircular(&this, hlock_class(prev), &target_entry);
-	if (unlikely(!ret)) {
-		if (!trace->entries) {
-			/*
-			 * If @save fails here, the printing might trigger
-			 * a WARN but because of the !nr_entries it should
-			 * not do bad things.
-			 */
-			save(trace);
-		}
+	if (unlikely(!ret))
 		return print_circular_bug(&this, target_entry, next, prev);
-	}
 	else if (unlikely(ret < 0))
 		return print_bfs_bug(ret);
 
@@ -2253,7 +2244,7 @@ check_prev_add(struct task_struct *curr,
 		return print_bfs_bug(ret);
 
 
-	if (!trace->entries && !save(trace))
+	if (!save_trace(&trace))
 		return 0;
 
 	/*
@@ -2262,14 +2253,14 @@ check_prev_add(struct task_struct *curr,
 	 */
 	ret = add_lock_to_list(hlock_class(next), hlock_class(prev),
 			       &hlock_class(prev)->locks_after,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance, &trace);
 
 	if (!ret)
 		return 0;
 
 	ret = add_lock_to_list(hlock_class(prev), hlock_class(next),
 			       &hlock_class(next)->locks_before,
-			       next->acquire_ip, distance, trace);
+			       next->acquire_ip, distance, &trace);
 	if (!ret)
 		return 0;
 
@@ -2287,12 +2278,6 @@ check_prevs_add(struct task_struct *curr
 {
 	int depth = curr->lockdep_depth;
 	struct held_lock *hlock;
-	struct stack_trace trace = {
-		.nr_entries = 0,
-		.max_entries = 0,
-		.entries = NULL,
-		.skip = 0,
-	};
 
 	/*
 	 * Debugging checks.
@@ -2318,7 +2303,8 @@ check_prevs_add(struct task_struct *curr
 		 * added:
 		 */
 		if (hlock->read != 2 && hlock->check) {
-			int ret = check_prev_add(curr, hlock, next, distance, &trace, save_trace);
+			int ret = check_prev_add(curr, hlock, next, distance);
+
 			if (!ret)
 				return 0;
 




* [RFC patch 35/41] lockdep: Simplify stack trace handling
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (33 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 34/41] lockdep: Move stack trace logic into check_prev_add() Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 36/41] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
                   ` (6 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

Replace the indirection through struct stack_trace by using the storage
array based interfaces and storing the information in a small lockdep
specific data structure.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/lockdep.h  |    9 +++++++--
 kernel/locking/lockdep.c |   39 ++++++++++++++++++++-------------------
 2 files changed, 27 insertions(+), 21 deletions(-)

--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -66,6 +66,11 @@ struct lock_class_key {
 
 extern struct lock_class_key __lockdep_no_validate__;
 
+struct lock_trace {
+	unsigned int		nr_entries;
+	unsigned int		offset;
+};
+
 #define LOCKSTAT_POINTS		4
 
 /*
@@ -100,7 +105,7 @@ struct lock_class {
 	 * IRQ/softirq usage tracking bits:
 	 */
 	unsigned long			usage_mask;
-	struct stack_trace		usage_traces[XXX_LOCK_USAGE_STATES];
+	struct lock_trace		usage_traces[XXX_LOCK_USAGE_STATES];
 
 	/*
 	 * Generation counter, when doing certain classes of graph walking,
@@ -188,7 +193,7 @@ struct lock_list {
 	struct list_head		entry;
 	struct lock_class		*class;
 	struct lock_class		*links_to;
-	struct stack_trace		trace;
+	struct lock_trace		trace;
 	int				distance;
 
 	/*
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -434,18 +434,13 @@ static void print_lockdep_off(const char
 #endif
 }
 
-static int save_trace(struct stack_trace *trace)
+static int save_trace(struct lock_trace *trace)
 {
-	trace->nr_entries = 0;
-	trace->max_entries = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
-	trace->entries = stack_trace + nr_stack_trace_entries;
-
-	trace->skip = 3;
-
-	save_stack_trace(trace);
-
-	trace->max_entries = trace->nr_entries;
+	unsigned long *entries = stack_trace + nr_stack_trace_entries;
+	unsigned int nent = MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;
 
+	trace->offset = nr_stack_trace_entries;
+	trace->nr_entries = stack_trace_save(entries, nent, 3);
 	nr_stack_trace_entries += trace->nr_entries;
 
 	if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES-1) {
@@ -1196,7 +1191,7 @@ static struct lock_list *alloc_list_entr
 static int add_lock_to_list(struct lock_class *this,
 			    struct lock_class *links_to, struct list_head *head,
 			    unsigned long ip, int distance,
-			    struct stack_trace *trace)
+			    struct lock_trace *trace)
 {
 	struct lock_list *entry;
 	/*
@@ -1415,6 +1410,13 @@ static inline int __bfs_backwards(struct
  * checking.
  */
 
+static void print_lock_trace(struct lock_trace *trace, unsigned int spaces)
+{
+	unsigned long *entries = stack_trace + trace->offset;
+
+	stack_trace_print(entries, trace->nr_entries, spaces);
+}
+
 /*
  * Print a dependency chain entry (this is only done when a deadlock
  * has been detected):
@@ -1427,8 +1429,7 @@ print_circular_bug_entry(struct lock_lis
 	printk("\n-> #%u", depth);
 	print_lock_name(target->class);
 	printk(KERN_CONT ":\n");
-	print_stack_trace(&target->trace, 6);
-
+	print_lock_trace(&target->trace, 6);
 	return 0;
 }
 
@@ -1740,7 +1741,7 @@ static void print_lock_class_header(stru
 
 			len += printk("%*s   %s", depth, "", usage_str[bit]);
 			len += printk(KERN_CONT " at:\n");
-			print_stack_trace(class->usage_traces + bit, len);
+			print_lock_trace(class->usage_traces + bit, len);
 		}
 	}
 	printk("%*s }\n", depth, "");
@@ -1765,7 +1766,7 @@ print_shortest_lock_dependencies(struct
 	do {
 		print_lock_class_header(entry->class, depth);
 		printk("%*s ... acquired at:\n", depth, "");
-		print_stack_trace(&entry->trace, 2);
+		print_lock_trace(&entry->trace, 2);
 		printk("\n");
 
 		if (depth == 0 && (entry != root)) {
@@ -1878,14 +1879,14 @@ print_bad_irq_dependency(struct task_str
 	print_lock_name(backwards_entry->class);
 	pr_warn("\n... which became %s-irq-safe at:\n", irqclass);
 
-	print_stack_trace(backwards_entry->class->usage_traces + bit1, 1);
+	print_lock_trace(backwards_entry->class->usage_traces + bit1, 1);
 
 	pr_warn("\nto a %s-irq-unsafe lock:\n", irqclass);
 	print_lock_name(forwards_entry->class);
 	pr_warn("\n... which became %s-irq-unsafe at:\n", irqclass);
 	pr_warn("...");
 
-	print_stack_trace(forwards_entry->class->usage_traces + bit2, 1);
+	print_lock_trace(forwards_entry->class->usage_traces + bit2, 1);
 
 	pr_warn("\nother info that might help us debug this:\n\n");
 	print_irq_lock_scenario(backwards_entry, forwards_entry,
@@ -2161,9 +2162,9 @@ check_prev_add(struct task_struct *curr,
 	       struct held_lock *next, int distance)
 {
 	struct lock_list *uninitialized_var(target_entry);
-	struct stack_trace trace;
 	struct lock_list *entry;
 	struct lock_list this;
+	struct lock_trace trace;
 	int ret;
 
 	if (!hlock_class(prev)->key || !hlock_class(next)->key) {
@@ -2801,7 +2802,7 @@ print_usage_bug(struct task_struct *curr
 	print_lock(this);
 
 	pr_warn("{%s} state was registered at:\n", usage_str[prev_bit]);
-	print_stack_trace(hlock_class(this)->usage_traces + prev_bit, 1);
+	print_lock_trace(hlock_class(this)->usage_traces + prev_bit, 1);
 
 	print_irqtrace_events(curr);
 	pr_warn("\nother info that might help us debug this:\n");




* [RFC patch 36/41] tracing: Simplify stacktrace retrieval in histograms
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (34 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 35/41] lockdep: Simplify stack trace handling Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 37/41] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
                   ` (5 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

The indirection through struct stack_trace is not necessary at all. Use the
storage array based interface.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace_events_hist.c |   12 +++---------
 1 file changed, 3 insertions(+), 9 deletions(-)

--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5186,7 +5186,6 @@ static void event_hist_trigger(struct ev
 	u64 var_ref_vals[TRACING_MAP_VARS_MAX];
 	char compound_key[HIST_KEY_SIZE_MAX];
 	struct tracing_map_elt *elt = NULL;
-	struct stack_trace stacktrace;
 	struct hist_field *key_field;
 	u64 field_contents;
 	void *key = NULL;
@@ -5198,14 +5197,9 @@ static void event_hist_trigger(struct ev
 		key_field = hist_data->fields[i];
 
 		if (key_field->flags & HIST_FIELD_FL_STACKTRACE) {
-			stacktrace.max_entries = HIST_STACKTRACE_DEPTH;
-			stacktrace.entries = entries;
-			stacktrace.nr_entries = 0;
-			stacktrace.skip = HIST_STACKTRACE_SKIP;
-
-			memset(stacktrace.entries, 0, HIST_STACKTRACE_SIZE);
-			save_stack_trace(&stacktrace);
-
+			memset(entries, 0, HIST_STACKTRACE_SIZE);
+			stack_trace_save(entries, HIST_STACKTRACE_DEPTH,
+					 HIST_STACKTRACE_SKIP);
 			key = entries;
 		} else {
 			field_contents = key_field->fn(key_field, elt, rbe, rec);




* [RFC patch 37/41] tracing: Use percpu stack trace buffer more intelligently
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (35 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 36/41] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 38/41] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
                   ` (4 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

The per cpu stack trace buffer usage pattern is odd at best. The buffer has
room for 512 stack trace entries on 64-bit and 1024 on 32-bit. When
interrupts or exceptions nest after the per cpu buffer was acquired the
stacktrace length is hardcoded to 8 entries. 512/1024 stack trace entries
in kernel stacks are unrealistic so the buffer is a complete waste.

Split the buffer into chunks of 64 stack entries which is plenty. This
allows nesting contexts (interrupts, exceptions) to utilize the cpu buffer
for stack retrieval and avoids the fixed length allocation along with the
conditional execution paths.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   77 +++++++++++++++++++++++++--------------------------
 1 file changed, 39 insertions(+), 38 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2749,12 +2749,21 @@ trace_function(struct trace_array *tr,
 
 #ifdef CONFIG_STACKTRACE
 
-#define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
+/* 64 entries for kernel stacks are plenty */
+#define FTRACE_KSTACK_ENTRIES	64
+
 struct ftrace_stack {
-	unsigned long		calls[FTRACE_STACK_MAX_ENTRIES];
+	unsigned long		calls[FTRACE_KSTACK_ENTRIES];
 };
 
-static DEFINE_PER_CPU(struct ftrace_stack, ftrace_stack);
+/* This allows 8 level nesting which is plenty */
+#define FTRACE_KSTACK_NESTING	(PAGE_SIZE / sizeof(struct ftrace_stack))
+
+struct ftrace_stacks {
+	struct ftrace_stack	stacks[FTRACE_KSTACK_NESTING];
+};
+
+static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
 static DEFINE_PER_CPU(int, ftrace_stack_reserve);
 
 static void __ftrace_trace_stack(struct ring_buffer *buffer,
@@ -2763,10 +2772,11 @@ static void __ftrace_trace_stack(struct
 {
 	struct trace_event_call *call = &event_kernel_stack;
 	struct ring_buffer_event *event;
+	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
 	struct stack_trace trace;
-	int use_stack;
-	int size = FTRACE_STACK_ENTRIES;
+	int size = FTRACE_KSTACK_ENTRIES;
+	int stackidx;
 
 	trace.nr_entries	= 0;
 	trace.skip		= skip;
@@ -2788,29 +2798,32 @@ static void __ftrace_trace_stack(struct
 	 */
 	preempt_disable_notrace();
 
-	use_stack = __this_cpu_inc_return(ftrace_stack_reserve);
+	stackidx = __this_cpu_inc_return(ftrace_stack_reserve);
+
+	/* This should never happen. If it does, yell once and skip */
+	if (WARN_ON_ONCE(stackidx >= FTRACE_KSTACK_NESTING))
+		goto out;
+
 	/*
-	 * We don't need any atomic variables, just a barrier.
-	 * If an interrupt comes in, we don't care, because it would
-	 * have exited and put the counter back to what we want.
-	 * We just need a barrier to keep gcc from moving things
-	 * around.
+	 * The above __this_cpu_inc_return() is 'atomic' cpu local. An
+	 * interrupt will either see the value pre increment or post
+	 * increment. If the interrupt happens pre increment it will have
+	 * restored the counter when it returns.  We just need a barrier to
+	 * keep gcc from moving things around.
 	 */
 	barrier();
-	if (use_stack == 1) {
-		trace.entries		= this_cpu_ptr(ftrace_stack.calls);
-		trace.max_entries	= FTRACE_STACK_MAX_ENTRIES;
-
-		if (regs)
-			save_stack_trace_regs(regs, &trace);
-		else
-			save_stack_trace(&trace);
-
-		if (trace.nr_entries > size)
-			size = trace.nr_entries;
-	} else
-		/* From now on, use_stack is a boolean */
-		use_stack = 0;
+
+	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
+	trace.entries		= fstack->calls;
+	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
+
+	if (regs)
+		save_stack_trace_regs(regs, &trace);
+	else
+		save_stack_trace(&trace);
+
+	if (trace.nr_entries > size)
+		size = trace.nr_entries;
 
 	size *= sizeof(unsigned long);
 
@@ -2820,19 +2833,7 @@ static void __ftrace_trace_stack(struct
 		goto out;
 	entry = ring_buffer_event_data(event);
 
-	memset(&entry->caller, 0, size);
-
-	if (use_stack)
-		memcpy(&entry->caller, trace.entries,
-		       trace.nr_entries * sizeof(unsigned long));
-	else {
-		trace.max_entries	= FTRACE_STACK_ENTRIES;
-		trace.entries		= entry->caller;
-		if (regs)
-			save_stack_trace_regs(regs, &trace);
-		else
-			save_stack_trace(&trace);
-	}
+	memcpy(&entry->caller, trace.entries, size);
 
 	entry->size = trace.nr_entries;
 




* [RFC patch 38/41] tracing: Make ftrace_trace_userstack() static and conditional
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (36 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 37/41] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 39/41] tracing: Simplify stack trace retrieval Thomas Gleixner
                   ` (3 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

It's only used in trace.c and there is absolutely no point in compiling it
in when user space stack traces are not supported.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   14 ++++++++------
 kernel/trace/trace.h |    8 --------
 2 files changed, 8 insertions(+), 14 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -159,6 +159,8 @@ static union trace_eval_map_item *trace_
 #endif /* CONFIG_TRACE_EVAL_MAP_FILE */
 
 static int tracing_set_tracer(struct trace_array *tr, const char *buf);
+static void ftrace_trace_userstack(struct ring_buffer *buffer,
+				   unsigned long flags, int pc);
 
 #define MAX_TRACER_SIZE		100
 static char bootup_tracer_buf[MAX_TRACER_SIZE] __initdata;
@@ -2905,9 +2907,10 @@ void trace_dump_stack(int skip)
 }
 EXPORT_SYMBOL_GPL(trace_dump_stack);
 
+#ifdef CONFIG_USER_STACKTRACE_SUPPORT
 static DEFINE_PER_CPU(int, user_stack_count);
 
-void
+static void
 ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
 {
 	struct trace_event_call *call = &event_user_stack;
@@ -2958,13 +2961,12 @@ ftrace_trace_userstack(struct ring_buffe
  out:
 	preempt_enable();
 }
-
-#ifdef UNUSED
-static void __trace_userstack(struct trace_array *tr, unsigned long flags)
+#else /* CONFIG_USER_STACKTRACE_SUPPORT */
+static void ftrace_trace_userstack(struct ring_buffer *buffer,
+				   unsigned long flags, int pc)
 {
-	ftrace_trace_userstack(tr, flags, preempt_count());
 }
-#endif /* UNUSED */
+#endif /* !CONFIG_USER_STACKTRACE_SUPPORT */
 
 #endif /* CONFIG_STACKTRACE */
 
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -782,17 +782,9 @@ void update_max_tr_single(struct trace_a
 #endif /* CONFIG_TRACER_MAX_TRACE */
 
 #ifdef CONFIG_STACKTRACE
-void ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags,
-			    int pc);
-
 void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
 		   int pc);
 #else
-static inline void ftrace_trace_userstack(struct ring_buffer *buffer,
-					  unsigned long flags, int pc)
-{
-}
-
 static inline void __trace_stack(struct trace_array *tr, unsigned long flags,
 				 int skip, int pc)
 {




* [RFC patch 39/41] tracing: Simplify stack trace retrieval
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (37 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 38/41] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 10:28 ` [RFC patch 40/41] stacktrace: Remove obsolete functions Thomas Gleixner
                   ` (2 subsequent siblings)
  41 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

Replace the indirection through struct stack_trace by using the storage
array based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
 kernel/trace/trace.c |   34 +++++++++-------------------------
 1 file changed, 9 insertions(+), 25 deletions(-)

--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -2776,20 +2776,16 @@ static void __ftrace_trace_stack(struct
 	struct ring_buffer_event *event;
 	struct ftrace_stack *fstack;
 	struct stack_entry *entry;
-	struct stack_trace trace;
-	int size = FTRACE_KSTACK_ENTRIES;
+	unsigned int size, nent;
 	int stackidx;
 
-	trace.nr_entries	= 0;
-	trace.skip		= skip;
-
 	/*
 	 * Add one, for this function and the call to save_stack_trace()
 	 * If regs is set, then these functions will not be in the way.
 	 */
 #ifndef CONFIG_UNWINDER_ORC
 	if (!regs)
-		trace.skip++;
+		skip++;
 #endif
 
 	/*
@@ -2816,28 +2812,22 @@ static void __ftrace_trace_stack(struct
 	barrier();
 
 	fstack = this_cpu_ptr(ftrace_stacks.stacks) + (stackidx - 1);
-	trace.entries		= fstack->calls;
-	trace.max_entries	= FTRACE_KSTACK_ENTRIES;
+	nent = ARRAY_SIZE(fstack->calls);
 
 	if (regs)
-		save_stack_trace_regs(regs, &trace);
+		nent = stack_trace_save_regs(regs, fstack->calls, nent, skip);
 	else
-		save_stack_trace(&trace);
-
-	if (trace.nr_entries > size)
-		size = trace.nr_entries;
-
-	size *= sizeof(unsigned long);
+		nent = stack_trace_save(fstack->calls, nent, skip);
 
+	size = nent * sizeof(unsigned long);
 	event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
 					    sizeof(*entry) + size, flags, pc);
 	if (!event)
 		goto out;
 	entry = ring_buffer_event_data(event);
 
-	memcpy(&entry->caller, trace.entries, size);
-
-	entry->size = trace.nr_entries;
+	memcpy(&entry->caller, fstack->calls, size);
+	entry->size = nent;
 
 	if (!call_filter_check_discard(call, entry, buffer, event))
 		__buffer_unlock_commit(buffer, event);
@@ -2916,7 +2906,6 @@ ftrace_trace_userstack(struct ring_buffe
 	struct trace_event_call *call = &event_user_stack;
 	struct ring_buffer_event *event;
 	struct userstack_entry *entry;
-	struct stack_trace trace;
 
 	if (!(global_trace.trace_flags & TRACE_ITER_USERSTACKTRACE))
 		return;
@@ -2947,12 +2936,7 @@ ftrace_trace_userstack(struct ring_buffe
 	entry->tgid		= current->tgid;
 	memset(&entry->caller, 0, sizeof(entry->caller));
 
-	trace.nr_entries	= 0;
-	trace.max_entries	= FTRACE_STACK_ENTRIES;
-	trace.skip		= 0;
-	trace.entries		= entry->caller;
-
-	save_stack_trace_user(&trace);
+	stack_trace_save_user(entry->caller, FTRACE_STACK_ENTRIES, 0);
 	if (!call_filter_check_discard(call, entry, buffer, event))
 		__buffer_unlock_commit(buffer, event);
 




* [RFC patch 40/41] stacktrace: Remove obsolete functions
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (38 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 39/41] tracing: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-11  3:33   ` Josh Poimboeuf
  2019-04-10 10:28 ` [RFC patch 41/41] lib/stackdepot: " Thomas Gleixner
  2019-04-10 11:49 ` [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Peter Zijlstra
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

No more users of the struct stack_trace based interfaces. Remove them.

Remove the macro stubs for !CONFIG_STACKTRACE as well, as they are pointless:
the storage at the call sites is conditional on CONFIG_STACKTRACE
already. No point in being 'smart'.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/stacktrace.h |   46 +++++++++++++++------------------------------
 kernel/stacktrace.c        |   14 -------------
 2 files changed, 16 insertions(+), 44 deletions(-)

--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -8,23 +8,6 @@ struct task_struct;
 struct pt_regs;
 
 #ifdef CONFIG_STACKTRACE
-struct stack_trace {
-	unsigned int nr_entries, max_entries;
-	unsigned long *entries;
-	int skip;	/* input argument: How many entries to skip */
-};
-
-extern void save_stack_trace(struct stack_trace *trace);
-extern void save_stack_trace_regs(struct pt_regs *regs,
-				  struct stack_trace *trace);
-extern void save_stack_trace_tsk(struct task_struct *tsk,
-				struct stack_trace *trace);
-extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
-					 struct stack_trace *trace);
-
-extern void print_stack_trace(struct stack_trace *trace, int spaces);
-extern int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces);
 
 extern void stack_trace_print(unsigned long *trace, unsigned int nr_entries,
 			      int spaces);
@@ -43,20 +26,23 @@ extern unsigned int stack_trace_save_reg
 extern unsigned int stack_trace_save_user(unsigned long *store,
 					  unsigned int size,
 					  unsigned int skipnr);
+/*
+ * The below is for stack trace internals and architecture
+ * implementations. Do not use in generic code.
+ */
+struct stack_trace {
+	unsigned int nr_entries, max_entries;
+	unsigned long *entries;
+	int skip;	/* input argument: How many entries to skip */
+};
 
-#ifdef CONFIG_USER_STACKTRACE_SUPPORT
+extern void save_stack_trace(struct stack_trace *trace);
+extern void save_stack_trace_regs(struct pt_regs *regs,
+				  struct stack_trace *trace);
+extern void save_stack_trace_tsk(struct task_struct *tsk,
+				struct stack_trace *trace);
+extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
+					 struct stack_trace *trace);
 extern void save_stack_trace_user(struct stack_trace *trace);
-#else
-# define save_stack_trace_user(trace)              do { } while (0)
-#endif
-
-#else /* !CONFIG_STACKTRACE */
-# define save_stack_trace(trace)			do { } while (0)
-# define save_stack_trace_tsk(tsk, trace)		do { } while (0)
-# define save_stack_trace_user(trace)			do { } while (0)
-# define print_stack_trace(trace, spaces)		do { } while (0)
-# define snprint_stack_trace(buf, size, trace, spaces)	do { } while (0)
-# define save_stack_trace_tsk_reliable(tsk, trace)	({ -ENOSYS; })
-#endif /* CONFIG_STACKTRACE */
 
 #endif /* __LINUX_STACKTRACE_H */
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -30,12 +30,6 @@ void stack_trace_print(unsigned long *en
 }
 EXPORT_SYMBOL_GPL(stack_trace_print);
 
-void print_stack_trace(struct stack_trace *trace, int spaces)
-{
-	stack_trace_print(trace->entries, trace->nr_entries, spaces);
-}
-EXPORT_SYMBOL_GPL(print_stack_trace);
-
 /**
  * stack_trace_snprint - Print the entries in the stack trace into a buffer
  * @buf:	Pointer to the print buffer
@@ -70,14 +64,6 @@ int stack_trace_snprint(char *buf, size_
 }
 EXPORT_SYMBOL_GPL(stack_trace_snprint);
 
-int snprint_stack_trace(char *buf, size_t size,
-			struct stack_trace *trace, int spaces)
-{
-	return stack_trace_snprint(buf, size, trace->entries,
-				   trace->nr_entries, spaces);
-}
-EXPORT_SYMBOL_GPL(snprint_stack_trace);
-
 /*
  * Architectures that do not implement save_stack_trace_*()
  * get these weak aliases and once-per-bootup warnings




* [RFC patch 41/41] lib/stackdepot: Remove obsolete functions
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (39 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 40/41] stacktrace: Remove obsolete functions Thomas Gleixner
@ 2019-04-10 10:28 ` Thomas Gleixner
  2019-04-10 13:49   ` Alexander Potapenko
  2019-04-10 11:49 ` [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Peter Zijlstra
  41 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 10:28 UTC (permalink / raw)
  To: LKML
  Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

No more users of the struct stack_trace based interfaces.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 include/linux/stackdepot.h |    4 ----
 lib/stackdepot.c           |   20 --------------------
 2 files changed, 24 deletions(-)

--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -23,13 +23,9 @@
 
 typedef u32 depot_stack_handle_t;
 
-struct stack_trace;
-
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries, gfp_t gfp_flags);
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -212,14 +212,6 @@ unsigned int stack_depot_fetch(depot_sta
 }
 EXPORT_SYMBOL_GPL(stack_depot_fetch);
 
-void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
-{
-	unsigned int nent = stack_depot_fetch(handle, &trace->entries);
-
-	trace->max_entries = trace->nr_entries = nent;
-}
-EXPORT_SYMBOL_GPL(depot_fetch_stack);
-
 /**
  * stack_depot_save - Save a stack trace from an array
  *
@@ -314,15 +306,3 @@ depot_stack_handle_t stack_depot_save(un
 	return retval;
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
-
-/**
- * depot_save_stack - save stack in a stack depot.
- * @trace - the stacktrace to save.
- * @alloc_flags - flags for allocating additional memory if required.
- */
-depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
-				      gfp_t alloc_flags)
-{
-	return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
-}
-EXPORT_SYMBOL_GPL(depot_save_stack);
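The calling convention of the surviving array based depot API can be modelled outside the kernel with a toy depot. Everything below is invented for the demo (a single fixed slab, a dummy handle); only the save/fetch signatures mirror stack_depot_save() and stack_depot_fetch():

```c
#include <assert.h>

#define DEPOT_SIZE 16

/* Mock backing store: one fixed "slab" plus the stored entry count.
 * The real depot hashes and deduplicates traces across many slabs. */
static unsigned long depot_slab[DEPOT_SIZE];
static unsigned int depot_len;

/* Save a plain storage array, return an opaque handle. */
unsigned int mock_depot_save(const unsigned long *entries, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n && i < DEPOT_SIZE; i++)
		depot_slab[i] = entries[i];
	depot_len = i;
	return 1; /* dummy handle */
}

/* Fetch hands back a pointer into depot storage and the entry count,
 * instead of filling a struct stack_trace. */
unsigned int mock_depot_fetch(unsigned int handle, unsigned long **entries)
{
	(void)handle;
	*entries = depot_slab;
	return depot_len;
}
```

The pointer-plus-count return is what lets the callers above drop their struct stack_trace locals entirely.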



^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stracktrace retrieval
@ 2019-04-10 11:08     ` Christoph Hellwig
  0 siblings, 0 replies; 105+ messages in thread
From: Christoph Hellwig @ 2019-04-10 11:08 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, iommu, Robin Murphy, Christoph Hellwig,
	Marek Szyprowski

On Wed, Apr 10, 2019 at 12:28:22PM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace with an invocation of
> the storage array based interface.

This seems to be missing some context, at least stack_trace_save does
not actually exist in mainline.

Please always send the whole series out to everyone on the To and Cc
list, otherwise patch series are not reviewable.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-10 10:28 ` [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
@ 2019-04-10 11:31   ` Johannes Thumshirn
  2019-04-10 12:05     ` Thomas Gleixner
  2019-04-10 12:50   ` David Sterba
  2019-04-10 13:47   ` Alexander Potapenko
  2 siblings, 1 reply; 105+ messages in thread
From: Johannes Thumshirn @ 2019-04-10 11:31 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs

On Wed, Apr 10, 2019 at 12:28:23PM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace with an invocation of
> the storage array based interface.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: David Sterba <dsterba@suse.com>
> Cc: Chris Mason <clm@fb.com>
> Cc: Josef Bacik <josef@toxicpanda.com>
> Cc: linux-btrfs@vger.kernel.org
> ---
>  fs/btrfs/ref-verify.c |   15 ++-------------
>  1 file changed, 2 insertions(+), 13 deletions(-)
> 
> --- a/fs/btrfs/ref-verify.c
> +++ b/fs/btrfs/ref-verify.c
> @@ -205,28 +205,17 @@ static struct root_entry *lookup_root_en
>  #ifdef CONFIG_STACKTRACE
>  static void __save_stack_trace(struct ref_action *ra)
>  {
> -	struct stack_trace stack_trace;
> -
> -	stack_trace.max_entries = MAX_TRACE;
> -	stack_trace.nr_entries = 0;
> -	stack_trace.entries = ra->trace;
> -	stack_trace.skip = 2;
> -	save_stack_trace(&stack_trace);
> -	ra->trace_len = stack_trace.nr_entries;
> +	ra->trace_len = stack_trace_save(ra->trace, MAX_TRACE, 2);


Stupid question: why are you passing a '2' for 'skipnr' and in
stack_trace_save() from your series you set stack_trace::skip as skipnr + 1. 

Wouldn't this result in a stack_trace::skip = 3? Or is it the number of
functions to be skipped and you don't want to have stack_trace_save() saved as
well? 

Thanks,
	Johannes
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 13/41] mm/kasan: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 13/41] mm/kasan: " Thomas Gleixner
@ 2019-04-10 11:31     ` Dmitry Vyukov
  2019-04-14 20:42   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 105+ messages in thread
From: Dmitry Vyukov @ 2019-04-10 11:31 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, the arch/x86 maintainers, Andy Lutomirski,
	Steven Rostedt, Alexander Potapenko, Andrey Ryabinin, kasan-dev,
	Linux-MM

On Wed, Apr 10, 2019 at 1:05 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> No architecture terminates the stack trace with ULONG_MAX anymore. Remove
> the cruft.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: linux-mm@kvack.org
> ---
>  mm/kasan/common.c |    3 ---
>  1 file changed, 3 deletions(-)
>
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -74,9 +74,6 @@ static inline depot_stack_handle_t save_
>
>         save_stack_trace(&trace);
>         filter_irq_stacks(&trace);
> -       if (trace.nr_entries != 0 &&
> -           trace.entries[trace.nr_entries-1] == ULONG_MAX)
> -               trace.nr_entries--;
>
>         return depot_save_stack(&trace, flags);
>  }


Acked-by: Dmitry Vyukov <dvyukov@google.com>

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
  2019-04-10 10:28 ` [RFC patch 25/41] mm/kasan: " Thomas Gleixner
@ 2019-04-10 11:33     ` Dmitry Vyukov
  2019-04-11  2:55   ` Josh Poimboeuf
  1 sibling, 0 replies; 105+ messages in thread
From: Dmitry Vyukov @ 2019-04-10 11:33 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, the arch/x86 maintainers, Andy Lutomirski,
	Steven Rostedt, Alexander Potapenko, Andrey Ryabinin, kasan-dev,
	Linux-MM

On Wed, Apr 10, 2019 at 1:06 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> ---
>  mm/kasan/common.c |   30 ++++++++++++------------------
>  mm/kasan/report.c |    7 ++++---
>  2 files changed, 16 insertions(+), 21 deletions(-)
>
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
>                  ptr < (unsigned long)&__softirqentry_text_end);
>  }
>
> -static inline void filter_irq_stacks(struct stack_trace *trace)
> +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> +                                            unsigned int nr_entries)
>  {
> -       int i;
> +       unsigned int i;
>
> -       if (!trace->nr_entries)
> -               return;
> -       for (i = 0; i < trace->nr_entries; i++)
> -               if (in_irqentry_text(trace->entries[i])) {
> +       for (i = 0; i < nr_entries; i++) {
> +               if (in_irqentry_text(entries[i])) {
>                         /* Include the irqentry function into the stack. */
> -                       trace->nr_entries = i + 1;
> -                       break;
> +                       return i + 1;
>                 }
> +       }
> +       return nr_entries;
>  }
>
>  static inline depot_stack_handle_t save_stack(gfp_t flags)
>  {
>         unsigned long entries[KASAN_STACK_DEPTH];
> -       struct stack_trace trace = {
> -               .nr_entries = 0,
> -               .entries = entries,
> -               .max_entries = KASAN_STACK_DEPTH,
> -               .skip = 0
> -       };
> +       unsigned int nent;
>
> -       save_stack_trace(&trace);
> -       filter_irq_stacks(&trace);
> -
> -       return depot_save_stack(&trace, flags);
> +       nent = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> +       nent = filter_irq_stacks(entries, nent);
> +       return stack_depot_save(entries, nent, flags);
>  }
>
>  static inline void set_track(struct kasan_track *track, gfp_t flags)
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -100,10 +100,11 @@ static void print_track(struct kasan_tra
>  {
>         pr_err("%s by task %u:\n", prefix, track->pid);
>         if (track->stack) {
> -               struct stack_trace trace;
> +               unsigned long *entries;
> +               unsigned int nent;
>
> -               depot_fetch_stack(track->stack, &trace);
> -               print_stack_trace(&trace, 0);
> +               nent = stack_depot_fetch(track->stack, &entries);
> +               stack_trace_print(entries, nent, 0);
>         } else {
>                 pr_err("(stack is not available)\n");
>         }


Acked-by: Dmitry Vyukov <dvyukov@google.com>

^ permalink raw reply	[flat|nested] 105+ messages in thread
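The array based filter_irq_stacks() from the patch above is easy to exercise standalone. In this sketch the real in_irqentry_text() section check is replaced by a hypothetical address boundary, purely for the demo:

```c
#include <assert.h>

/* Stand-in for in_irqentry_text(): treat any "address" at or above
 * this invented boundary as an irq entry function. */
#define IRQ_BASE 0x1000UL

static int in_irqentry_text(unsigned long ptr)
{
	return ptr >= IRQ_BASE;
}

/* Truncate the trace at the first irq entry function, keeping that
 * function itself in the trace. */
unsigned int filter_irq_stacks(unsigned long *entries,
			       unsigned int nr_entries)
{
	unsigned int i;

	for (i = 0; i < nr_entries; i++) {
		if (in_irqentry_text(entries[i]))
			return i + 1;
	}
	return nr_entries;
}
```

Note that dropping the old `!trace->nr_entries` early return is safe: with nr_entries == 0 the loop body never runs and 0 is returned unchanged.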


* Re: [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace
  2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
                   ` (40 preceding siblings ...)
  2019-04-10 10:28 ` [RFC patch 41/41] lib/stackdepot: " Thomas Gleixner
@ 2019-04-10 11:49 ` Peter Zijlstra
  41 siblings, 0 replies; 105+ messages in thread
From: Peter Zijlstra @ 2019-04-10 11:49 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

On Wed, Apr 10, 2019 at 12:27:54PM +0200, Thomas Gleixner wrote:
> Struct stack_trace is a sinkhole for input and output parameters which is
> largely pointless for most usage sites. In fact if embedded into other data
> structures it creates indirections and extra storage overhead for no benefit.
> 
> Looking at all usage sites makes it clear that they just require an
> interface which is based on a storage array. That array is either on stack,
> global or embedded into some other data structure.
> 
> Some of the stack depot usage sites are outright wrong, but fortunately the
> wrongness just causes more stack being used for nothing and does not have
> functional impact.
> 
> Another oddity is the inconsistent termination of the stack trace with
> ULONG_MAX. It's pointless as the number of entries is what determines the
> length of the stored trace. In fact quite some call sites remove the
> ULONG_MAX marker afterwards with or without nasty comments about it. Not
> all architectures do that and those which do, do it inconsistently either
> conditional on nr_entries == 0 or unconditionally.
> 
> The following series cleans that up by:
> 
>     1) Removing the ULONG_MAX termination in the architecture code
> 
>     2) Removing the ULONG_MAX fixups at the call sites
> 
>     3) Providing plain storage array based interfaces for stacktrace and
>        stackdepot.
> 
>     4) Cleaning up the mess at the callsites including some related
>        cleanups.
> 
>     5) Removing the struct stack_trace based interfaces
> 
> This is not changing the struct stack_trace interfaces at the architecture
> level, but it removes the exposure to the generic code.
> 
> It's only lightly tested as I'm traveling and access to my test boxes is
> limited.

This is indeed a much needed cleanup; thanks for starting this.

I didn't spot anything wrong while reading through it, so:

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-10 11:31   ` Johannes Thumshirn
@ 2019-04-10 12:05     ` Thomas Gleixner
  2019-04-10 12:38       ` Johannes Thumshirn
  0 siblings, 1 reply; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 12:05 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs

On Wed, 10 Apr 2019, Johannes Thumshirn wrote:

> On Wed, Apr 10, 2019 at 12:28:23PM +0200, Thomas Gleixner wrote:
> > Replace the indirection through struct stack_trace with an invocation of
> > the storage array based interface.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: David Sterba <dsterba@suse.com>
> > Cc: Chris Mason <clm@fb.com>
> > Cc: Josef Bacik <josef@toxicpanda.com>
> > Cc: linux-btrfs@vger.kernel.org
> > ---
> >  fs/btrfs/ref-verify.c |   15 ++-------------
> >  1 file changed, 2 insertions(+), 13 deletions(-)
> > 
> > --- a/fs/btrfs/ref-verify.c
> > +++ b/fs/btrfs/ref-verify.c
> > @@ -205,28 +205,17 @@ static struct root_entry *lookup_root_en
> >  #ifdef CONFIG_STACKTRACE
> >  static void __save_stack_trace(struct ref_action *ra)
> >  {
> > -	struct stack_trace stack_trace;
> > -
> > -	stack_trace.max_entries = MAX_TRACE;
> > -	stack_trace.nr_entries = 0;
> > -	stack_trace.entries = ra->trace;
> > -	stack_trace.skip = 2;
> > -	save_stack_trace(&stack_trace);
> > -	ra->trace_len = stack_trace.nr_entries;
> > +	ra->trace_len = stack_trace_save(ra->trace, MAX_TRACE, 2);
> 
> 
> Stupid question: why are you passing a '2' for 'skipnr' and in
> stack_trace_save() from your series you set stack_trace::skip as skipnr + 1. 
> 
> Wouldn't this result in a stack_trace::skip = 3? Or is it the number of
> functions to be skipped and you don't want to have stack_trace_save() saved as
> well? 

Correct. The extra call will shift the skipped one up, so I compensate for that.

Thanks,

	tglx
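The skip bookkeeping discussed in this subthread can be sketched in plain C with a mock unwinder. All names and "addresses" below are invented for the demo; the real interplay is between stack_trace_save() and the architecture's save_stack_trace():

```c
#include <assert.h>

#define MOCK_DEPTH 5

/* Simulated unwinder standing in for save_stack_trace(): frame 0 is
 * the unwinder itself, deeper callers follow.  'skip' drops that many
 * innermost frames, like stack_trace::skip does. */
unsigned int mock_unwind(unsigned long *store, unsigned int size,
			 unsigned int skip)
{
	static const unsigned long frames[MOCK_DEPTH] = {
		0xa0, 0xa1, 0xa2, 0xa3, 0xa4
	};
	unsigned int i, n = 0;

	for (i = skip; i < MOCK_DEPTH && n < size; i++)
		store[n++] = frames[i];
	return n;
}

/* Wrapper in the style of stack_trace_save(): it adds one call frame
 * of its own, so it bumps skipnr by one to hide itself.  A caller who
 * passed skip = 2 to the old interface keeps passing 2 here and gets
 * the same trace back. */
unsigned int mock_trace_save(unsigned long *store, unsigned int size,
			     unsigned int skipnr)
{
	return mock_unwind(store, size, skipnr + 1);
}
```

This is why the btrfs call site above passes 2 unchanged even though the effective stack_trace::skip becomes 3.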

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stracktrace retrieval
@ 2019-04-10 12:08       ` Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-10 12:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, iommu, Robin Murphy, Marek Szyprowski

On Wed, 10 Apr 2019, Christoph Hellwig wrote:

> On Wed, Apr 10, 2019 at 12:28:22PM +0200, Thomas Gleixner wrote:
> > Replace the indirection through struct stack_trace with an invocation of
> > the storage array based interface.
> 
> This seems to be missing some context, at least stack_trace_save does
> not actually exist in mainline.
> 
> Please always send the whole series out to everyone on the To and Cc
> list, otherwise patch series are not reviewable.
 
Bah. People complain about overly broad cc-lists and the context is on
lkml. But sure, I just bounced it to you.

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stracktrace retrieval
@ 2019-04-10 12:25         ` Steven Rostedt
  0 siblings, 0 replies; 105+ messages in thread
From: Steven Rostedt @ 2019-04-10 12:25 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Christoph Hellwig, LKML, Josh Poimboeuf, x86, Andy Lutomirski,
	Alexander Potapenko, iommu, Robin Murphy, Marek Szyprowski

On Wed, 10 Apr 2019 14:08:19 +0200 (CEST)
Thomas Gleixner <tglx@linutronix.de> wrote:

> On Wed, 10 Apr 2019, Christoph Hellwig wrote:
> > 
> > Please always send the whole series out to everyone on the To and Cc
> > list, otherwise patch series are not reviewable.  
>  
> Bah. People complain about overly broad cc-lists and the context is on
> lkml. But sure, I just bounced it to you.
> 

What I think is the best in between is to have the cover letter sent to
everyone in the patch series, and then individual patches sent to those
that need to know. git send-email does this but quilt mail does
not :-(   I've been looking at fixing quilt to do the same.

-- Steve

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-10 12:05     ` Thomas Gleixner
@ 2019-04-10 12:38       ` Johannes Thumshirn
  0 siblings, 0 replies; 105+ messages in thread
From: Johannes Thumshirn @ 2019-04-10 12:38 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs

On Wed, Apr 10, 2019 at 02:05:17PM +0200, Thomas Gleixner wrote:
> Correct. The extra call will shift the skipped one up, so I compensate for that.

OK, then
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
in case the series goes in.
-- 
Johannes Thumshirn                            SUSE Labs Filesystems
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 17/41] tracing: Make stack_trace_print() static and rename it
  2019-04-10 10:28 ` [RFC patch 17/41] tracing: Make stack_trace_print() static and rename it Thomas Gleixner
@ 2019-04-10 12:47   ` Steven Rostedt
  2019-04-11  0:19     ` AKASHI Takahiro
  0 siblings, 1 reply; 105+ messages in thread
From: Steven Rostedt @ 2019-04-10 12:47 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Alexander Potapenko,
	AKASHI Takahiro

On Wed, 10 Apr 2019 12:28:11 +0200
Thomas Gleixner <tglx@linutronix.de> wrote:

> It's only used in the source file where it is defined and it's using the
> stack_trace_ namespace. Rename it to free it up for stack trace related
> functions.
> 

Can you put it back to its original name "print_max_stack()" which was
changed by this commit:

bb99d8ccec7 ("tracing: Allow arch-specific stack tracer")

I actually want to do a clean up and remove all "trace_" functions that
are not a tracepoint. It's getting confusing to see a "trace_..." and
search for the corresponding TRACE_EVENT() macro and not being able to
find it.

Hmm, I'm not sure why Akashi changed that function to be global in the
first place. It looks like only check_stack() needed to be changed.

Akashi?

-- Steve


> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  include/linux/ftrace.h     |    1 -
>  kernel/trace/trace_stack.c |    4 ++--
>  2 files changed, 2 insertions(+), 3 deletions(-)
> 
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -251,7 +251,6 @@ extern unsigned long stack_trace_max_siz
>  extern arch_spinlock_t stack_trace_max_lock;
>  
>  extern int stack_tracer_enabled;
> -void stack_trace_print(void);
>  int
>  stack_trace_sysctl(struct ctl_table *table, int write,
>  		   void __user *buffer, size_t *lenp,
> --- a/kernel/trace/trace_stack.c
> +++ b/kernel/trace/trace_stack.c
> @@ -41,7 +41,7 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
>  int stack_tracer_enabled;
>  static int last_stack_tracer_enabled;
>  
> -void stack_trace_print(void)
> +static void trace_stack_trace_print(void)
>  {
>  	long i;
>  	int size;
> @@ -179,7 +179,7 @@ check_stack(unsigned long ip, unsigned l
>  	stack_trace_max.nr_entries = x;
>  
>  	if (task_stack_end_corrupted(current)) {
> -		stack_trace_print();
> +		trace_stack_trace_print();
>  		BUG();
>  	}
>  
> 


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-10 10:28 ` [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
  2019-04-10 11:31   ` Johannes Thumshirn
@ 2019-04-10 12:50   ` David Sterba
  2019-04-10 13:47   ` Alexander Potapenko
  2 siblings, 0 replies; 105+ messages in thread
From: David Sterba @ 2019-04-10 12:50 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Chris Mason, Steven Rostedt, Alexander Potapenko,
	Andy Lutomirski, x86, Josh Poimboeuf, Josef Bacik, linux-btrfs

On Wed, Apr 10, 2019 at 12:28:23PM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace with an invocation of
> the storage array based interface.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: David Sterba <dsterba@suse.com>
> Cc: Chris Mason <clm@fb.com>
> Cc: Josef Bacik <josef@toxicpanda.com>
> Cc: linux-btrfs@vger.kernel.org

Acked-by: David Sterba <dsterba@suse.com>

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 19/41] lib/stackdepot: Provide functions which operate on plain storage arrays
  2019-04-10 10:28 ` [RFC patch 19/41] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
@ 2019-04-10 13:39   ` Alexander Potapenko
  0 siblings, 0 replies; 105+ messages in thread
From: Alexander Potapenko @ 2019-04-10 13:39 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

On Wed, Apr 10, 2019 at 1:05 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> The struct stack_trace indirection in the stack depot functions is a truly
> pointless exercise which requires horrible code at the callsites.
>
> Provide interfaces based on plain storage arrays.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Alexander Potapenko <glider@google.com>
> ---
>  include/linux/stackdepot.h |    4 ++
>  lib/stackdepot.c           |   66 ++++++++++++++++++++++++++++++++-------------
>  2 files changed, 51 insertions(+), 19 deletions(-)
>
> --- a/include/linux/stackdepot.h
> +++ b/include/linux/stackdepot.h
> @@ -26,7 +26,11 @@ typedef u32 depot_stack_handle_t;
>  struct stack_trace;
>
>  depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
> +depot_stack_handle_t stack_depot_save(unsigned long *entries,
> +                                     unsigned int nr_entries, gfp_t gfp_flags);
>
>  void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
> +unsigned int stack_depot_fetch(depot_stack_handle_t handle,
> +                              unsigned long **entries);
>
>  #endif
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -194,40 +194,56 @@ static inline struct stack_record *find_
>         return NULL;
>  }
>
> -void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
> +/**
> + * stack_depot_fetch - Fetch stack entries from a depot
> + *
> + * @handle:            Stack depot handle returned from stack_depot_save()
> + * @entries:           Pointer to store the entries address
> + *
> + * Return: Number of entries in the stored trace
> + */
> +unsigned int stack_depot_fetch(depot_stack_handle_t handle,
> +                              unsigned long **entries)
>  {
>         union handle_parts parts = { .handle = handle };
>         void *slab = stack_slabs[parts.slabindex];
>         size_t offset = parts.offset << STACK_ALLOC_ALIGN;
>         struct stack_record *stack = slab + offset;
>
> -       trace->nr_entries = trace->max_entries = stack->size;
> -       trace->entries = stack->entries;
> -       trace->skip = 0;
> +       *entries = stack->entries;
> +       return stack->size;
> +}
> +EXPORT_SYMBOL_GPL(stack_depot_fetch);
> +
> +void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
> +{
> +       unsigned int nent = stack_depot_fetch(handle, &trace->entries);
> +
> +       trace->max_entries = trace->nr_entries = nent;
>  }
>  EXPORT_SYMBOL_GPL(depot_fetch_stack);
>
>  /**
> - * depot_save_stack - save stack in a stack depot.
> - * @trace - the stacktrace to save.
> - * @alloc_flags - flags for allocating additional memory if required.
> + * stack_depot_save - Save a stack trace from an array
>   *
> - * Returns the handle of the stack struct stored in depot.
> + * @entries:           Pointer to storage array
> + * @nr_entries:                Size of the storage array
> + * @alloc_flags:       Allocation gfp flags
> + *
> + * Returns the handle of the stack struct stored in depot
>   */
> -depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
> -                                   gfp_t alloc_flags)
> +depot_stack_handle_t stack_depot_save(unsigned long *entries,
> +                                     unsigned int nr_entries,
> +                                     gfp_t alloc_flags)
>  {
> -       u32 hash;
> -       depot_stack_handle_t retval = 0;
>         struct stack_record *found = NULL, **bucket;
> -       unsigned long flags;
> +       depot_stack_handle_t retval = 0;
>         struct page *page = NULL;
>         void *prealloc = NULL;
> +       unsigned long flags;
> +       u32 hash;
>
> -       if (unlikely(trace->nr_entries == 0))
> +       if (unlikely(nr_entries == 0))
>                 goto fast_exit;
>
> -       hash = hash_stack(trace->entries, trace->nr_entries);
> +       hash = hash_stack(entries, nr_entries);
>         bucket = &stack_table[hash & STACK_HASH_MASK];
>
>         /*
> @@ -235,8 +251,8 @@ depot_stack_handle_t depot_save_stack(st
>          * The smp_load_acquire() here pairs with smp_store_release() to
>          * |bucket| below.
>          */
> -       found = find_stack(smp_load_acquire(bucket), trace->entries,
> -                          trace->nr_entries, hash);
> +       found = find_stack(smp_load_acquire(bucket), entries,
> +                          nr_entries, hash);
>         if (found)
>                 goto exit;
>
> @@ -264,10 +280,10 @@ depot_stack_handle_t depot_save_stack(st
>
>         spin_lock_irqsave(&depot_lock, flags);
>
> -       found = find_stack(*bucket, trace->entries, trace->nr_entries, hash);
> +       found = find_stack(*bucket, entries, nr_entries, hash);
>         if (!found) {
>                 struct stack_record *new =
> -                       depot_alloc_stack(trace->entries, trace->nr_entries,
> +                       depot_alloc_stack(entries, nr_entries,
>                                           hash, &prealloc, alloc_flags);
>                 if (new) {
>                         new->next = *bucket;
> @@ -297,4 +313,16 @@ depot_stack_handle_t depot_save_stack(st
>  fast_exit:
>         return retval;
>  }
> +EXPORT_SYMBOL_GPL(stack_depot_save);
> +
> +/**
> + * depot_save_stack - save stack in a stack depot.
> + * @trace - the stacktrace to save.
> + * @alloc_flags - flags for allocating additional memory if required.
> + */
> +depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
> +                                     gfp_t alloc_flags)
> +{
> +       return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
> +}
>  EXPORT_SYMBOL_GPL(depot_save_stack);
>
>


-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval
  2019-04-10 10:28 ` [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
  2019-04-10 11:31   ` Johannes Thumshirn
  2019-04-10 12:50   ` David Sterba
@ 2019-04-10 13:47   ` Alexander Potapenko
  2 siblings, 0 replies; 105+ messages in thread
From: Alexander Potapenko @ 2019-04-10 13:47 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, David Sterba, Chris Mason, Josef Bacik,
	linux-btrfs

On Wed, Apr 10, 2019 at 1:06 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Replace the indirection through struct stack_trace with an invocation of
> the storage array based interface.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: David Sterba <dsterba@suse.com>
> Cc: Chris Mason <clm@fb.com>
> Cc: Josef Bacik <josef@toxicpanda.com>
> Cc: linux-btrfs@vger.kernel.org
> ---
>  fs/btrfs/ref-verify.c |   15 ++-------------
>  1 file changed, 2 insertions(+), 13 deletions(-)
>
> --- a/fs/btrfs/ref-verify.c
> +++ b/fs/btrfs/ref-verify.c
> @@ -205,28 +205,17 @@ static struct root_entry *lookup_root_en
>  #ifdef CONFIG_STACKTRACE
>  static void __save_stack_trace(struct ref_action *ra)
>  {
> -       struct stack_trace stack_trace;
> -
> -       stack_trace.max_entries = MAX_TRACE;
> -       stack_trace.nr_entries = 0;
> -       stack_trace.entries = ra->trace;
> -       stack_trace.skip = 2;
> -       save_stack_trace(&stack_trace);
> -       ra->trace_len = stack_trace.nr_entries;
> +       ra->trace_len = stack_trace_save(ra->trace, MAX_TRACE, 2);
Now that stack_trace.skip is gone, it's unclear what this "2" stands for.
Maybe add an inline comment saying it's skipnr?
(This is probably valid for all other stack_trace_save() callsites)
>  }
>
>  static void __print_stack_trace(struct btrfs_fs_info *fs_info,
>                                 struct ref_action *ra)
>  {
> -       struct stack_trace trace;
> -
>         if (ra->trace_len == 0) {
>                 btrfs_err(fs_info, "  ref-verify: no stacktrace");
>                 return;
>         }
> -       trace.nr_entries = ra->trace_len;
> -       trace.entries = ra->trace;
> -       print_stack_trace(&trace, 2);
> +       stack_trace_print(ra->trace, ra->trace_len, 2);
>  }
>  #else
>  static void inline __save_stack_trace(struct ref_action *ra)
>
>
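Picking up Alexander's point above, one way to make the skip count self-documenting at such callsites (a sketch only, against the stack_trace_save(buf, size, skipnr) signature introduced in this series; the constant name is invented here) would be:

```
/* Sketch: name the skipnr argument instead of passing a bare "2".
 * REF_VERIFY_SKIP_FRAMES is a hypothetical constant documenting that
 * __save_stack_trace() and its caller are dropped from the trace. */
#define REF_VERIFY_SKIP_FRAMES	2

static void __save_stack_trace(struct ref_action *ra)
{
	ra->trace_len = stack_trace_save(ra->trace, MAX_TRACE,
					 REF_VERIFY_SKIP_FRAMES);
}
```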



^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 41/41] lib/stackdepot: Remove obsolete functions
  2019-04-10 10:28 ` [RFC patch 41/41] lib/stackdepot: " Thomas Gleixner
@ 2019-04-10 13:49   ` Alexander Potapenko
  0 siblings, 0 replies; 105+ messages in thread
From: Alexander Potapenko @ 2019-04-10 13:49 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

On Wed, Apr 10, 2019 at 1:06 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> No more users of the struct stack_trace based interfaces.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Alexander Potapenko <glider@google.com>
> ---
>  include/linux/stackdepot.h |    4 ----
>  lib/stackdepot.c           |   20 --------------------
>  2 files changed, 24 deletions(-)
>
> --- a/include/linux/stackdepot.h
> +++ b/include/linux/stackdepot.h
> @@ -23,13 +23,9 @@
>
>  typedef u32 depot_stack_handle_t;
>
> -struct stack_trace;
> -
> -depot_stack_handle_t depot_save_stack(struct stack_trace *trace, gfp_t flags);
>  depot_stack_handle_t stack_depot_save(unsigned long *entries,
>                                       unsigned int nr_entries, gfp_t gfp_flags);
>
> -void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace);
>  unsigned int stack_depot_fetch(depot_stack_handle_t handle,
>                                unsigned long **entries);
>
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -212,14 +212,6 @@ unsigned int stack_depot_fetch(depot_sta
>  }
>  EXPORT_SYMBOL_GPL(stack_depot_fetch);
>
> -void depot_fetch_stack(depot_stack_handle_t handle, struct stack_trace *trace)
> -{
> -       unsigned int nent = stack_depot_fetch(handle, &trace->entries);
> -
> -       trace->max_entries = trace->nr_entries = nent;
> -}
> -EXPORT_SYMBOL_GPL(depot_fetch_stack);
> -
>  /**
>   * stack_depot_save - Save a stack trace from an array
>   *
> @@ -314,15 +306,3 @@ depot_stack_handle_t stack_depot_save(un
>         return retval;
>  }
>  EXPORT_SYMBOL_GPL(stack_depot_save);
> -
> -/**
> - * depot_save_stack - save stack in a stack depot.
> - * @trace - the stacktrace to save.
> - * @alloc_flags - flags for allocating additional memory if required.
> - */
> -depot_stack_handle_t depot_save_stack(struct stack_trace *trace,
> -                                     gfp_t alloc_flags)
> -{
> -       return stack_depot_save(trace->entries, trace->nr_entries, alloc_flags);
> -}
> -EXPORT_SYMBOL_GPL(depot_save_stack);
>
>



^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 17/41] tracing: Make stack_trace_print() static and rename it
  2019-04-10 12:47   ` Steven Rostedt
@ 2019-04-11  0:19     ` AKASHI Takahiro
  0 siblings, 0 replies; 105+ messages in thread
From: AKASHI Takahiro @ 2019-04-11  0:19 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Thomas Gleixner, LKML, Josh Poimboeuf, x86, Andy Lutomirski,
	Alexander Potapenko

On Wed, Apr 10, 2019 at 08:47:03AM -0400, Steven Rostedt wrote:
> On Wed, 10 Apr 2019 12:28:11 +0200
> Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> > It's only used in the source file where it is defined and it's using the
> > stack_trace_ namespace. Rename it to free it up for stack trace related
> > functions.
> > 
> 
> Can you put it back to its original name "print_max_stack()" which was
> changed by this commit:
> 
> bb99d8ccec7 ("tracing: Allow arch-specific stack tracer")
> 
> I actually want to do a clean up and remove all "trace_" functions that
> are not a tracepoint. It's getting confusing to see a "trace_..." and
> search for the corresponding TRACE_EVENT() macro and not being able to
> find it.
> 
> Hmm, I'm not sure why Akashi changed that function to be global in the
> first place. It looks like only check_stack() needed to be changed.
> 
> Akashi?

Well, as indicated in the commit log, I implemented arm64-specific
check_stack() and used stack_trace_print() in there. At the end of the day,
however, only the first part of my patch set[1], including 'bb99d8ccec7',
was merged. Since then the said function was never used outside of the file.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2015-December/393716.html

Thanks,
-Takahiro Akashi

> -- Steve
> 
> 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> >  include/linux/ftrace.h     |    1 -
> >  kernel/trace/trace_stack.c |    4 ++--
> >  2 files changed, 2 insertions(+), 3 deletions(-)
> > 
> > --- a/include/linux/ftrace.h
> > +++ b/include/linux/ftrace.h
> > @@ -251,7 +251,6 @@ extern unsigned long stack_trace_max_siz
> >  extern arch_spinlock_t stack_trace_max_lock;
> >  
> >  extern int stack_tracer_enabled;
> > -void stack_trace_print(void);
> >  int
> >  stack_trace_sysctl(struct ctl_table *table, int write,
> >  		   void __user *buffer, size_t *lenp,
> > --- a/kernel/trace/trace_stack.c
> > +++ b/kernel/trace/trace_stack.c
> > @@ -41,7 +41,7 @@ static DEFINE_MUTEX(stack_sysctl_mutex);
> >  int stack_tracer_enabled;
> >  static int last_stack_tracer_enabled;
> >  
> > -void stack_trace_print(void)
> > +static void trace_stack_trace_print(void)
> >  {
> >  	long i;
> >  	int size;
> > @@ -179,7 +179,7 @@ check_stack(unsigned long ip, unsigned l
> >  	stack_trace_max.nr_entries = x;
> >  
> >  	if (task_stack_end_corrupted(current)) {
> > -		stack_trace_print();
> > +		trace_stack_trace_print();
> >  		BUG();
> >  	}
> >  
> > 
> 

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 16/41] tracing: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 16/41] tracing: " Thomas Gleixner
@ 2019-04-11  2:34   ` Josh Poimboeuf
  2019-04-11  3:07     ` Steven Rostedt
  2019-04-14 20:44   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
  1 sibling, 1 reply; 105+ messages in thread
From: Josh Poimboeuf @ 2019-04-11  2:34 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko

On Wed, Apr 10, 2019 at 12:28:10PM +0200, Thomas Gleixner wrote:
> No architecture terminates the stack trace with ULONG_MAX anymore. As the
> code checks the number of entries stored anyway there is no point in
> keeping all that ULONG_MAX magic around.
> 
> The histogram code zeroes the storage before saving the stack, so if the
> trace is shorter than the maximum number of entries it can terminate the
> print loop if a zero entry is detected.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
>  kernel/trace/trace_events_hist.c |    2 +-
>  kernel/trace/trace_stack.c       |   20 +++++---------------
>  2 files changed, 6 insertions(+), 16 deletions(-)
> 
> --- a/kernel/trace/trace_events_hist.c
> +++ b/kernel/trace/trace_events_hist.c
> @@ -5246,7 +5246,7 @@ static void hist_trigger_stacktrace_prin
>  	unsigned int i;
>  
>  	for (i = 0; i < max_entries; i++) {
> -		if (stacktrace_entries[i] == ULONG_MAX)
> +		if (!stacktrace_entries[i])
>  			return;
>  
>  		seq_printf(m, "%*c", 1 + spaces, ' ');
> --- a/kernel/trace/trace_stack.c
> +++ b/kernel/trace/trace_stack.c
> @@ -18,8 +18,7 @@
>  
>  #include "trace.h"
>  
> -static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES+1] =
> -	 { [0 ... (STACK_TRACE_ENTRIES)] = ULONG_MAX };
> +static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES + 1];

Is the "+ 1" still needed?  AFAICT, accesses to this array never go past
nr_entries.

Also I've been staring at the code but I can't figure out why
max_entries is "- 1".

struct stack_trace stack_trace_max = {
	.max_entries		= STACK_TRACE_ENTRIES - 1,
	.entries		= &stack_dump_trace[0],
};

-- 
Josh

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 20/41] backtrace-test: Simplify stack trace handling
  2019-04-10 10:28 ` [RFC patch 20/41] backtrace-test: Simplify stack trace handling Thomas Gleixner
@ 2019-04-11  2:47   ` Josh Poimboeuf
  0 siblings, 0 replies; 105+ messages in thread
From: Josh Poimboeuf @ 2019-04-11  2:47 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko

On Wed, Apr 10, 2019 at 12:28:14PM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  kernel/backtracetest.c |   11 +++--------
>  1 file changed, 3 insertions(+), 8 deletions(-)
> 
> --- a/kernel/backtracetest.c
> +++ b/kernel/backtracetest.c
> @@ -48,19 +48,14 @@ static void backtrace_test_irq(void)
>  #ifdef CONFIG_STACKTRACE
>  static void backtrace_test_saved(void)
>  {
> -	struct stack_trace trace;
>  	unsigned long entries[8];
> +	unsigned int nent;

"Nent" isn't immediately readable to my eyes.  How about just good old
"nr_entries"?  (for this patch and all the others)

-- 
Josh

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
  2019-04-10 10:28 ` [RFC patch 25/41] mm/kasan: " Thomas Gleixner
  2019-04-10 11:33     ` Dmitry Vyukov
@ 2019-04-11  2:55   ` Josh Poimboeuf
  2019-04-14 16:54       ` Thomas Gleixner
  1 sibling, 1 reply; 105+ messages in thread
From: Josh Poimboeuf @ 2019-04-11  2:55 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Andrey Ryabinin, Dmitry Vyukov, kasan-dev, linux-mm

On Wed, Apr 10, 2019 at 12:28:19PM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> ---
>  mm/kasan/common.c |   30 ++++++++++++------------------
>  mm/kasan/report.c |    7 ++++---
>  2 files changed, 16 insertions(+), 21 deletions(-)
> 
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
>  		 ptr < (unsigned long)&__softirqentry_text_end);
>  }
>  
> -static inline void filter_irq_stacks(struct stack_trace *trace)
> +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> +					     unsigned int nr_entries)
>  {
> -	int i;
> +	unsigned int i;
>  
> -	if (!trace->nr_entries)
> -		return;
> -	for (i = 0; i < trace->nr_entries; i++)
> -		if (in_irqentry_text(trace->entries[i])) {
> +	for (i = 0; i < nr_entries; i++) {
> +		if (in_irqentry_text(entries[i])) {
>  			/* Include the irqentry function into the stack. */
> -			trace->nr_entries = i + 1;
> -			break;
> +			return i + 1;

Isn't this an off-by-one error if "i" points to the last entry of the
array?

-- 
Josh

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stacktrace retrieval
@ 2019-04-11  3:02     ` Josh Poimboeuf
  0 siblings, 0 replies; 105+ messages in thread
From: Josh Poimboeuf @ 2019-04-11  3:02 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	iommu, Robin Murphy, Christoph Hellwig, Marek Szyprowski

On Wed, Apr 10, 2019 at 12:28:22PM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace with an invocation of
> the storage array based interface.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: iommu@lists.linux-foundation.org
> Cc: Robin Murphy <robin.murphy@arm.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Marek Szyprowski <m.szyprowski@samsung.com>
> ---
>  kernel/dma/debug.c |   13 +++++--------
>  1 file changed, 5 insertions(+), 8 deletions(-)
> 
> --- a/kernel/dma/debug.c
> +++ b/kernel/dma/debug.c
> @@ -89,8 +89,8 @@ struct dma_debug_entry {
>  	int		 sg_mapped_ents;
>  	enum map_err_types  map_err_type;
>  #ifdef CONFIG_STACKTRACE
> -	struct		 stack_trace stacktrace;
> -	unsigned long	 st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
> +	unsigned int	st_len;
> +	unsigned long	st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];

nit: st_entries isn't very readable.  Thanks to the magic of compilers,
the characters are free, so why not call them "stacktrace_entries" and
"stacktrace_len".

-- 
Josh

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 16/41] tracing: Remove the ULONG_MAX stack trace hackery
  2019-04-11  2:34   ` Josh Poimboeuf
@ 2019-04-11  3:07     ` Steven Rostedt
  0 siblings, 0 replies; 105+ messages in thread
From: Steven Rostedt @ 2019-04-11  3:07 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Thomas Gleixner, LKML, x86, Andy Lutomirski, Alexander Potapenko

On Wed, 10 Apr 2019 21:34:25 -0500
Josh Poimboeuf <jpoimboe@redhat.com> wrote:

> > --- a/kernel/trace/trace_stack.c
> > +++ b/kernel/trace/trace_stack.c
> > @@ -18,8 +18,7 @@
> >  
> >  #include "trace.h"
> >  
> > -static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES+1] =
> > -	 { [0 ... (STACK_TRACE_ENTRIES)] = ULONG_MAX };
> > +static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES + 1];  
> 
> Is the "+ 1" still needed?  AFAICT, accesses to this array never go past
> nr_entries.

Probably not. But see this for an explanation:

 http://lkml.kernel.org/r/20180620110758.crunhd5bfep7zuiz@kili.mountain


> 
> Also I've been staring at the code but I can't figure out why
> max_entries is "- 1".
> 
> struct stack_trace stack_trace_max = {
> 	.max_entries		= STACK_TRACE_ENTRIES - 1,
> 	.entries		= &stack_dump_trace[0],
> };
> 

Well, it had a reason in the past, but there doesn't seem to be a
reason today.  Looking at git history, that code was originally:

	.max_entries		= STACK_TRACE_ENTRIES - 1,
	.entries		= &stack_dump_trace[1],

Where we had to make max_entries -1 as we started at the first index
into the array.

I'll have to take a new look into this code. After Thomas's clean up
here, I'm sure we can simplify it a bit more.

-- Steve


^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stacktrace retrieval
@ 2019-04-11  3:09       ` Steven Rostedt
  0 siblings, 0 replies; 105+ messages in thread
From: Steven Rostedt @ 2019-04-11  3:09 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Thomas Gleixner, LKML, x86, Andy Lutomirski, Alexander Potapenko,
	iommu, Robin Murphy, Christoph Hellwig, Marek Szyprowski

On Wed, 10 Apr 2019 22:02:01 -0500
Josh Poimboeuf <jpoimboe@redhat.com> wrote:

> >  #ifdef CONFIG_STACKTRACE
> > -	struct		 stack_trace stacktrace;
> > -	unsigned long	 st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];
> > +	unsigned int	st_len;
> > +	unsigned long	st_entries[DMA_DEBUG_STACKTRACE_ENTRIES];  
> 
> nit: st_entries isn't very readable.  Thanks to the magic of compilers,
> the characters are free, so why not call them "stacktrace_entries" and
> "stacktrace_len".

But doesn't that slow down the time it takes to compile?

/me runs...

-- Steve

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 40/41] stacktrace: Remove obsolete functions
  2019-04-10 10:28 ` [RFC patch 40/41] stacktrace: Remove obsolete functions Thomas Gleixner
@ 2019-04-11  3:33   ` Josh Poimboeuf
  2019-04-11  9:13     ` Peter Zijlstra
  2019-04-11 13:00     ` Josh Poimboeuf
  0 siblings, 2 replies; 105+ messages in thread
From: Josh Poimboeuf @ 2019-04-11  3:33 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko

On Wed, Apr 10, 2019 at 12:28:34PM +0200, Thomas Gleixner wrote:
> No more users of the struct stack_trace based interfaces. Remove them.
> 
> Remove the macro stubs for !CONFIG_STACKTRACE as well as they are pointless
> because the storage on the call sites is conditional on CONFIG_STACKTRACE
> already. No point to be 'smart'.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  include/linux/stacktrace.h |   46 +++++++++++++++------------------------------
>  kernel/stacktrace.c        |   14 -------------
>  2 files changed, 16 insertions(+), 44 deletions(-)
> 
> --- a/include/linux/stacktrace.h
> +++ b/include/linux/stacktrace.h
> @@ -8,23 +8,6 @@ struct task_struct;
>  struct pt_regs;
>  
>  #ifdef CONFIG_STACKTRACE
> -struct stack_trace {
> -	unsigned int nr_entries, max_entries;
> -	unsigned long *entries;
> -	int skip;	/* input argument: How many entries to skip */
> -};
> -
> -extern void save_stack_trace(struct stack_trace *trace);
> -extern void save_stack_trace_regs(struct pt_regs *regs,
> -				  struct stack_trace *trace);
> -extern void save_stack_trace_tsk(struct task_struct *tsk,
> -				struct stack_trace *trace);
> -extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
> -					 struct stack_trace *trace);
> -
> -extern void print_stack_trace(struct stack_trace *trace, int spaces);
> -extern int snprint_stack_trace(char *buf, size_t size,
> -			struct stack_trace *trace, int spaces);
>  
>  extern void stack_trace_print(unsigned long *trace, unsigned int nr_entries,
>  			      int spaces);
> @@ -43,20 +26,23 @@ extern unsigned int stack_trace_save_reg
>  extern unsigned int stack_trace_save_user(unsigned long *store,
>  					  unsigned int size,
>  					  unsigned int skipnr);
> +/*
> + * The below is for stack trace internals and architecture
> + * implementations. Do not use in generic code.
> + */
> +struct stack_trace {
> +	unsigned int nr_entries, max_entries;
> +	unsigned long *entries;
> +	int skip;	/* input argument: How many entries to skip */
> +};

I was a bit surprised to see struct stack_trace still standing at the
end of the patch set, but I guess 41 patches is enough :-)  Do we want
to eventually remove the struct altogether?

I was also hoping to see the fragile "skipnr" go away in favor of
something less dependent on compiler optimizations, but I'm not sure how
feasible that would be.

Regardless, these are very nice cleanups, nice work.

> -#ifdef CONFIG_USER_STACKTRACE_SUPPORT
> +extern void save_stack_trace(struct stack_trace *trace);
> +extern void save_stack_trace_regs(struct pt_regs *regs,
> +				  struct stack_trace *trace);
> +extern void save_stack_trace_tsk(struct task_struct *tsk,
> +				struct stack_trace *trace);
> +extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
> +					 struct stack_trace *trace);

save_stack_trace_tsk_reliable() is still in use by generic livepatch
code.

Also I wonder if it would make sense to rename these to
__save_stack_trace_*() or arch_save_stack_trace_*() to help discourage
them from being used by generic code.

-- 
Josh

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 40/41] stacktrace: Remove obsolete functions
  2019-04-11  3:33   ` Josh Poimboeuf
@ 2019-04-11  9:13     ` Peter Zijlstra
  2019-04-11 13:00     ` Josh Poimboeuf
  1 sibling, 0 replies; 105+ messages in thread
From: Peter Zijlstra @ 2019-04-11  9:13 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Thomas Gleixner, LKML, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko

On Wed, Apr 10, 2019 at 10:33:20PM -0500, Josh Poimboeuf wrote:
> On Wed, Apr 10, 2019 at 12:28:34PM +0200, Thomas Gleixner wrote:

> > +struct stack_trace {
> > +	unsigned int nr_entries, max_entries;
> > +	unsigned long *entries;
> > +	int skip;	/* input argument: How many entries to skip */
> > +};
> 
> I was a bit surprised to see struct stack_trace still standing at the
> end of the patch set, but I guess 41 patches is enough :-)  Do we want
> to eventually remove the struct altogether?
> 
> I was also hoping to see the fragile "skipnr" go away in favor of
> something less dependent on compiler optimizations, but I'm not sure how
> feasible that would be.

It will die, but that only takes another nr_arch+1 patches.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 40/41] stacktrace: Remove obsolete functions
  2019-04-11  3:33   ` Josh Poimboeuf
  2019-04-11  9:13     ` Peter Zijlstra
@ 2019-04-11 13:00     ` Josh Poimboeuf
  1 sibling, 0 replies; 105+ messages in thread
From: Josh Poimboeuf @ 2019-04-11 13:00 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko

On Wed, Apr 10, 2019 at 10:33:20PM -0500, Josh Poimboeuf wrote:
> > -#ifdef CONFIG_USER_STACKTRACE_SUPPORT
> > +extern void save_stack_trace(struct stack_trace *trace);
> > +extern void save_stack_trace_regs(struct pt_regs *regs,
> > +				  struct stack_trace *trace);
> > +extern void save_stack_trace_tsk(struct task_struct *tsk,
> > +				struct stack_trace *trace);
> > +extern int save_stack_trace_tsk_reliable(struct task_struct *tsk,
> > +					 struct stack_trace *trace);
> 
> save_stack_trace_tsk_reliable() is still in use by generic livepatch
> code.

kernel/trace/trace_stack.c and include/linux/ftrace.h also still use
struct stack_trace.

-- 
Josh

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stacktrace retrieval
@ 2019-04-11 17:21         ` Christoph Hellwig
  0 siblings, 0 replies; 105+ messages in thread
From: Christoph Hellwig @ 2019-04-11 17:21 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Christoph Hellwig, LKML, Josh Poimboeuf, x86, Andy Lutomirski,
	Steven Rostedt, Alexander Potapenko, iommu, Robin Murphy,
	Marek Szyprowski

On Wed, Apr 10, 2019 at 02:08:19PM +0200, Thomas Gleixner wrote:
> On Wed, 10 Apr 2019, Christoph Hellwig wrote:
> 
> > On Wed, Apr 10, 2019 at 12:28:22PM +0200, Thomas Gleixner wrote:
> > > Replace the indirection through struct stack_trace with an invocation of
> > > the storage array based interface.
> > 
> > This seems to be missing some context, at least stack_trace_save does
> > not actually exist in mainline.
> > 
> > Please always send the whole series out to everyone on the To and Cc
> > list, otherwise patch series are not reviewable.
>  
> Bah. People complain about overly broad cc-lists and the context is on
> lkml. But sure, I just bounced it to you.

People should stop complaining about that.  Deleting a mail is a single
keystroke.  Finding all the patches to apply them and test, or even
to review them is a nightmare.  That is why depending on my mood I'll
either complain like now when people do that crap, or if I feel bad
enough just ignore them.  If you don't give me the full context you
can't expect me to have an informed opinion.

Btw, the private forwarding is the worst of all worlds - now I have
the patches, but can't sensibly reply to them..

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stracktrace retrieval
@ 2019-04-11 17:36           ` Steven Rostedt
  0 siblings, 0 replies; 105+ messages in thread
From: Steven Rostedt @ 2019-04-11 17:36 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Thomas Gleixner, LKML, Josh Poimboeuf, x86, Andy Lutomirski,
	Alexander Potapenko, iommu, Robin Murphy, Marek Szyprowski

On Thu, 11 Apr 2019 19:21:30 +0200
Christoph Hellwig <hch@lst.de> wrote:

> > Bah. People complain about overly broad cc-lists and the context is on
> > lkml. But sure, I just bounced it to you.  
> 
> People should stop complaining about that.  Deleting a mail is a single
> keystroke.  Finding all the patches to apply them and test, or even
> to review them is a nightmare.  That is why depending on my mood I'll
> either complain like now when people do that crap, or if I feel bad
> enough just ignore them.  If you don't give me the full context you
> can't expect me to have an informed opinion.

I guess the issue is when you get a 41-patch series and there's only
one patch you need to look at. There are times I get Cc'd on patch sets
where I have no idea why I'm on the Cc. If I skim the patch set and
don't see the relevance, I simply ignore it.

But there may be one patch I was supposed to review and I miss it. I
personally prefer to be Cc'd only on the cover letter and the patch I
need to review. Now, if that patch is dependent on other patches, then
perhaps it would be nice to be Cc'd on them too.

In other words, I'd much rather be Cc'd on only the patches that pertain
to me (and the supporting patches for them) than the entire series.
Especially when it's 40 patches or more.

Yes, it's a single click to delete patches that I don't need to look
at, but what I usually do in these cases is just delete the entire
series.

Note, as I do a lot with stack traces, this entire series pertains to
me and I'm happy I was on the full Cc list. But there are other examples
where it does not.

> 
> Btw, the private forwarding is the worst of all worlds - now I have
> the patches, but can't sensibly reply to them..

BTW, lore.kernel.org has a way to reply back to the list.

-- Steve

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 28/41] dma/debug: Simplify stracktrace retrieval
@ 2019-04-11 17:44             ` Christoph Hellwig
  0 siblings, 0 replies; 105+ messages in thread
From: Christoph Hellwig @ 2019-04-11 17:44 UTC (permalink / raw)
  To: Steven Rostedt
  Cc: Christoph Hellwig, Thomas Gleixner, LKML, Josh Poimboeuf, x86,
	Andy Lutomirski, Alexander Potapenko, iommu, Robin Murphy,
	Marek Szyprowski

On Thu, Apr 11, 2019 at 01:36:02PM -0400, Steven Rostedt wrote:
> I guess the issue is when you get a 41 patch series, and there's only
> one patch you need to look at. There's times I get Cc'd on patch sets
> that I have no idea why I'm on the Cc. If I skim the patch set and
> don't see a relevance, I simply ignore it.

I sometimes do that as well, but then again ignoring/deleting is easy.
I wish people would think a little more on whom to Cc.  In general
I don't really need a personal Cc for drive-by patches - I'll happily
pick them up from the mailing list and actually prefer it that way.
But I have received contrary feedback from people that do want to be
CCed on every little thing.

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 21/41] proc: Simplify task stack retrieval
  2019-04-10 10:28 ` [RFC patch 21/41] proc: Simplify task stack retrieval Thomas Gleixner
@ 2019-04-14 14:49   ` Alexey Dobriyan
  0 siblings, 0 replies; 105+ messages in thread
From: Alexey Dobriyan @ 2019-04-14 14:49 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
	Alexander Potapenko, Andrew Morton

On Wed, Apr 10, 2019 at 12:28:15PM +0200, Thomas Gleixner wrote:
> @@ -430,20 +429,16 @@ static int proc_pid_stack(struct seq_fil
>  	if (!entries)
>  		return -ENOMEM;
>  
> -	trace.nr_entries	= 0;
> -	trace.max_entries	= MAX_STACK_TRACE_DEPTH;
> -	trace.entries		= entries;
> -	trace.skip		= 0;
> -
>  	err = lock_trace(task);
>  	if (!err) {
> -		unsigned int i;
> +		unsigned int i, nent;
>  
> -		save_stack_trace_tsk(task, &trace);
> +		nent = stack_trace_save_tsk(task, entries,
> +					    MAX_STACK_TRACE_DEPTH, 0);
>  
> -		for (i = 0; i < trace.nr_entries; i++) {
> +		for (i = 0; i < nent; i++)
>  			seq_printf(m, "[<0>] %pB\n", (void *)entries[i]);
> -		}
> +

I only object to {} removal. The rule of mandatory {} that new languages
have adopted is pretty cool. Otherwise

Reviewed-by: Alexey Dobriyan <adobriyan@gmail.com>
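
[For reference, the calling convention the hunk above switches to can be
sketched in plain C. Everything here is illustrative: stack_trace_save_tsk()
is stubbed with fake return addresses, and print_task_stack() stands in for
the proc_pid_stack() caller pattern.]

```c
#include <stddef.h>

#define MAX_STACK_TRACE_DEPTH 64

/*
 * Stub standing in for the kernel's stack_trace_save_tsk(): it fills the
 * caller-supplied array with fake return addresses and returns the number
 * of entries it stored, capped at the supplied array size.
 */
static unsigned int stack_trace_save_tsk(unsigned long *store,
					 unsigned int size)
{
	unsigned int i, depth = 3;	/* pretend the task is 3 frames deep */

	if (depth > size)
		depth = size;
	for (i = 0; i < depth; i++)
		store[i] = 0xc0de0000UL + i;
	return i;
}

/* Caller pattern from the hunk: array on the stack, length from the return
 * value. No struct stack_trace to populate or carry around. */
static unsigned int print_task_stack(void)
{
	unsigned long entries[MAX_STACK_TRACE_DEPTH];
	unsigned int i, nent;

	nent = stack_trace_save_tsk(entries, MAX_STACK_TRACE_DEPTH);
	for (i = 0; i < nent; i++)
		;	/* seq_printf(m, "[<0>] %pB\n", ...) in the kernel */
	return nent;
}
```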

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
  2019-04-11  2:55   ` Josh Poimboeuf
@ 2019-04-14 16:54       ` Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-14 16:54 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Andrey Ryabinin, Dmitry Vyukov, kasan-dev, linux-mm

On Wed, 10 Apr 2019, Josh Poimboeuf wrote:
> On Wed, Apr 10, 2019 at 12:28:19PM +0200, Thomas Gleixner wrote:
> > Replace the indirection through struct stack_trace by using the storage
> > array based interfaces.
> > 
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> > Cc: Alexander Potapenko <glider@google.com>
> > Cc: Dmitry Vyukov <dvyukov@google.com>
> > Cc: kasan-dev@googlegroups.com
> > Cc: linux-mm@kvack.org
> > ---
> >  mm/kasan/common.c |   30 ++++++++++++------------------
> >  mm/kasan/report.c |    7 ++++---
> >  2 files changed, 16 insertions(+), 21 deletions(-)
> > 
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
> >  		 ptr < (unsigned long)&__softirqentry_text_end);
> >  }
> >  
> > -static inline void filter_irq_stacks(struct stack_trace *trace)
> > +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> > +					     unsigned int nr_entries)
> >  {
> > -	int i;
> > +	unsigned int i;
> >  
> > -	if (!trace->nr_entries)
> > -		return;
> > -	for (i = 0; i < trace->nr_entries; i++)
> > -		if (in_irqentry_text(trace->entries[i])) {
> > +	for (i = 0; i < nr_entries; i++) {
> > +		if (in_irqentry_text(entries[i])) {
> >  			/* Include the irqentry function into the stack. */
> > -			trace->nr_entries = i + 1;
> > -			break;
> > +			return i + 1;
> 
> Isn't this an off-by-one error if "i" points to the last entry of the
> array?

Yes, copied one ...

^ permalink raw reply	[flat|nested] 105+ messages in thread

* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
  2019-04-14 16:54       ` Thomas Gleixner
@ 2019-04-14 17:00         ` Thomas Gleixner
  -1 siblings, 0 replies; 105+ messages in thread
From: Thomas Gleixner @ 2019-04-14 17:00 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
	Andrey Ryabinin, Dmitry Vyukov, kasan-dev, linux-mm

On Sun, 14 Apr 2019, Thomas Gleixner wrote:
> On Wed, 10 Apr 2019, Josh Poimboeuf wrote:
> > On Wed, Apr 10, 2019 at 12:28:19PM +0200, Thomas Gleixner wrote:
> > > Replace the indirection through struct stack_trace by using the storage
> > > array based interfaces.
> > > 
> > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > > Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> > > Cc: Alexander Potapenko <glider@google.com>
> > > Cc: Dmitry Vyukov <dvyukov@google.com>
> > > Cc: kasan-dev@googlegroups.com
> > > Cc: linux-mm@kvack.org
> > > ---
> > >  mm/kasan/common.c |   30 ++++++++++++------------------
> > >  mm/kasan/report.c |    7 ++++---
> > >  2 files changed, 16 insertions(+), 21 deletions(-)
> > > 
> > > --- a/mm/kasan/common.c
> > > +++ b/mm/kasan/common.c
> > > @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
> > >  		 ptr < (unsigned long)&__softirqentry_text_end);
> > >  }
> > >  
> > > -static inline void filter_irq_stacks(struct stack_trace *trace)
> > > +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> > > +					     unsigned int nr_entries)
> > >  {
> > > -	int i;
> > > +	unsigned int i;
> > >  
> > > -	if (!trace->nr_entries)
> > > -		return;
> > > -	for (i = 0; i < trace->nr_entries; i++)
> > > -		if (in_irqentry_text(trace->entries[i])) {
> > > +	for (i = 0; i < nr_entries; i++) {
> > > +		if (in_irqentry_text(entries[i])) {
> > >  			/* Include the irqentry function into the stack. */
> > > -			trace->nr_entries = i + 1;
> > > -			break;
> > > +			return i + 1;
> > 
> > Isn't this an off-by-one error if "i" points to the last entry of the
> > array?
> 
> Yes, copied one ...

Oh, no. The point is that it returns the number of stack entries to
store. So if i == nr_entries - 1, then it returns nr_entries, i.e. all
entries are stored.

Thanks,

	tglx


^ permalink raw reply	[flat|nested] 105+ messages in thread

* [tip:core/stacktrace] um/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 ` [RFC patch 01/41] um/stacktrace: Remove the pointless ULONG_MAX marker Thomas Gleixner
@ 2019-04-14 20:34   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: luto, tglx, linux-kernel, mingo, rostedt, peterz, richard, hpa,
	jpoimboe, glider

Commit-ID:  fdc7833964d83b7f7f39a03e2ee48a229ba0291f
Gitweb:     https://git.kernel.org/tip/fdc7833964d83b7f7f39a03e2ee48a229ba0291f
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:27:55 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:27 +0200

um/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: linux-um@lists.infradead.org
Link: https://lkml.kernel.org/r/20190410103643.662853876@linutronix.de

---
 arch/um/kernel/stacktrace.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/um/kernel/stacktrace.c b/arch/um/kernel/stacktrace.c
index ebe7bcf62684..bd95e020d509 100644
--- a/arch/um/kernel/stacktrace.c
+++ b/arch/um/kernel/stacktrace.c
@@ -63,8 +63,6 @@ static const struct stacktrace_ops dump_ops = {
 static void __save_stack_trace(struct task_struct *tsk, struct stack_trace *trace)
 {
 	dump_trace(tsk, &dump_ops, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace(struct stack_trace *trace)
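
[The marker the hunk above deletes only ever duplicated information the
consumer already has. A minimal user-space sketch of the two conventions
(nothing here is kernel code):]

```c
#include <limits.h>

/*
 * Old convention: trace length recovered by scanning for a ULONG_MAX
 * sentinel - fragile, since not every architecture stored one, and a
 * full array has no room for it anyway.
 */
static unsigned int trace_len_by_sentinel(const unsigned long *entries,
					  unsigned int max_entries)
{
	unsigned int i;

	for (i = 0; i < max_entries && entries[i] != ULONG_MAX; i++)
		;
	return i;
}

/* New convention: nr_entries is authoritative; no sentinel required. */
static unsigned int trace_len_by_count(unsigned int nr_entries)
{
	return nr_entries;
}
```

Both return the same length whenever the sentinel happens to be present;
the count-based form simply never depends on it being there.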

^ permalink raw reply	[flat|nested] 105+ messages in thread

* [tip:core/stacktrace] x86/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 ` [RFC patch 02/41] x86/stacktrace: " Thomas Gleixner
@ 2019-04-14 20:34   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, luto, hpa, mingo, rostedt, peterz, glider, jpoimboe, linux-kernel

Commit-ID:  c5c27a0a583844c69a433039e4fd6396ba23551b
Gitweb:     https://git.kernel.org/tip/c5c27a0a583844c69a433039e4fd6396ba23551b
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:27:56 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:27 +0200

x86/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Link: https://lkml.kernel.org/r/20190410103643.750954603@linutronix.de

---
 arch/x86/kernel/stacktrace.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 5c2d71a1dc06..b2f706f1e0b7 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -46,9 +46,6 @@ static void noinline __save_stack_trace(struct stack_trace *trace,
 		if (!addr || save_stack_address(trace, addr, nosched))
 			break;
 	}
-
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 /*
@@ -97,7 +94,7 @@ __save_stack_trace_reliable(struct stack_trace *trace,
 		if (regs) {
 			/* Success path for user tasks */
 			if (user_mode(regs))
-				goto success;
+				return 0;
 
 			/*
 			 * Kernel mode registers on the stack indicate an
@@ -132,10 +129,6 @@ __save_stack_trace_reliable(struct stack_trace *trace,
 	if (!(task->flags & (PF_KTHREAD | PF_IDLE)))
 		return -EINVAL;
 
-success:
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
-
 	return 0;
 }
 
@@ -221,9 +214,6 @@ void save_stack_trace_user(struct stack_trace *trace)
 	/*
 	 * Trace user stack if we are not a kernel thread
 	 */
-	if (current->mm) {
+	if (current->mm)
 		__save_stack_trace_user(trace);
-	}
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }

^ permalink raw reply	[flat|nested] 105+ messages in thread

* [tip:core/stacktrace] arm/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27   ` Thomas Gleixner
  (?)
@ 2019-04-14 20:35   ` tip-bot for Thomas Gleixner
  -1 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:35 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jpoimboe, hpa, glider, luto, linux, tglx, peterz, mingo, rostedt,
	linux-kernel

Commit-ID:  2a2bcfa0c94d8bc4770676a6799928036296c037
Gitweb:     https://git.kernel.org/tip/2a2bcfa0c94d8bc4770676a6799928036296c037
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:27:57 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:27 +0200

arm/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lkml.kernel.org/r/20190410103643.843075256@linutronix.de

---
 arch/arm/kernel/stacktrace.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/arm/kernel/stacktrace.c b/arch/arm/kernel/stacktrace.c
index a56e7c856ab5..86870f40f9a0 100644
--- a/arch/arm/kernel/stacktrace.c
+++ b/arch/arm/kernel/stacktrace.c
@@ -115,8 +115,6 @@ static noinline void __save_stack_trace(struct task_struct *tsk,
 		 * running on another CPU?  For now, ignore it as we
 		 * can't guarantee we won't explode.
 		 */
-		if (trace->nr_entries < trace->max_entries)
-			trace->entries[trace->nr_entries++] = ULONG_MAX;
 		return;
 #else
 		frame.fp = thread_saved_fp(tsk);
@@ -134,8 +132,6 @@ static noinline void __save_stack_trace(struct task_struct *tsk,
 	}
 
 	walk_stackframe(&frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
@@ -153,8 +149,6 @@ void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 	frame.pc = regs->ARM_pc;
 
 	walk_stackframe(&frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)

^ permalink raw reply	[flat|nested] 105+ messages in thread

* [tip:core/stacktrace] sh/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27   ` Thomas Gleixner
  (?)
@ 2019-04-14 20:36   ` tip-bot for Thomas Gleixner
  -1 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:36 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: jpoimboe, peterz, tglx, linux-kernel, dalias, rostedt, luto,
	mingo, glider, ysato, kuninori.morimoto.gx, horms+renesas, hpa

Commit-ID:  b01f6d368d296cac099383a3eb200e135420f885
Gitweb:     https://git.kernel.org/tip/b01f6d368d296cac099383a3eb200e135420f885
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:27:58 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:28 +0200

sh/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Rich Felker <dalias@libc.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Cc: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Cc: linux-sh@vger.kernel.org
Cc: Simon Horman <horms+renesas@verge.net.au>
Link: https://lkml.kernel.org/r/20190410103643.932464393@linutronix.de

---
 arch/sh/kernel/stacktrace.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/sh/kernel/stacktrace.c b/arch/sh/kernel/stacktrace.c
index f3cb2cccb262..2950b19ad077 100644
--- a/arch/sh/kernel/stacktrace.c
+++ b/arch/sh/kernel/stacktrace.c
@@ -49,8 +49,6 @@ void save_stack_trace(struct stack_trace *trace)
 	unsigned long *sp = (unsigned long *)current_stack_pointer;
 
 	unwind_stack(current, NULL, sp,  &save_stack_ops, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
@@ -84,7 +82,5 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 	unsigned long *sp = (unsigned long *)tsk->thread.sp;
 
 	unwind_stack(current, NULL, sp,  &save_stack_ops_nosched, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);

^ permalink raw reply	[flat|nested] 105+ messages in thread

* [tip:core/stacktrace] unicore32/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:27 ` [RFC patch 05/41] unicore32/stacktrace: " Thomas Gleixner
@ 2019-04-14 20:36   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:36 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: glider, jpoimboe, luto, tglx, hpa, linux-kernel, rostedt, mingo,
	peterz, gxt

Commit-ID:  f8a9a269c28ddd5d741e747ceca753af01c828f2
Gitweb:     https://git.kernel.org/tip/f8a9a269c28ddd5d741e747ceca753af01c828f2
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:27:59 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:28 +0200

unicore32/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the last trace entry with ULONG_MAX is a completely pointless
exercise and none of the consumers can rely on it because it's
inconsistently implemented across architectures. In fact quite some of the
callers remove the entry and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Link: https://lkml.kernel.org/r/20190410103644.036077691@linutronix.de

---
 arch/unicore32/kernel/stacktrace.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/unicore32/kernel/stacktrace.c b/arch/unicore32/kernel/stacktrace.c
index 9976e767d51c..e37da8c6837b 100644
--- a/arch/unicore32/kernel/stacktrace.c
+++ b/arch/unicore32/kernel/stacktrace.c
@@ -120,8 +120,6 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 	}
 
 	walk_stackframe(&frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 
 void save_stack_trace(struct stack_trace *trace)


* [tip:core/stacktrace] riscv/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:28   ` Thomas Gleixner
@ 2019-04-14 20:37   ` tip-bot for Thomas Gleixner
  -1 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:37 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: luto, palmer, aou, jpoimboe, hpa, glider, peterz, mingo, tglx,
	linux-kernel, rostedt

Commit-ID:  fa9833992d5ff3c0d6e81d708bec363bce2fb54c
Gitweb:     https://git.kernel.org/tip/fa9833992d5ff3c0d6e81d708bec363bce2fb54c
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:00 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:28 +0200

riscv/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the stack trace with a ULONG_MAX entry is a completely pointless
exercise: none of the consumers can rely on it because it is inconsistently
implemented across architectures. In fact quite a few callers strip the entry
and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: linux-riscv@lists.infradead.org
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Link: https://lkml.kernel.org/r/20190410103644.131061192@linutronix.de

---
 arch/riscv/kernel/stacktrace.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
index a4b1d94371a0..4d403274c2e8 100644
--- a/arch/riscv/kernel/stacktrace.c
+++ b/arch/riscv/kernel/stacktrace.c
@@ -169,8 +169,6 @@ static bool save_trace(unsigned long pc, void *arg)
 void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 {
 	walk_stackframe(tsk, NULL, save_trace, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 


* [tip:core/stacktrace] arm64/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:28   ` Thomas Gleixner
@ 2019-04-14 20:38   ` tip-bot for Thomas Gleixner
  -1 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:38 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, catalin.marinas, rostedt, peterz, hpa, will.deacon, tglx,
	jpoimboe, linux-kernel, luto, glider

Commit-ID:  7b2c7b6233497bfab8826ece574bc1c26e97478d
Gitweb:     https://git.kernel.org/tip/7b2c7b6233497bfab8826ece574bc1c26e97478d
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:01 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:29 +0200

arm64/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the stack trace with a ULONG_MAX entry is a completely pointless
exercise: none of the consumers can rely on it because it is inconsistently
implemented across architectures. In fact quite a few callers strip the entry
and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Link: https://lkml.kernel.org/r/20190410103644.220247845@linutronix.de

---
 arch/arm64/kernel/stacktrace.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index d908b5e9e949..b00ec7d483d1 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -140,8 +140,6 @@ void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 #endif
 
 	walk_stackframe(current, &frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);
 
@@ -172,8 +170,6 @@ static noinline void __save_stack_trace(struct task_struct *tsk,
 #endif
 
 	walk_stackframe(tsk, &frame, save_trace, &data);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 
 	put_task_stack(tsk);
 }


* [tip:core/stacktrace] parisc/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:28 ` [RFC patch 08/41] parisc/stacktrace: " Thomas Gleixner
@ 2019-04-14 20:38   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:38 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: James.Bottomley, hpa, glider, mingo, rostedt, luto, tglx,
	jpoimboe, linux-kernel, peterz, deller

Commit-ID:  4f3bd6ca310b594df09c8f1e319cda9baf502ec8
Gitweb:     https://git.kernel.org/tip/4f3bd6ca310b594df09c8f1e319cda9baf502ec8
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:02 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:29 +0200

parisc/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the stack trace with a ULONG_MAX entry is a completely pointless
exercise: none of the consumers can rely on it because it is inconsistently
implemented across architectures. In fact quite a few callers strip the entry
and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-parisc@vger.kernel.org
Link: https://lkml.kernel.org/r/20190410103644.308534788@linutronix.de

---
 arch/parisc/kernel/stacktrace.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/parisc/kernel/stacktrace.c b/arch/parisc/kernel/stacktrace.c
index ec5835e83a7a..6f0b9c8d8052 100644
--- a/arch/parisc/kernel/stacktrace.c
+++ b/arch/parisc/kernel/stacktrace.c
@@ -29,22 +29,17 @@ static void dump_trace(struct task_struct *task, struct stack_trace *trace)
 	}
 }
 
-
 /*
  * Save stack-backtrace addresses into a stack_trace buffer.
  */
 void save_stack_trace(struct stack_trace *trace)
 {
 	dump_trace(current, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
 void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 {
 	dump_trace(tsk, trace);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);


* [tip:core/stacktrace] s390/stacktrace: Remove the pointless ULONG_MAX marker
  2019-04-10 10:28 ` [RFC patch 09/41] s390/stacktrace: " Thomas Gleixner
@ 2019-04-14 20:39   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:39 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: schwidefsky, heiko.carstens, linux-kernel, mingo, rostedt,
	peterz, luto, jpoimboe, tglx, hpa, glider

Commit-ID:  6a28b4c2d93b812512d8d2e5179e61a14f578560
Gitweb:     https://git.kernel.org/tip/6a28b4c2d93b812512d8d2e5179e61a14f578560
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:03 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:29 +0200

s390/stacktrace: Remove the pointless ULONG_MAX marker

Terminating the stack trace with a ULONG_MAX entry is a completely pointless
exercise: none of the consumers can rely on it because it is inconsistently
implemented across architectures. In fact quite a few callers strip the entry
and adjust stack_trace.nr_entries afterwards.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Link: https://lkml.kernel.org/r/20190410103644.396788431@linutronix.de

---
 arch/s390/kernel/stacktrace.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
index 460dcfba7d4e..cc9ed9787068 100644
--- a/arch/s390/kernel/stacktrace.c
+++ b/arch/s390/kernel/stacktrace.c
@@ -45,8 +45,6 @@ void save_stack_trace(struct stack_trace *trace)
 
 	sp = current_stack_pointer();
 	dump_trace(save_address, trace, NULL, sp);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace);
 
@@ -58,8 +56,6 @@ void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 	if (tsk == current)
 		sp = current_stack_pointer();
 	dump_trace(save_address_nosched, trace, tsk, sp);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 
@@ -69,7 +65,5 @@ void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
 
 	sp = kernel_stack_pointer(regs);
 	dump_trace(save_address, trace, NULL, sp);
-	if (trace->nr_entries < trace->max_entries)
-		trace->entries[trace->nr_entries++] = ULONG_MAX;
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_regs);


* [tip:core/stacktrace] lockdep: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 10/41] lockdep: Remove the ULONG_MAX stack trace hackery Thomas Gleixner
@ 2019-04-14 20:40   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:40 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, luto, rostedt, linux-kernel, will.deacon, hpa, glider,
	mingo, tglx, jpoimboe

Commit-ID:  2dfed4565afe263751d2451ad22336ad806c25a6
Gitweb:     https://git.kernel.org/tip/2dfed4565afe263751d2451ad22336ad806c25a6
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:04 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:30 +0200

lockdep: Remove the ULONG_MAX stack trace hackery

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Link: https://lkml.kernel.org/r/20190410103644.485737321@linutronix.de

---
 kernel/locking/lockdep.c | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e16766ff184b..2edf9501d906 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -444,17 +444,6 @@ static int save_trace(struct stack_trace *trace)
 
 	save_stack_trace(trace);
 
-	/*
-	 * Some daft arches put -1 at the end to indicate its a full trace.
-	 *
-	 * <rant> this is buggy anyway, since it takes a whole extra entry so a
-	 * complete trace that maxes out the entries provided will be reported
-	 * as incomplete, friggin useless </rant>
-	 */
-	if (trace->nr_entries != 0 &&
-	    trace->entries[trace->nr_entries-1] == ULONG_MAX)
-		trace->nr_entries--;
-
 	trace->max_entries = trace->nr_entries;
 
 	nr_stack_trace_entries += trace->nr_entries;


* [tip:core/stacktrace] mm/slub: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 11/41] mm/slub: " Thomas Gleixner
@ 2019-04-14 20:40   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:40 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: cl, rostedt, linux-kernel, penberg, akpm, luto, peterz, glider,
	hpa, tglx, jpoimboe, mingo, rientjes

Commit-ID:  b8ca7ff7731f57b256fcc13a9b7d4913f5282e5c
Gitweb:     https://git.kernel.org/tip/b8ca7ff7731f57b256fcc13a9b7d4913f5282e5c
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:05 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:30 +0200

mm/slub: Remove the ULONG_MAX stack trace hackery

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

While at it, remove the pointless loop that clears the stack array
completely. It is sufficient to zero the entry following the stored trace, as
the consumers break out at the first zeroed entry anyway.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: linux-mm@kvack.org
Cc: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
Link: https://lkml.kernel.org/r/20190410103644.574058244@linutronix.de

---
 mm/slub.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index d30ede89f4a6..e2ccd12b6faa 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -553,7 +553,6 @@ static void set_track(struct kmem_cache *s, void *object,
 	if (addr) {
 #ifdef CONFIG_STACKTRACE
 		struct stack_trace trace;
-		int i;
 
 		trace.nr_entries = 0;
 		trace.max_entries = TRACK_ADDRS_COUNT;
@@ -563,20 +562,16 @@ static void set_track(struct kmem_cache *s, void *object,
 		save_stack_trace(&trace);
 		metadata_access_disable();
 
-		/* See rant in lockdep.c */
-		if (trace.nr_entries != 0 &&
-		    trace.entries[trace.nr_entries - 1] == ULONG_MAX)
-			trace.nr_entries--;
-
-		for (i = trace.nr_entries; i < TRACK_ADDRS_COUNT; i++)
-			p->addrs[i] = 0;
+		if (trace.nr_entries < TRACK_ADDRS_COUNT)
+			p->addrs[trace.nr_entries] = 0;
 #endif
 		p->addr = addr;
 		p->cpu = smp_processor_id();
 		p->pid = current->pid;
 		p->when = jiffies;
-	} else
+	} else {
 		memset(p, 0, sizeof(struct track));
+	}
 }
 
 static void init_tracking(struct kmem_cache *s, void *object)


* [tip:core/stacktrace] mm/page_owner: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 12/41] mm/page_owner: " Thomas Gleixner
@ 2019-04-14 20:41   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:41 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: glider, akpm, hpa, jpoimboe, rppt, mhocko, rostedt, mingo,
	linux-kernel, peterz, luto, tglx

Commit-ID:  4621c9858f05ab08434221e3a15cc8098645ef2a
Gitweb:     https://git.kernel.org/tip/4621c9858f05ab08434221e3a15cc8098645ef2a
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:06 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:30 +0200

mm/page_owner: Remove the ULONG_MAX stack trace hackery

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: https://lkml.kernel.org/r/20190410103644.661974663@linutronix.de

---
 mm/page_owner.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/page_owner.c b/mm/page_owner.c
index 925b6f44a444..df277e6bc3c6 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -148,9 +148,6 @@ static noinline depot_stack_handle_t save_stack(gfp_t flags)
 	depot_stack_handle_t handle;
 
 	save_stack_trace(&trace);
-	if (trace.nr_entries != 0 &&
-	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
-		trace.nr_entries--;
 
 	/*
 	 * We need to check recursion here because our request to stackdepot


* [tip:core/stacktrace] mm/kasan: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 13/41] mm/kasan: " Thomas Gleixner
  2019-04-10 11:31     ` Dmitry Vyukov
@ 2019-04-14 20:42   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, jpoimboe, peterz, linux-kernel, mingo, aryabinin, luto,
	glider, tglx, rostedt, dvyukov

Commit-ID:  ead97a49ec3a3cb9b5133acbfed9a49b91ebf37c
Gitweb:     https://git.kernel.org/tip/ead97a49ec3a3cb9b5133acbfed9a49b91ebf37c
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:07 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:31 +0200

mm/kasan: Remove the ULONG_MAX stack trace hackery

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Link: https://lkml.kernel.org/r/20190410103644.750219625@linutronix.de

---
 mm/kasan/common.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 80bbe62b16cd..38e5f20a775a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -74,9 +74,6 @@ static inline depot_stack_handle_t save_stack(gfp_t flags)
 
 	save_stack_trace(&trace);
 	filter_irq_stacks(&trace);
-	if (trace.nr_entries != 0 &&
-	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
-		trace.nr_entries--;
 
 	return depot_save_stack(&trace, flags);
 }


* [tip:core/stacktrace] latency_top: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 14/41] latency_top: " Thomas Gleixner
@ 2019-04-14 20:42   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: rostedt, peterz, jpoimboe, luto, tglx, mingo, glider, linux-kernel, hpa

Commit-ID:  accddc41b96915ab4e5d37796c6d17d70805999c
Gitweb:     https://git.kernel.org/tip/accddc41b96915ab4e5d37796c6d17d70805999c
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:08 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:31 +0200

latency_top: Remove the ULONG_MAX stack trace hackery

No architecture terminates the stack trace with ULONG_MAX anymore. The
consumers terminate either at the first zero entry or after the stored number
of entries, so there is no functional change.

Remove the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Link: https://lkml.kernel.org/r/20190410103644.853527514@linutronix.de

---
 fs/proc/base.c      |  3 +--
 kernel/latencytop.c | 12 ++++++------
 2 files changed, 7 insertions(+), 8 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 6a803a0b75df..5569f215fc54 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -489,10 +489,9 @@ static int lstats_show_proc(struct seq_file *m, void *v)
 				   lr->count, lr->time, lr->max);
 			for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
 				unsigned long bt = lr->backtrace[q];
+
 				if (!bt)
 					break;
-				if (bt == ULONG_MAX)
-					break;
 				seq_printf(m, " %ps", (void *)bt);
 			}
 			seq_putc(m, '\n');
diff --git a/kernel/latencytop.c b/kernel/latencytop.c
index 96b4179cee6a..f5a90ab3c6b9 100644
--- a/kernel/latencytop.c
+++ b/kernel/latencytop.c
@@ -120,8 +120,8 @@ account_global_scheduler_latency(struct task_struct *tsk,
 				break;
 			}
 
-			/* 0 and ULONG_MAX entries mean end of backtrace: */
-			if (record == 0 || record == ULONG_MAX)
+			/* 0 entry marks end of backtrace: */
+			if (!record)
 				break;
 		}
 		if (same) {
@@ -210,8 +210,8 @@ __account_scheduler_latency(struct task_struct *tsk, int usecs, int inter)
 				break;
 			}
 
-			/* 0 and ULONG_MAX entries mean end of backtrace: */
-			if (record == 0 || record == ULONG_MAX)
+			/* 0 entry is end of backtrace */
+			if (!record)
 				break;
 		}
 		if (same) {
@@ -252,10 +252,10 @@ static int lstats_show(struct seq_file *m, void *v)
 				   lr->count, lr->time, lr->max);
 			for (q = 0; q < LT_BACKTRACEDEPTH; q++) {
 				unsigned long bt = lr->backtrace[q];
+
 				if (!bt)
 					break;
-				if (bt == ULONG_MAX)
-					break;
+
 				seq_printf(m, " %ps", (void *)bt);
 			}
 			seq_puts(m, "\n");


* [tip:core/stacktrace] drm: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 15/41] drm: " Thomas Gleixner
@ 2019-04-14 20:43   ` tip-bot for Thomas Gleixner
  0 siblings, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:43 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: rostedt, linux-kernel, peterz, joonas.lahtinen, jani.nikula,
	daniel, tglx, mingo, maarten.lankhorst, jpoimboe, rodrigo.vivi,
	glider, airlied, luto, hpa

Commit-ID:  fa49e2eac9aa8259e1ea540d1bd301448d5b735d
Gitweb:     https://git.kernel.org/tip/fa49e2eac9aa8259e1ea540d1bd301448d5b735d
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:09 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:32 +0200

drm: Remove the ULONG_MAX stack trace hackery

No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Cc: intel-gfx@lists.freedesktop.org
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: David Airlie <airlied@linux.ie>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://lkml.kernel.org/r/20190410103644.945059666@linutronix.de

---
 drivers/gpu/drm/drm_mm.c                | 3 ---
 drivers/gpu/drm/i915/intel_runtime_pm.c | 4 ----
 2 files changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 2b4f373736c7..69552777e13a 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -113,9 +113,6 @@ static noinline void save_stack(struct drm_mm_node *node)
 	};
 
 	save_stack_trace(&trace);
-	if (trace.nr_entries != 0 &&
-	    trace.entries[trace.nr_entries-1] == ULONG_MAX)
-		trace.nr_entries--;
 
 	/* May be called under spinlock, so avoid sleeping */
 	node->stack = depot_save_stack(&trace, GFP_NOWAIT);
diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
index a017a4232c0f..1f8acbb332c9 100644
--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
+++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
@@ -67,10 +67,6 @@ static noinline depot_stack_handle_t __save_depot_stack(void)
 	};
 
 	save_stack_trace(&trace);
-	if (trace.nr_entries &&
-	    trace.entries[trace.nr_entries - 1] == ULONG_MAX)
-		trace.nr_entries--;
-
 	return depot_save_stack(&trace, GFP_NOWAIT | __GFP_NOWARN);
 }
 


* [tip:core/stacktrace] tracing: Remove the ULONG_MAX stack trace hackery
  2019-04-10 10:28 ` [RFC patch 16/41] tracing: " Thomas Gleixner
  2019-04-11  2:34   ` Josh Poimboeuf
@ 2019-04-14 20:44   ` tip-bot for Thomas Gleixner
  1 sibling, 0 replies; 105+ messages in thread
From: tip-bot for Thomas Gleixner @ 2019-04-14 20:44 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: rostedt, glider, tglx, peterz, mingo, luto, jpoimboe, linux-kernel, hpa

Commit-ID:  4285f2fcef8001ead0f1c9315ba50302cab68cda
Gitweb:     https://git.kernel.org/tip/4285f2fcef8001ead0f1c9315ba50302cab68cda
Author:     Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed, 10 Apr 2019 12:28:10 +0200
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Sun, 14 Apr 2019 19:58:32 +0200

tracing: Remove the ULONG_MAX stack trace hackery

No architecture terminates the stack trace with ULONG_MAX anymore. As the
code checks the number of stored entries anyway, there is no point in
keeping all that ULONG_MAX magic around.

The histogram code zeroes the storage before saving the stack, so if the
trace is shorter than the maximum number of entries it can terminate the
print loop if a zero entry is detected.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Alexander Potapenko <glider@google.com>
Link: https://lkml.kernel.org/r/20190410103645.048761764@linutronix.de

---
 kernel/trace/trace_events_hist.c |  2 +-
 kernel/trace/trace_stack.c       | 20 +++++---------------
 2 files changed, 6 insertions(+), 16 deletions(-)

diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 795aa2038377..21ceae299f7e 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5246,7 +5246,7 @@ static void hist_trigger_stacktrace_print(struct seq_file *m,
 	unsigned int i;
 
 	for (i = 0; i < max_entries; i++) {
-		if (stacktrace_entries[i] == ULONG_MAX)
+		if (!stacktrace_entries[i])
 			return;
 
 		seq_printf(m, "%*c", 1 + spaces, ' ');
diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
index eec648a0d673..c6e54ff25cae 100644
--- a/kernel/trace/trace_stack.c
+++ b/kernel/trace/trace_stack.c
@@ -18,8 +18,7 @@
 
 #include "trace.h"
 
-static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES+1] =
-	 { [0 ... (STACK_TRACE_ENTRIES)] = ULONG_MAX };
+static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES + 1];
 unsigned stack_trace_index[STACK_TRACE_ENTRIES];
 
 /*
@@ -52,10 +51,7 @@ void stack_trace_print(void)
 			   stack_trace_max.nr_entries);
 
 	for (i = 0; i < stack_trace_max.nr_entries; i++) {
-		if (stack_dump_trace[i] == ULONG_MAX)
-			break;
-		if (i+1 == stack_trace_max.nr_entries ||
-				stack_dump_trace[i+1] == ULONG_MAX)
+		if (i + 1 == stack_trace_max.nr_entries)
 			size = stack_trace_index[i];
 		else
 			size = stack_trace_index[i] - stack_trace_index[i+1];
@@ -150,8 +146,6 @@ check_stack(unsigned long ip, unsigned long *stack)
 		p = start;
 
 		for (; p < top && i < stack_trace_max.nr_entries; p++) {
-			if (stack_dump_trace[i] == ULONG_MAX)
-				break;
 			/*
 			 * The READ_ONCE_NOCHECK is used to let KASAN know that
 			 * this is not a stack-out-of-bounds error.
@@ -183,8 +177,6 @@ check_stack(unsigned long ip, unsigned long *stack)
 	}
 
 	stack_trace_max.nr_entries = x;
-	for (; x < i; x++)
-		stack_dump_trace[x] = ULONG_MAX;
 
 	if (task_stack_end_corrupted(current)) {
 		stack_trace_print();
@@ -286,7 +278,7 @@ __next(struct seq_file *m, loff_t *pos)
 {
 	long n = *pos - 1;
 
-	if (n >= stack_trace_max.nr_entries || stack_dump_trace[n] == ULONG_MAX)
+	if (n >= stack_trace_max.nr_entries)
 		return NULL;
 
 	m->private = (void *)n;
@@ -360,12 +352,10 @@ static int t_show(struct seq_file *m, void *v)
 
 	i = *(long *)v;
 
-	if (i >= stack_trace_max.nr_entries ||
-	    stack_dump_trace[i] == ULONG_MAX)
+	if (i >= stack_trace_max.nr_entries)
 		return 0;
 
-	if (i+1 == stack_trace_max.nr_entries ||
-	    stack_dump_trace[i+1] == ULONG_MAX)
+	if (i + 1 == stack_trace_max.nr_entries)
 		size = stack_trace_index[i];
 	else
 		size = stack_trace_index[i] - stack_trace_index[i+1];


end of thread, other threads:[~2019-04-14 20:44 UTC | newest]

Thread overview: 105+ messages
2019-04-10 10:27 [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Thomas Gleixner
2019-04-10 10:27 ` [RFC patch 01/41] um/stacktrace: Remove the pointless ULONG_MAX marker Thomas Gleixner
2019-04-14 20:34   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:27 ` [RFC patch 02/41] x86/stacktrace: " Thomas Gleixner
2019-04-14 20:34   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:27 ` [RFC patch 03/41] arm/stacktrace: " Thomas Gleixner
2019-04-10 10:27   ` Thomas Gleixner
2019-04-14 20:35   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:27 ` [RFC patch 04/41] sh/stacktrace: " Thomas Gleixner
2019-04-10 10:27   ` Thomas Gleixner
2019-04-14 20:36   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:27 ` [RFC patch 05/41] unicore32/stacktrace: " Thomas Gleixner
2019-04-14 20:36   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 06/41] riscv/stacktrace: " Thomas Gleixner
2019-04-10 10:28   ` Thomas Gleixner
2019-04-14 20:37   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 07/41] arm64/stacktrace: " Thomas Gleixner
2019-04-10 10:28   ` Thomas Gleixner
2019-04-14 20:38   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 08/41] parisc/stacktrace: " Thomas Gleixner
2019-04-14 20:38   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 09/41] s390/stacktrace: " Thomas Gleixner
2019-04-14 20:39   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 10/41] lockdep: Remove the ULONG_MAX stack trace hackery Thomas Gleixner
2019-04-14 20:40   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 11/41] mm/slub: " Thomas Gleixner
2019-04-14 20:40   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 12/41] mm/page_owner: " Thomas Gleixner
2019-04-14 20:41   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 13/41] mm/kasan: " Thomas Gleixner
2019-04-10 11:31   ` Dmitry Vyukov
2019-04-14 20:42   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 14/41] latency_top: " Thomas Gleixner
2019-04-14 20:42   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 15/41] drm: " Thomas Gleixner
2019-04-14 20:43   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 16/41] tracing: " Thomas Gleixner
2019-04-11  2:34   ` Josh Poimboeuf
2019-04-11  3:07     ` Steven Rostedt
2019-04-14 20:44   ` [tip:core/stacktrace] " tip-bot for Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 17/41] tracing: Make stack_trace_print() static and rename it Thomas Gleixner
2019-04-10 12:47   ` Steven Rostedt
2019-04-11  0:19     ` AKASHI Takahiro
2019-04-10 10:28 ` [RFC patch 18/41] stacktrace: Provide helpers for common stack trace operations Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 19/41] lib/stackdepot: Provide functions which operate on plain storage arrays Thomas Gleixner
2019-04-10 13:39   ` Alexander Potapenko
2019-04-10 10:28 ` [RFC patch 20/41] backtrace-test: Simplify stack trace handling Thomas Gleixner
2019-04-11  2:47   ` Josh Poimboeuf
2019-04-10 10:28 ` [RFC patch 21/41] proc: Simplify task stack retrieval Thomas Gleixner
2019-04-14 14:49   ` Alexey Dobriyan
2019-04-10 10:28 ` [RFC patch 22/41] latency_top: Simplify stack trace handling Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 23/41] mm/slub: Simplify stack trace retrieval Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 24/41] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 25/41] mm/kasan: " Thomas Gleixner
2019-04-10 11:33   ` Dmitry Vyukov
2019-04-11  2:55   ` Josh Poimboeuf
2019-04-14 16:54     ` Thomas Gleixner
2019-04-14 17:00       ` Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 26/41] mm/page_owner: Simplify stack trace handling Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 27/41] fault-inject: Simplify stacktrace retrieval Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 28/41] dma/debug: Simplify stracktrace retrieval Thomas Gleixner
2019-04-10 11:08   ` Christoph Hellwig
2019-04-10 12:08     ` Thomas Gleixner
2019-04-10 12:25       ` Steven Rostedt
2019-04-11 17:21       ` Christoph Hellwig
2019-04-11 17:36         ` Steven Rostedt
2019-04-11 17:44           ` Christoph Hellwig
2019-04-11  3:02   ` Josh Poimboeuf
2019-04-11  3:09     ` Steven Rostedt
2019-04-10 10:28 ` [RFC patch 29/41] btrfs: ref-verify: Simplify stack trace retrieval Thomas Gleixner
2019-04-10 11:31   ` Johannes Thumshirn
2019-04-10 12:05     ` Thomas Gleixner
2019-04-10 12:38       ` Johannes Thumshirn
2019-04-10 12:50   ` David Sterba
2019-04-10 13:47   ` Alexander Potapenko
2019-04-10 10:28 ` [RFC patch 30/41] dm bufio: " Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 31/41] dm persistent data: Simplify stack trace handling Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 32/41] drm: Simplify stacktrace handling Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 33/41] lockdep: Remove unused trace argument from print_circular_bug() Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 34/41] lockdep: Move stack trace logic into check_prev_add() Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 35/41] lockdep: Simplify stack trace handling Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 36/41] tracing: Simplify stacktrace retrieval in histograms Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 37/41] tracing: Use percpu stack trace buffer more intelligently Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 38/41] tracing: Make ftrace_trace_userstack() static and conditional Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 39/41] tracing: Simplify stack trace retrieval Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 40/41] stacktrace: Remove obsolete functions Thomas Gleixner
2019-04-11  3:33   ` Josh Poimboeuf
2019-04-11  9:13     ` Peter Zijlstra
2019-04-11 13:00     ` Josh Poimboeuf
2019-04-10 10:28 ` [RFC patch 41/41] lib/stackdepot: " Thomas Gleixner
2019-04-10 13:49   ` Alexander Potapenko
2019-04-10 11:49 ` [RFC patch 00/41] stacktrace: Avoid the pointless redirection through struct stack_trace Peter Zijlstra
