* [PATCH 0/3] arm64: Convert to ARCH_STACKWALK
@ 2020-07-15 20:28 ` Mark Brown
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2020-07-15 20:28 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin
  Cc: Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel, Mark Brown

This series updates the arm64 stacktrace code to use the newer and much
simpler arch_stack_walk() interface. The main benefit is a single entry
point into the arch code, with no need for the arch code to worry about
skipping frames. Along the way I noticed that the reliable parameter to
the arch_stack_walk() callback now appears to be redundant, so the series
also includes a patch removing it from the existing code to simplify the
interface.

This series depends on commit 0de674afe83cb2367 ("arm64: stacktrace: Move
export for save_stack_trace_tsk()"), since the code that was fixed there
is removed here.

Mark Brown (3):
  stacktrace: Remove reliable argument from arch_stack_walk() callback
  arm64: stacktrace: Make stack walk callback consistent with generic
    code
  arm64: stacktrace: Convert to ARCH_STACKWALK

 arch/arm64/Kconfig                  |  1 +
 arch/arm64/include/asm/stacktrace.h |  2 +-
 arch/arm64/kernel/perf_callchain.c  |  6 +--
 arch/arm64/kernel/return_address.c  |  8 +--
 arch/arm64/kernel/stacktrace.c      | 84 ++++-------------------------
 arch/s390/kernel/stacktrace.c       |  4 +-
 arch/x86/kernel/stacktrace.c        | 10 ++--
 include/linux/stacktrace.h          |  5 +-
 kernel/stacktrace.c                 |  8 ++-
 9 files changed, 30 insertions(+), 98 deletions(-)

-- 
2.20.1


* [PATCH 1/3] stacktrace: Remove reliable argument from arch_stack_walk() callback
  2020-07-15 20:28 ` Mark Brown
@ 2020-07-15 20:28   ` Mark Brown
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2020-07-15 20:28 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin
  Cc: Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel, Mark Brown

The callback passed to arch_stack_walk() currently takes an argument
called reliable which indicates whether the stack entry is reliable; a
comment says that this is used by some printk() consumers. However, in
the current kernel none of the arch_stack_walk() implementations ever
sets this flag to true, and the only callback implementation we have,
in the generic stacktrace code, ignores the flag. The flag therefore
appears to be redundant, so we can simplify and clarify things by
removing it.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/s390/kernel/stacktrace.c |  4 ++--
 arch/x86/kernel/stacktrace.c  | 10 +++++-----
 include/linux/stacktrace.h    |  5 +----
 kernel/stacktrace.c           |  8 +++-----
 4 files changed, 11 insertions(+), 16 deletions(-)

diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
index fc5419ac64c8..7f1266c24f6b 100644
--- a/arch/s390/kernel/stacktrace.c
+++ b/arch/s390/kernel/stacktrace.c
@@ -19,7 +19,7 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
 
 	unwind_for_each_frame(&state, task, regs, 0) {
 		addr = unwind_get_return_address(&state);
-		if (!addr || !consume_entry(cookie, addr, false))
+		if (!addr || !consume_entry(cookie, addr))
 			break;
 	}
 }
@@ -56,7 +56,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
 			return -EINVAL;
 #endif
 
-		if (!consume_entry(cookie, addr, false))
+		if (!consume_entry(cookie, addr))
 			return -EINVAL;
 	}
 
diff --git a/arch/x86/kernel/stacktrace.c b/arch/x86/kernel/stacktrace.c
index 6ad43fc44556..5c30999fc705 100644
--- a/arch/x86/kernel/stacktrace.c
+++ b/arch/x86/kernel/stacktrace.c
@@ -18,13 +18,13 @@ void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
 	struct unwind_state state;
 	unsigned long addr;
 
-	if (regs && !consume_entry(cookie, regs->ip, false))
+	if (regs && !consume_entry(cookie, regs->ip))
 		return;
 
 	for (unwind_start(&state, task, regs, NULL); !unwind_done(&state);
 	     unwind_next_frame(&state)) {
 		addr = unwind_get_return_address(&state);
-		if (!addr || !consume_entry(cookie, addr, false))
+		if (!addr || !consume_entry(cookie, addr))
 			break;
 	}
 }
@@ -73,7 +73,7 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
 		if (!addr)
 			return -EINVAL;
 
-		if (!consume_entry(cookie, addr, false))
+		if (!consume_entry(cookie, addr))
 			return -EINVAL;
 	}
 
@@ -119,7 +119,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
 {
 	const void __user *fp = (const void __user *)regs->bp;
 
-	if (!consume_entry(cookie, regs->ip, false))
+	if (!consume_entry(cookie, regs->ip))
 		return;
 
 	while (1) {
@@ -133,7 +133,7 @@ void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
 			break;
 		if (!frame.ret_addr)
 			break;
-		if (!consume_entry(cookie, frame.ret_addr, false))
+		if (!consume_entry(cookie, frame.ret_addr))
 			break;
 		fp = frame.next_fp;
 	}
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index b7af8cc13eda..50e2df30b0aa 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -29,14 +29,11 @@ unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
  * stack_trace_consume_fn - Callback for arch_stack_walk()
  * @cookie:	Caller supplied pointer handed back by arch_stack_walk()
  * @addr:	The stack entry address to consume
- * @reliable:	True when the stack entry is reliable. Required by
- *		some printk based consumers.
  *
  * Return:	True, if the entry was consumed or skipped
  *		False, if there is no space left to store
  */
-typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr,
-				       bool reliable);
+typedef bool (*stack_trace_consume_fn)(void *cookie, unsigned long addr);
 /**
  * arch_stack_walk - Architecture specific function to walk the stack
  * @consume_entry:	Callback which is invoked by the architecture code for
diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
index 946f44a9e86a..9f8117c7cfdd 100644
--- a/kernel/stacktrace.c
+++ b/kernel/stacktrace.c
@@ -78,8 +78,7 @@ struct stacktrace_cookie {
 	unsigned int	len;
 };
 
-static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
-				      bool reliable)
+static bool stack_trace_consume_entry(void *cookie, unsigned long addr)
 {
 	struct stacktrace_cookie *c = cookie;
 
@@ -94,12 +93,11 @@ static bool stack_trace_consume_entry(void *cookie, unsigned long addr,
 	return c->len < c->size;
 }
 
-static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr,
-					      bool reliable)
+static bool stack_trace_consume_entry_nosched(void *cookie, unsigned long addr)
 {
 	if (in_sched_functions(addr))
 		return true;
-	return stack_trace_consume_entry(cookie, addr, reliable);
+	return stack_trace_consume_entry(cookie, addr);
 }
 
 /**
-- 
2.20.1


* [PATCH 2/3] arm64: stacktrace: Make stack walk callback consistent with generic code
  2020-07-15 20:28 ` Mark Brown
@ 2020-07-15 20:28   ` Mark Brown
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2020-07-15 20:28 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin
  Cc: Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel, Mark Brown

As with the generic arch_stack_walk() code, the arm64 stack walk code
takes a callback that is called per stack frame. Currently the arm64 code
always passes a struct stackframe to the callback while the generic code
just passes the pc; however, none of the users ever reference anything in
the struct other than the pc value. The arm64 code also uses a return
type of int while the generic code uses bool, though in both cases the
return value is treated as a boolean.

In order to reduce code duplication when arm64 is converted to use
arch_stack_walk(), change the signature of the arm64-specific callback to
match that of the generic code.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/include/asm/stacktrace.h |  2 +-
 arch/arm64/kernel/perf_callchain.c  |  6 +++---
 arch/arm64/kernel/return_address.c  |  8 ++++----
 arch/arm64/kernel/stacktrace.c      | 11 +++++------
 4 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index fc7613023c19..eb29b1fe8255 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -63,7 +63,7 @@ struct stackframe {
 
 extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
 extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			    int (*fn)(struct stackframe *, void *), void *data);
+			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 			   const char *loglvl);
 
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index b0e03e052dd1..bd2a91bd9e9e 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -137,11 +137,11 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
  * whist unwinding the stackframe and is like a subroutine return so we use
  * the PC.
  */
-static int callchain_trace(struct stackframe *frame, void *data)
+static bool callchain_trace(void *data, unsigned long pc)
 {
 	struct perf_callchain_entry_ctx *entry = data;
-	perf_callchain_store(entry, frame->pc);
-	return 0;
+	perf_callchain_store(entry, pc);
+	return false;
 }
 
 void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c
index a5e8b3b9d798..6434427a827a 100644
--- a/arch/arm64/kernel/return_address.c
+++ b/arch/arm64/kernel/return_address.c
@@ -18,16 +18,16 @@ struct return_address_data {
 	void *addr;
 };
 
-static int save_return_addr(struct stackframe *frame, void *d)
+static bool save_return_addr(void *d, unsigned long pc)
 {
 	struct return_address_data *data = d;
 
 	if (!data->level) {
-		data->addr = (void *)frame->pc;
-		return 1;
+		data->addr = (void *)pc;
+		return true;
 	} else {
 		--data->level;
-		return 0;
+		return false;
 	}
 }
 NOKPROBE_SYMBOL(save_return_addr);
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 2dd8e3b8b94b..743cf11fbfca 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -118,12 +118,12 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 NOKPROBE_SYMBOL(unwind_frame);
 
 void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-		     int (*fn)(struct stackframe *, void *), void *data)
+			     bool (*fn)(void *, unsigned long), void *data)
 {
 	while (1) {
 		int ret;
 
-		if (fn(frame, data))
+		if (fn(data, frame->pc))
 			break;
 		ret = unwind_frame(tsk, frame);
 		if (ret < 0)
@@ -139,17 +139,16 @@ struct stack_trace_data {
 	unsigned int skip;
 };
 
-static int save_trace(struct stackframe *frame, void *d)
+static bool save_trace(void *d, unsigned long addr)
 {
 	struct stack_trace_data *data = d;
 	struct stack_trace *trace = data->trace;
-	unsigned long addr = frame->pc;
 
 	if (data->no_sched_functions && in_sched_functions(addr))
-		return 0;
+		return false;
 	if (data->skip) {
 		data->skip--;
-		return 0;
+		return false;
 	}
 
 	trace->entries[trace->nr_entries++] = addr;
-- 
2.20.1


* [PATCH 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK
  2020-07-15 20:28 ` Mark Brown
@ 2020-07-15 20:28   ` Mark Brown
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2020-07-15 20:28 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin
  Cc: Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel, Mark Brown

Historically architectures have had duplicated code in their stack trace
implementations for filtering what gets traced. In order to avoid this
duplication some generic code has been provided using a new interface
arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which
factors all this out into the generic stack trace code. Convert arm64
to use this common infrastructure.

Signed-off-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/Kconfig             |  1 +
 arch/arm64/kernel/stacktrace.c | 79 ++++------------------------------
 2 files changed, 9 insertions(+), 71 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5d4f02b3dfe9..6ed4b6c6df95 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -29,6 +29,7 @@ config ARM64
 	select ARCH_HAS_SETUP_DMA_OPS
 	select ARCH_HAS_SET_DIRECT_MAP
 	select ARCH_HAS_SET_MEMORY
+	select ARCH_STACKWALK
 	select ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_HAS_STRICT_MODULE_RWX
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 743cf11fbfca..a33fba048954 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -133,82 +133,19 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 NOKPROBE_SYMBOL(walk_stackframe);
 
 #ifdef CONFIG_STACKTRACE
-struct stack_trace_data {
-	struct stack_trace *trace;
-	unsigned int no_sched_functions;
-	unsigned int skip;
-};
 
-static bool save_trace(void *d, unsigned long addr)
+void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
+		     struct task_struct *task, struct pt_regs *regs)
 {
-	struct stack_trace_data *data = d;
-	struct stack_trace *trace = data->trace;
-
-	if (data->no_sched_functions && in_sched_functions(addr))
-		return false;
-	if (data->skip) {
-		data->skip--;
-		return false;
-	}
-
-	trace->entries[trace->nr_entries++] = addr;
-
-	return trace->nr_entries >= trace->max_entries;
-}
-
-void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
-{
-	struct stack_trace_data data;
-	struct stackframe frame;
-
-	data.trace = trace;
-	data.skip = trace->skip;
-	data.no_sched_functions = 0;
-
-	start_backtrace(&frame, regs->regs[29], regs->pc);
-	walk_stackframe(current, &frame, save_trace, &data);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_regs);
-
-static noinline void __save_stack_trace(struct task_struct *tsk,
-	struct stack_trace *trace, unsigned int nosched)
-{
-	struct stack_trace_data data;
 	struct stackframe frame;
 
-	if (!try_get_task_stack(tsk))
-		return;
+	if (regs)
+		start_backtrace(&frame, regs->regs[29], regs->pc);
+	else
+		start_backtrace(&frame, thread_saved_fp(task),
+				thread_saved_pc(task));
 
-	data.trace = trace;
-	data.skip = trace->skip;
-	data.no_sched_functions = nosched;
-
-	if (tsk != current) {
-		start_backtrace(&frame, thread_saved_fp(tsk),
-				thread_saved_pc(tsk));
-	} else {
-		/* We don't want this function nor the caller */
-		data.skip += 2;
-		start_backtrace(&frame,
-				(unsigned long)__builtin_frame_address(0),
-				(unsigned long)__save_stack_trace);
-	}
-
-	walk_stackframe(tsk, &frame, save_trace, &data);
-
-	put_task_stack(tsk);
-}
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
-{
-	__save_stack_trace(tsk, trace, 1);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
-
-void save_stack_trace(struct stack_trace *trace)
-{
-	__save_stack_trace(current, trace, 0);
+	walk_stackframe(task, &frame, consume_entry, cookie);
 }
 
-EXPORT_SYMBOL_GPL(save_stack_trace);
 #endif
-- 
2.20.1


* Re: [PATCH 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK
  2020-07-15 20:28   ` Mark Brown
@ 2020-07-16 11:56     ` Miroslav Benes
  0 siblings, 0 replies; 16+ messages in thread
From: Miroslav Benes @ 2020-07-16 11:56 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel

Hi,

On Wed, 15 Jul 2020, Mark Brown wrote:

> Historically architectures have had duplicated code in their stack trace
> implementations for filtering what gets traced. In order to avoid this
> duplication some generic code has been provided using a new interface
> arch_stack_walk(), enabled by selecting ARCH_STACKWALK in Kconfig, which
> factors all this out into the generic stack trace code. Convert arm64
> to use this common infrastructure.
> 
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/Kconfig             |  1 +
>  arch/arm64/kernel/stacktrace.c | 79 ++++------------------------------
>  2 files changed, 9 insertions(+), 71 deletions(-)
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 5d4f02b3dfe9..6ed4b6c6df95 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -29,6 +29,7 @@ config ARM64
>  	select ARCH_HAS_SETUP_DMA_OPS
>  	select ARCH_HAS_SET_DIRECT_MAP
>  	select ARCH_HAS_SET_MEMORY
> +	select ARCH_STACKWALK
>  	select ARCH_HAS_STRICT_KERNEL_RWX
>  	select ARCH_HAS_STRICT_MODULE_RWX
>  	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index 743cf11fbfca..a33fba048954 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -133,82 +133,19 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
>  NOKPROBE_SYMBOL(walk_stackframe);
>  
>  #ifdef CONFIG_STACKTRACE
> -struct stack_trace_data {
> -	struct stack_trace *trace;
> -	unsigned int no_sched_functions;
> -	unsigned int skip;
> -};
>  
> -static bool save_trace(void *d, unsigned long addr)
> +void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
> +		     struct task_struct *task, struct pt_regs *regs)
>  {
> -	struct stack_trace_data *data = d;
> -	struct stack_trace *trace = data->trace;
> -
> -	if (data->no_sched_functions && in_sched_functions(addr))
> -		return false;
> -	if (data->skip) {
> -		data->skip--;
> -		return false;
> -	}
> -
> -	trace->entries[trace->nr_entries++] = addr;
> -
> -	return trace->nr_entries >= trace->max_entries;
> -}
> -
> -void save_stack_trace_regs(struct pt_regs *regs, struct stack_trace *trace)
> -{
> -	struct stack_trace_data data;
> -	struct stackframe frame;
> -
> -	data.trace = trace;
> -	data.skip = trace->skip;
> -	data.no_sched_functions = 0;
> -
> -	start_backtrace(&frame, regs->regs[29], regs->pc);
> -	walk_stackframe(current, &frame, save_trace, &data);
> -}
> -EXPORT_SYMBOL_GPL(save_stack_trace_regs);
> -
> -static noinline void __save_stack_trace(struct task_struct *tsk,
> -	struct stack_trace *trace, unsigned int nosched)
> -{
> -	struct stack_trace_data data;
>  	struct stackframe frame;
>  
> -	if (!try_get_task_stack(tsk))
> -		return;
> +	if (regs)
> +		start_backtrace(&frame, regs->regs[29], regs->pc);
> +	else
> +		start_backtrace(&frame, thread_saved_fp(task),
> +				thread_saved_pc(task));
>  
> -	data.trace = trace;
> -	data.skip = trace->skip;
> -	data.no_sched_functions = nosched;
> -
> -	if (tsk != current) {
> -		start_backtrace(&frame, thread_saved_fp(tsk),
> -				thread_saved_pc(tsk));
> -	} else {
> -		/* We don't want this function nor the caller */
> -		data.skip += 2;
> -		start_backtrace(&frame,
> -				(unsigned long)__builtin_frame_address(0),
> -				(unsigned long)__save_stack_trace);
> -	}
> -
> -	walk_stackframe(tsk, &frame, save_trace, &data);
> -
> -	put_task_stack(tsk);
> -}
> -
> -void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
> -{
> -	__save_stack_trace(tsk, trace, 1);
> -}
> -EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
> -
> -void save_stack_trace(struct stack_trace *trace)
> -{
> -	__save_stack_trace(current, trace, 0);
> +	walk_stackframe(task, &frame, consume_entry, cookie);
>  }

Just an idea for further improvement (and it might be a matter of taste).
Wouldn't it be slightly better to go one step further and define a "struct
unwind_state" instead of "struct stackframe", along with an iterator for
the unwinding, and use that directly in the new arch_stack_walk() instead
of walk_stackframe()? I mean, take the unbounded loop, "inline" it into
arch_stack_walk() and replace the loop with the iterator. The body of the
iterator would call unwind_frame() and consume_entry() and that's it.
It would make the arm64 implementation very similar to x86 and s390 and
thus easier to follow when one switches between architectures all the time.
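
Roughly, a sketch of what I mean, borrowing the s390 iterator names for
illustration (arm64 has no unwind_for_each_frame() or
unwind_get_return_address() yet, so those are assumptions here):

void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
		     struct task_struct *task, struct pt_regs *regs)
{
	struct unwind_state state;
	unsigned long addr;

	/* The iterator hides the start/unwind_frame() bookkeeping. */
	unwind_for_each_frame(&state, task, regs, 0) {
		addr = unwind_get_return_address(&state);
		if (!addr || !consume_entry(cookie, addr))
			break;
	}
}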

Tangential to this patch, but another idea for improvement is in 
unwind_frame(). If I am not missing something, everything in 
CONFIG_FUNCTION_GRAPH_TRACER could be replaced by a simple call to 
ftrace_graph_ret_addr(). Again see for example unwind_next_frame() in
arch/s390/kernel/unwind_bc.c (x86 has it too).
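
Something like this fragment in unwind_frame() is what I have in mind
(just a sketch; I am assuming the existing "graph" index member of
struct stackframe can be passed straight through as the index):

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
	/*
	 * Sketch: let the generic helper map return_to_handler back to
	 * the real return address instead of open-coding the lookup.
	 */
	frame->pc = ftrace_graph_ret_addr(tsk, &frame->graph, frame->pc,
					  (unsigned long *)frame->fp);
#endif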

Regards
Miroslav

* Re: [PATCH 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK
  2020-07-16 11:56     ` Miroslav Benes
@ 2020-07-16 16:20       ` Mark Brown
  0 siblings, 0 replies; 16+ messages in thread
From: Mark Brown @ 2020-07-16 16:20 UTC (permalink / raw)
  To: Miroslav Benes
  Cc: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel

On Thu, Jul 16, 2020 at 01:56:13PM +0200, Miroslav Benes wrote:
> On Wed, 15 Jul 2020, Mark Brown wrote:

> > -void save_stack_trace(struct stack_trace *trace)
> > -{
> > -	__save_stack_trace(current, trace, 0);
> > +	walk_stackframe(task, &frame, consume_entry, cookie);
> >  }

> Just an idea for further improvement (and it might be a matter of taste).

Yeah, there's some more stuff that can be done - the reason I'm looking
at this code is to do reliable stack trace, which is going to require at
least some changes to the actual unwinder. This just seemed like a
useful block of work moving things forward in itself, and I particularly
wanted feedback on patch 1.

> Wouldn't it be slightly better to go one step further and define a "struct
> unwind_state" instead of "struct stackframe", along with an iterator for
> the unwinding, and use that directly in the new arch_stack_walk() instead
> of walk_stackframe()? I mean, take the unbounded loop, "inline" it into
> arch_stack_walk() and replace the loop with the iterator. The body of the
> iterator would call unwind_frame() and consume_entry() and that's it.
> It would make the arm64 implementation very similar to x86 and s390 and
> thus easier to follow when one switches between architectures all the time.

That's definitely on the radar; the unwinding code needs other changes
for reliable stack trace (if nothing else we need to distinguish between
"errors" due to reaching the bottom of the stack and errors due to
bogosity), which so far looked sensible to bundle up together.
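
To give an idea of the shape this is heading towards, a rough sketch of
an arm64 arch_stack_walk_reliable() built on the current helpers. The
-ENOENT "clean end of stack" return from unwind_frame() is an assumption;
making the unwinder report that distinction is part of the work:

/* Sketch only: the task == current and pt_regs cases are omitted. */
int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry,
			     void *cookie, struct task_struct *task)
{
	struct stackframe frame;
	int ret;

	start_backtrace(&frame, thread_saved_fp(task), thread_saved_pc(task));

	for (;;) {
		if (!consume_entry(cookie, frame.pc))
			return -EINVAL;

		ret = unwind_frame(task, &frame);
		if (ret == -ENOENT)	/* assumed: clean end of stack */
			return 0;
		if (ret < 0)		/* any other error: not reliable */
			return -EINVAL;
	}
}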

> Tangential to this patch, but another idea for improvement is in 
> unwind_frame(). If I am not missing something, everything in 
> CONFIG_FUNCTION_GRAPH_TRACER could be replaced by a simple call to 
> ftrace_graph_ret_addr(). Again see for example unwind_next_frame() in
> arch/s390/kernel/unwind_bc.c (x86 has it too).

Yes, I'd noticed some divergence there and was going to look into it.

* Re: [PATCH 1/3] stacktrace: Remove reliable argument from arch_stack_walk() callback
  2020-07-15 20:28   ` Mark Brown
@ 2020-07-17  9:04     ` Miroslav Benes
  0 siblings, 0 replies; 16+ messages in thread
From: Miroslav Benes @ 2020-07-17  9:04 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel

On Wed, 15 Jul 2020, Mark Brown wrote:

> The callback passed to arch_stack_walk() currently takes an argument
> called reliable which indicates whether the stack entry is reliable; a
> comment says that this is used by some printk() consumers. However, in
> the current kernel none of the arch_stack_walk() implementations ever
> sets this flag to true, and the only callback implementation we have,
> in the generic stacktrace code, ignores the flag. The flag therefore
> appears to be redundant, so we can simplify and clarify things by
> removing it.

Correct. I dug around and it seems that was already the case when the
flag was introduced, so I wonder about the comment. I don't remember the
details; maybe Thomas or someone else does. But the patch looks correct.

Miroslav

* Re: [PATCH 3/3] arm64: stacktrace: Convert to ARCH_STACKWALK
  2020-07-16 16:20       ` Mark Brown
@ 2020-07-17  9:06         ` Miroslav Benes
  -1 siblings, 0 replies; 16+ messages in thread
From: Miroslav Benes @ 2020-07-17  9:06 UTC (permalink / raw)
  To: Mark Brown
  Cc: Catalin Marinas, Will Deacon, Heiko Carstens, Vasily Gorbik,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Christian Borntraeger, Ingo Molnar, Jiri Slaby, x86,
	linux-arm-kernel, linux-s390, linux-kernel

On Thu, 16 Jul 2020, Mark Brown wrote:

> On Thu, Jul 16, 2020 at 01:56:13PM +0200, Miroslav Benes wrote:
> > On Wed, 15 Jul 2020, Mark Brown wrote:
> 
> > > -void save_stack_trace(struct stack_trace *trace)
> > > -{
> > > -	__save_stack_trace(current, trace, 0);
> > > +	walk_stackframe(task, &frame, consume_entry, cookie);
> > >  }
> 
> > Just an idea for further improvement (and it might be a matter of taste).
> 
> Yeah, there's some more stuff that can be done - the reason I'm looking
> at this code is to do reliable stack trace, which is going to require at
> least some changes to the actual unwinder. This just seemed like a
> useful block of work moving things forward in itself, and I particularly
> wanted feedback on patch 1.

Understood. Reliable stack traces would be an important step for live 
patching on arm64, so I am looking forward to seeing that.

Thanks
Miroslav
