* [PATCH v2 0/9] arm64: stacktrace: unify unwind code
@ 2021-11-29 14:28 Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE Mark Rutland
                   ` (9 more replies)
  0 siblings, 10 replies; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

For historical reasons arm64 has a number of open-coded unwind functions. We'd
like to unify these to reduce the amount of unwind code we have to expose, and
to make it easier for subsequent patches to rework the unwind code for
RELIABLE_STACKTRACE.

These patches unify the various unwinders using arch_stack_walk(). So that we
can use arch_stack_walk() without having to expose /proc/${PID}/stack, I've
picked Peter's patch decoupling ARCH_STACKWALK from STACKTRACE, which was
previously posted at:

  https://lore.kernel.org/lkml/20211022152104.356586621@infradead.org/

As the direction of travel seems to be to not unify the get_wchan()
implementations, Peter suggested I pick the patch for now. This is the only
patch in the series touching other architectures.

The bulk of the series was previously posted in Madhavan's series adding
reliability checks to the unwinder:

  https://lore.kernel.org/linux-arm-kernel/20211015025847.17694-1-madvenka@linux.microsoft.com/

I have made some minor tweaks and updated each commit message to explain why
the transformation is safe. Largely the changes should have no functional
effect, but in a couple of cases there is a (benign and/or desirable)
functional change, which is described in the relevant commit message.

To make it possible for get_wchan() to use arch_stack_walk(), and to correct
the expected behaviour of stack_trace_consume_entry_nosched() and
stack_trace_save_tsk(), we need to mark arm64's __switch_to() as __sched, for
which I've added a preparatory patch.

Since v1 [1]:
* Add necessary includes of <linux/stacktrace.h>
* Remove unnecessary includes of <asm/stacktrace.h>
* Rebase to v5.16-rc3

[1] https://lore.kernel.org/r/20211117140737.44420-1-mark.rutland@arm.com/

Thanks,
Mark.

Madhavan T. Venkataraman (5):
  arm64: Make perf_callchain_kernel() use arch_stack_walk()
  arm64: Make __get_wchan() use arch_stack_walk()
  arm64: Make return_address() use arch_stack_walk()
  arm64: Make profile_pc() use arch_stack_walk()
  arm64: Make dump_backtrace() use arch_stack_walk()

Mark Rutland (3):
  arm64: Add comment for stack_info::kr_cur
  arm64: Mark __switch_to() as __sched
  arm64: Make some stacktrace functions private

Peter Zijlstra (1):
  arch: Make ARCH_STACKWALK independent of STACKTRACE

 arch/arm64/include/asm/stacktrace.h | 10 ++---
 arch/arm64/kernel/perf_callchain.c  | 15 ++------
 arch/arm64/kernel/process.c         | 45 +++++++++++++---------
 arch/arm64/kernel/return_address.c  |  8 +---
 arch/arm64/kernel/stacktrace.c      | 60 +++++++----------------------
 arch/arm64/kernel/time.c            | 25 ++++++------
 arch/powerpc/kernel/Makefile        |  3 +-
 arch/riscv/kernel/stacktrace.c      |  4 --
 arch/s390/kernel/Makefile           |  3 +-
 arch/x86/kernel/Makefile            |  2 +-
 include/linux/stacktrace.h          | 35 +++++++++--------
 11 files changed, 83 insertions(+), 127 deletions(-)

-- 
2.30.2


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v2 1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-12-10 14:04   ` Catalin Marinas
  2021-11-29 14:28 ` [PATCH v2 2/9] arm64: Add comment for stack_info::kr_cur Mark Rutland
                   ` (8 subsequent siblings)
  9 siblings, 1 reply; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

From: Peter Zijlstra <peterz@infradead.org>

Make arch_stack_walk() available for ARCH_STACKWALK architectures
without it being entangled in STACKTRACE.

Link: https://lore.kernel.org/lkml/20211022152104.356586621@infradead.org/
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[Mark: rebase, drop unnecessary arm change]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
---
 arch/arm64/kernel/stacktrace.c |  4 ----
 arch/powerpc/kernel/Makefile   |  3 +--
 arch/riscv/kernel/stacktrace.c |  4 ----
 arch/s390/kernel/Makefile      |  3 +--
 arch/x86/kernel/Makefile       |  2 +-
 include/linux/stacktrace.h     | 35 +++++++++++++++++-----------------
 6 files changed, 21 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 94f83cd44e50..e6ba6b000564 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -221,8 +221,6 @@ void show_stack(struct task_struct *tsk, unsigned long *sp, const char *loglvl)
 	barrier();
 }
 
-#ifdef CONFIG_STACKTRACE
-
 noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 			      void *cookie, struct task_struct *task,
 			      struct pt_regs *regs)
@@ -241,5 +239,3 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	walk_stackframe(task, &frame, consume_entry, cookie);
 }
-
-#endif
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 5fa68c2ef1f8..b039877c743d 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -47,7 +47,7 @@ obj-y				:= cputable.o syscalls.o \
 				   udbg.o misc.o io.o misc_$(BITS).o \
 				   of_platform.o prom_parse.o firmware.o \
 				   hw_breakpoint_constraints.o interrupt.o \
-				   kdebugfs.o
+				   kdebugfs.o stacktrace.o
 obj-y				+= ptrace/
 obj-$(CONFIG_PPC64)		+= setup_64.o \
 				   paca.o nvram_64.o note.o
@@ -116,7 +116,6 @@ obj-$(CONFIG_OPTPROBES)		+= optprobes.o optprobes_head.o
 obj-$(CONFIG_KPROBES_ON_FTRACE)	+= kprobes-ftrace.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o
 obj-$(CONFIG_PPC_UDBG_16550)	+= legacy_serial.o udbg_16550.o
-obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
 obj-$(CONFIG_SWIOTLB)		+= dma-swiotlb.o
 obj-$(CONFIG_ARCH_HAS_DMA_SET_MASK) += dma-mask.o
 
diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
index 0fcdc0233fac..201ee206fb57 100644
--- a/arch/riscv/kernel/stacktrace.c
+++ b/arch/riscv/kernel/stacktrace.c
@@ -139,12 +139,8 @@ unsigned long __get_wchan(struct task_struct *task)
 	return pc;
 }
 
-#ifdef CONFIG_STACKTRACE
-
 noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
 		     struct task_struct *task, struct pt_regs *regs)
 {
 	walk_stackframe(task, regs, consume_entry, cookie);
 }
-
-#endif /* CONFIG_STACKTRACE */
diff --git a/arch/s390/kernel/Makefile b/arch/s390/kernel/Makefile
index 80f500ffb55c..be8007f367aa 100644
--- a/arch/s390/kernel/Makefile
+++ b/arch/s390/kernel/Makefile
@@ -40,7 +40,7 @@ obj-y	+= sysinfo.o lgr.o os_info.o machine_kexec.o
 obj-y	+= runtime_instr.o cache.o fpu.o dumpstack.o guarded_storage.o sthyi.o
 obj-y	+= entry.o reipl.o relocate_kernel.o kdebugfs.o alternative.o
 obj-y	+= nospec-branch.o ipl_vmparm.o machine_kexec_reloc.o unwind_bc.o
-obj-y	+= smp.o text_amode31.o
+obj-y	+= smp.o text_amode31.o stacktrace.o
 
 extra-y				+= head64.o vmlinux.lds
 
@@ -55,7 +55,6 @@ compat-obj-$(CONFIG_AUDIT)	+= compat_audit.o
 obj-$(CONFIG_COMPAT)		+= compat_linux.o compat_signal.o
 obj-$(CONFIG_COMPAT)		+= $(compat-obj-y)
 obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
-obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
 obj-$(CONFIG_KPROBES)		+= kprobes.o
 obj-$(CONFIG_KPROBES)		+= kprobes_insn_page.o
 obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 2ff3e600f426..6aef9ee28a39 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -84,7 +84,7 @@ obj-$(CONFIG_IA32_EMULATION)	+= tls.o
 obj-y				+= step.o
 obj-$(CONFIG_INTEL_TXT)		+= tboot.o
 obj-$(CONFIG_ISA_DMA_API)	+= i8237.o
-obj-$(CONFIG_STACKTRACE)	+= stacktrace.o
+obj-y				+= stacktrace.o
 obj-y				+= cpu/
 obj-y				+= acpi/
 obj-y				+= reboot.o
diff --git a/include/linux/stacktrace.h b/include/linux/stacktrace.h
index bef158815e83..97455880ac41 100644
--- a/include/linux/stacktrace.h
+++ b/include/linux/stacktrace.h
@@ -8,22 +8,6 @@
 struct task_struct;
 struct pt_regs;
 
-#ifdef CONFIG_STACKTRACE
-void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
-		       int spaces);
-int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
-			unsigned int nr_entries, int spaces);
-unsigned int stack_trace_save(unsigned long *store, unsigned int size,
-			      unsigned int skipnr);
-unsigned int stack_trace_save_tsk(struct task_struct *task,
-				  unsigned long *store, unsigned int size,
-				  unsigned int skipnr);
-unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
-				   unsigned int size, unsigned int skipnr);
-unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
-unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
-
-/* Internal interfaces. Do not use in generic code */
 #ifdef CONFIG_ARCH_STACKWALK
 
 /**
@@ -76,8 +60,25 @@ int arch_stack_walk_reliable(stack_trace_consume_fn consume_entry, void *cookie,
 
 void arch_stack_walk_user(stack_trace_consume_fn consume_entry, void *cookie,
 			  const struct pt_regs *regs);
+#endif /* CONFIG_ARCH_STACKWALK */
 
-#else /* CONFIG_ARCH_STACKWALK */
+#ifdef CONFIG_STACKTRACE
+void stack_trace_print(const unsigned long *trace, unsigned int nr_entries,
+		       int spaces);
+int stack_trace_snprint(char *buf, size_t size, const unsigned long *entries,
+			unsigned int nr_entries, int spaces);
+unsigned int stack_trace_save(unsigned long *store, unsigned int size,
+			      unsigned int skipnr);
+unsigned int stack_trace_save_tsk(struct task_struct *task,
+				  unsigned long *store, unsigned int size,
+				  unsigned int skipnr);
+unsigned int stack_trace_save_regs(struct pt_regs *regs, unsigned long *store,
+				   unsigned int size, unsigned int skipnr);
+unsigned int stack_trace_save_user(unsigned long *store, unsigned int size);
+unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
+
+#ifndef CONFIG_ARCH_STACKWALK
+/* Internal interfaces. Do not use in generic code */
 struct stack_trace {
 	unsigned int nr_entries, max_entries;
 	unsigned long *entries;
-- 
2.30.2



* [PATCH v2 2/9] arm64: Add comment for stack_info::kr_cur
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 3/9] arm64: Mark __switch_to() as __sched Mark Rutland
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

We added stack_info::kr_cur in commit:

  cd9bc2c9258816dc ("arm64: Recover kretprobe modified return address in stacktrace")

... but didn't add anything in the corresponding comment block.

For consistency, add a corresponding comment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/stacktrace.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 6564a01cc085..1367012e0520 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -47,6 +47,10 @@ struct stack_info {
  * @prev_type:   The type of stack this frame record was on, or a synthetic
  *               value of STACK_TYPE_UNKNOWN. This is used to detect a
  *               transition from one stack to another.
+ *
+ * @kr_cur:      When KRETPROBES is selected, holds the kretprobe instance
+ *               associated with the most recently encountered replacement lr
+ *               value.
  */
 struct stackframe {
 	unsigned long fp;
-- 
2.30.2



* [PATCH v2 3/9] arm64: Mark __switch_to() as __sched
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 2/9] arm64: Add comment for stack_info::kr_cur Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-11-29 17:03   ` Mark Brown
  2021-11-29 14:28 ` [PATCH v2 4/9] arm64: Make perf_callchain_kernel() use arch_stack_walk() Mark Rutland
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

Unlike most architectures (and only in keeping with powerpc), arm64 has
a non __sched() function on the path to our cpu_switch_to() assembly
function.

It is expected that for a blocked task, in_sched_functions() can be used
to skip all functions between the raw context switch assembly and the
scheduler functions that call into __switch_to(). This is the behaviour
expected by stack_trace_consume_entry_nosched(), and the behaviour we'd
like to have such that we can simplify arm64's __get_wchan()
implementation to use arch_stack_walk().

This patch marks arm64's __switch_to() as __sched. This *will not* change
the behaviour of arm64's current __get_wchan() implementation, which
always performs an initial unwind step which skips __switch_to(). This
*will* change the behaviour of stack_trace_consume_entry_nosched() and
stack_trace_save_tsk() to match their expected behaviour on blocked
tasks, skipping all scheduler-internal functions including
__switch_to().

Other than the above, there should be no functional change as a result
of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/kernel/process.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index aacf2f5559a8..980cad7292af 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -490,7 +490,8 @@ void update_sctlr_el1(u64 sctlr)
 /*
  * Thread switching.
  */
-__notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
+__notrace_funcgraph __sched
+struct task_struct *__switch_to(struct task_struct *prev,
 				struct task_struct *next)
 {
 	struct task_struct *last;
-- 
2.30.2



* [PATCH v2 4/9] arm64: Make perf_callchain_kernel() use arch_stack_walk()
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
                   ` (2 preceding siblings ...)
  2021-11-29 14:28 ` [PATCH v2 3/9] arm64: Mark __switch_to() as __sched Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 5/9] arm64: Make __get_wchan() " Mark Rutland
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to
substantially rework arm64's unwinding code. As part of this, we want to
minimize the set of unwind interfaces we expose, and avoid open-coding
of unwind logic outside of stacktrace.c.

Currently perf_callchain_kernel() walks the stack of an interrupted
context by calling start_backtrace() with the context's PC and FP, and
iterating unwind steps using walk_stackframe(). This is functionally
equivalent to calling arch_stack_walk() with the interrupted context's
pt_regs, which will start with the PC and FP from the regs.

Make perf_callchain_kernel() use arch_stack_walk(). This simplifies
perf_callchain_kernel(), and in future will allow us to make
walk_stackframe() private to stacktrace.c.

At the same time, we update the callchain_trace() callback to check the
return value of perf_callchain_store(), which indicates whether there is
space for any further entries. When a non-zero value is returned,
further calls will be ignored, and are redundant, so we can stop the
unwind at this point.

We also remove the stale and confusing comment for callchain_trace.

There should be no functional change as a result of this patch.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
[Mark: elaborate commit message, remove comment, fix includes]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/kernel/perf_callchain.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index 4a72c2727309..e9b7d99f4e3a 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -5,10 +5,10 @@
  * Copyright (C) 2015 ARM Limited
  */
 #include <linux/perf_event.h>
+#include <linux/stacktrace.h>
 #include <linux/uaccess.h>
 
 #include <asm/pointer_auth.h>
-#include <asm/stacktrace.h>
 
 struct frame_tail {
 	struct frame_tail	__user *fp;
@@ -132,30 +132,21 @@ void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
 	}
 }
 
-/*
- * Gets called by walk_stackframe() for every stackframe. This will be called
- * whist unwinding the stackframe and is like a subroutine return so we use
- * the PC.
- */
 static bool callchain_trace(void *data, unsigned long pc)
 {
 	struct perf_callchain_entry_ctx *entry = data;
-	perf_callchain_store(entry, pc);
-	return true;
+	return perf_callchain_store(entry, pc) == 0;
 }
 
 void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
 			   struct pt_regs *regs)
 {
-	struct stackframe frame;
-
 	if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
 		/* We don't support guest os callchain now */
 		return;
 	}
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
-	walk_stackframe(current, &frame, callchain_trace, entry);
+	arch_stack_walk(callchain_trace, entry, current, regs);
 }
 
 unsigned long perf_instruction_pointer(struct pt_regs *regs)
-- 
2.30.2



* [PATCH v2 5/9] arm64: Make __get_wchan() use arch_stack_walk()
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
                   ` (3 preceding siblings ...)
  2021-11-29 14:28 ` [PATCH v2 4/9] arm64: Make perf_callchain_kernel() use arch_stack_walk() Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-11-29 17:08   ` Mark Brown
  2021-11-29 14:28 ` [PATCH v2 6/9] arm64: Make return_address() " Mark Rutland
                   ` (4 subsequent siblings)
  9 siblings, 1 reply; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to
substantially rework arm64's unwinding code. As part of this, we want to
minimize the set of unwind interfaces we expose, and avoid open-coding
of unwind logic outside of stacktrace.c.

Currently, __get_wchan() walks the stack of a blocked task by calling
start_backtrace() with the task's saved PC and FP values, and iterating
unwind steps using unwind_frame(). The initialization is functionally
equivalent to calling arch_stack_walk() with the blocked task, which
will start with the task's saved PC and FP values.

Currently __get_wchan() always performs an initial unwind step, which
will skip __switch_to(), but as this is now marked as a __sched
function, this no longer needs special handling and will be skipped in
the same way as other sched functions.

Make __get_wchan() use arch_stack_walk(). This simplifies __get_wchan(),
and in future will allow us to make unwind_frame() private to
stacktrace.c. At the same time, we can simplify the try_get_task_stack()
check and avoid the unnecessary `stack_page` variable.

The change to the skipping logic means we may terminate one frame
earlier than previously when there is an excessive number of sched
functions in the trace, but this isn't seen in practice, and wchan is
best-effort anyway, so this should not be a problem.

Other than the above, there should be no functional change as a result
of this patch.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
[Mark: rebase atop wchan changes, elaborate commit message, fix includes]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/kernel/process.c | 42 ++++++++++++++++++++++---------------
 1 file changed, 25 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 980cad7292af..836a933156cd 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -40,6 +40,7 @@
 #include <linux/percpu.h>
 #include <linux/thread_info.h>
 #include <linux/prctl.h>
+#include <linux/stacktrace.h>
 
 #include <asm/alternative.h>
 #include <asm/compat.h>
@@ -529,30 +530,37 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	return last;
 }
 
+struct wchan_info {
+	unsigned long	pc;
+	int		count;
+};
+
+static bool get_wchan_cb(void *arg, unsigned long pc)
+{
+	struct wchan_info *wchan_info = arg;
+
+	if (!in_sched_functions(pc)) {
+		wchan_info->pc = pc;
+		return false;
+	}
+	return wchan_info->count++ < 16;
+}
+
 unsigned long __get_wchan(struct task_struct *p)
 {
-	struct stackframe frame;
-	unsigned long stack_page, ret = 0;
-	int count = 0;
+	struct wchan_info wchan_info = {
+		.pc = 0,
+		.count = 0,
+	};
 
-	stack_page = (unsigned long)try_get_task_stack(p);
-	if (!stack_page)
+	if (!try_get_task_stack(p))
 		return 0;
 
-	start_backtrace(&frame, thread_saved_fp(p), thread_saved_pc(p));
-
-	do {
-		if (unwind_frame(p, &frame))
-			goto out;
-		if (!in_sched_functions(frame.pc)) {
-			ret = frame.pc;
-			goto out;
-		}
-	} while (count++ < 16);
+	arch_stack_walk(get_wchan_cb, &wchan_info, p, NULL);
 
-out:
 	put_task_stack(p);
-	return ret;
+
+	return wchan_info.pc;
 }
 
 unsigned long arch_align_stack(unsigned long sp)
-- 
2.30.2



* [PATCH v2 6/9] arm64: Make return_address() use arch_stack_walk()
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
                   ` (4 preceding siblings ...)
  2021-11-29 14:28 ` [PATCH v2 5/9] arm64: Make __get_wchan() " Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 7/9] arm64: Make profile_pc() " Mark Rutland
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to
substantially rework arm64's unwinding code. As part of this, we want to
minimize the set of unwind interfaces we expose, and avoid open-coding
of unwind logic outside of stacktrace.c.

Currently return_address() walks the stack of the current task by
calling start_backtrace() with return_address as the PC and the frame
pointer of return_address() as the next frame, iterating unwind steps
using walk_stackframe(). This is functionally equivalent to calling
arch_stack_walk() for the current stack, which will start from its
caller (i.e. return_address()) as the PC and its caller's frame record
as the next frame.

Make return_address() use arch_stack_walk(). This simplifies
return_address(), and in future will allow us to make walk_stackframe()
private to stacktrace.c.

There should be no functional change as a result of this patch.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
[Mark: elaborate commit message, fix includes]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/kernel/return_address.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c
index a6d18755652f..68330017d04f 100644
--- a/arch/arm64/kernel/return_address.c
+++ b/arch/arm64/kernel/return_address.c
@@ -9,9 +9,9 @@
 #include <linux/export.h>
 #include <linux/ftrace.h>
 #include <linux/kprobes.h>
+#include <linux/stacktrace.h>
 
 #include <asm/stack_pointer.h>
-#include <asm/stacktrace.h>
 
 struct return_address_data {
 	unsigned int level;
@@ -35,15 +35,11 @@ NOKPROBE_SYMBOL(save_return_addr);
 void *return_address(unsigned int level)
 {
 	struct return_address_data data;
-	struct stackframe frame;
 
 	data.level = level + 2;
 	data.addr = NULL;
 
-	start_backtrace(&frame,
-			(unsigned long)__builtin_frame_address(0),
-			(unsigned long)return_address);
-	walk_stackframe(current, &frame, save_return_addr, &data);
+	arch_stack_walk(save_return_addr, &data, current, NULL);
 
 	if (!data.level)
 		return data.addr;
-- 
2.30.2



* [PATCH v2 7/9] arm64: Make profile_pc() use arch_stack_walk()
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
                   ` (5 preceding siblings ...)
  2021-11-29 14:28 ` [PATCH v2 6/9] arm64: Make return_address() " Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 8/9] arm64: Make dump_backtrace() " Mark Rutland
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to
substantially rework arm64's unwinding code. As part of this, we want to
minimize the set of unwind interfaces we expose, and avoid open-coding
of unwind logic outside of stacktrace.c.

Currently profile_pc() walks the stack of an interrupted context by
calling start_backtrace() with the context's PC and FP, and iterating
unwind steps using walk_stackframe(). This is functionally equivalent to
calling arch_stack_walk() with the interrupted context's pt_regs, which
will start with the PC and FP from the regs.

Make profile_pc() use arch_stack_walk(). This simplifies profile_pc(),
and in future will allow us to make walk_stackframe() private to
stacktrace.c.

At the same time, we remove the early return for when regs->pc is not in
lock functions, as this will be handled by the first call to the
profile_pc_cb() callback.

There should be no functional change as a result of this patch.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
[Mark: remove early return, elaborate commit message, fix includes]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/time.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kernel/time.c b/arch/arm64/kernel/time.c
index eebbc8d7123e..b5855eb7435d 100644
--- a/arch/arm64/kernel/time.c
+++ b/arch/arm64/kernel/time.c
@@ -18,6 +18,7 @@
 #include <linux/timex.h>
 #include <linux/errno.h>
 #include <linux/profile.h>
+#include <linux/stacktrace.h>
 #include <linux/syscore_ops.h>
 #include <linux/timer.h>
 #include <linux/irq.h>
@@ -29,25 +30,25 @@
 #include <clocksource/arm_arch_timer.h>
 
 #include <asm/thread_info.h>
-#include <asm/stacktrace.h>
 #include <asm/paravirt.h>
 
-unsigned long profile_pc(struct pt_regs *regs)
+static bool profile_pc_cb(void *arg, unsigned long pc)
 {
-	struct stackframe frame;
+	unsigned long *prof_pc = arg;
 
-	if (!in_lock_functions(regs->pc))
-		return regs->pc;
+	if (in_lock_functions(pc))
+		return true;
+	*prof_pc = pc;
+	return false;
+}
 
-	start_backtrace(&frame, regs->regs[29], regs->pc);
+unsigned long profile_pc(struct pt_regs *regs)
+{
+	unsigned long prof_pc = 0;
 
-	do {
-		int ret = unwind_frame(NULL, &frame);
-		if (ret < 0)
-			return 0;
-	} while (in_lock_functions(frame.pc));
+	arch_stack_walk(profile_pc_cb, &prof_pc, current, regs);
 
-	return frame.pc;
+	return prof_pc;
 }
 EXPORT_SYMBOL(profile_pc);
 
-- 
2.30.2


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 8/9] arm64: Make dump_backtrace() use arch_stack_walk()
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
                   ` (6 preceding siblings ...)
  2021-11-29 14:28 ` [PATCH v2 7/9] arm64: Make profile_pc() " Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-11-29 14:28 ` [PATCH v2 9/9] arm64: Make some stacktrace functions private Mark Rutland
  2021-12-10 18:41 ` [PATCH v2 0/9] arm64: stacktrace: unify unwind code Catalin Marinas
  9 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>

To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to
substantially rework arm64's unwinding code. As part of this, we want to
minimize the set of unwind interfaces we expose, and avoid open-coding
of unwind logic.

Currently, dump_backtrace() walks the stack of the current task or a
blocked task by calling start_backtrace() and iterating unwind steps
using unwind_frame(). This can be written more simply in terms of
arch_stack_walk(), considering three distinct cases:

1) When unwinding a blocked task, start_backtrace() is called with the
   blocked task's saved PC and FP, and the unwind proceeds immediately
   from this point without skipping any entries. This is functionally
   equivalent to calling arch_stack_walk() with the blocked task, which
   will start with the task's saved PC and FP.

   There is no functional change to this case.

2) When unwinding the current task without regs, start_backtrace() is
   called with dump_backtrace() as the PC and __builtin_frame_address(0)
   as the next frame, and the unwind proceeds immediately without
   skipping. This is *almost* functionally equivalent to calling
   arch_stack_walk() for the current task, which will start with its
   caller (i.e. an offset into dump_backtrace()) as the PC, and the
   caller's frame record as the next frame.

   The only difference is that dump_backtrace() will be reported with
   an offset (which is strictly more correct than currently). Otherwise
   there is no functional change to this case.

3) When unwinding the current task with regs, start_backtrace() is
   called with dump_backtrace() as the PC and __builtin_frame_address(0)
   as the next frame, and the unwind is performed silently until the
   next frame is the frame pointed to by regs->fp. Reporting starts
   from regs->pc and continues from the frame in regs->fp.

   Historically, this pre-unwind was necessary to correctly record
   return addresses rewritten by the ftrace graph caller, but this is
   no longer necessary as these are now recovered using the FP since
   commit:

   c6d3cd32fd0064af ("arm64: ftrace: use HAVE_FUNCTION_GRAPH_RET_ADDR_PTR")

   This pre-unwind is not necessary to recover return addresses
   rewritten by kretprobes, which historically were not recovered, and
   are now recovered using the FP since commit:

   cd9bc2c9258816dc ("arm64: Recover kretprobe modified return address in stacktrace")

   Thus, this is functionally equivalent to calling arch_stack_walk()
   with the current task and regs, which will start with regs->pc as the
   PC and regs->fp as the next frame, without a pre-unwind.

This patch makes dump_backtrace() use arch_stack_walk(). This simplifies
dump_backtrace() and will permit subsequent changes to the unwind code.

Aside from the improved reporting when unwinding current without regs,
there should be no functional change as a result of this patch.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
[Mark: elaborate commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
---
 arch/arm64/kernel/stacktrace.c | 44 +++++-----------------------------
 1 file changed, 6 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index e6ba6b000564..9fc771a05306 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -156,24 +156,20 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
 }
 NOKPROBE_SYMBOL(walk_stackframe);
 
-static void dump_backtrace_entry(unsigned long where, const char *loglvl)
+static bool dump_backtrace_entry(void *arg, unsigned long where)
 {
+	char *loglvl = arg;
 	printk("%s %pSb\n", loglvl, (void *)where);
+	return true;
 }
 
 void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 		    const char *loglvl)
 {
-	struct stackframe frame;
-	int skip = 0;
-
 	pr_debug("%s(regs = %p tsk = %p)\n", __func__, regs, tsk);
 
-	if (regs) {
-		if (user_mode(regs))
-			return;
-		skip = 1;
-	}
+	if (regs && user_mode(regs))
+		return;
 
 	if (!tsk)
 		tsk = current;
@@ -181,36 +177,8 @@ void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 	if (!try_get_task_stack(tsk))
 		return;
 
-	if (tsk == current) {
-		start_backtrace(&frame,
-				(unsigned long)__builtin_frame_address(0),
-				(unsigned long)dump_backtrace);
-	} else {
-		/*
-		 * task blocked in __switch_to
-		 */
-		start_backtrace(&frame,
-				thread_saved_fp(tsk),
-				thread_saved_pc(tsk));
-	}
-
 	printk("%sCall trace:\n", loglvl);
-	do {
-		/* skip until specified stack frame */
-		if (!skip) {
-			dump_backtrace_entry(frame.pc, loglvl);
-		} else if (frame.fp == regs->regs[29]) {
-			skip = 0;
-			/*
-			 * Mostly, this is the case where this function is
-			 * called in panic/abort. As exception handler's
-			 * stack frame does not contain the corresponding pc
-			 * at which an exception has taken place, use regs->pc
-			 * instead.
-			 */
-			dump_backtrace_entry(regs->pc, loglvl);
-		}
-	} while (!unwind_frame(tsk, &frame));
+	arch_stack_walk(dump_backtrace_entry, (void *)loglvl, tsk, regs);
 
 	put_task_stack(tsk);
 }
-- 
2.30.2




* [PATCH v2 9/9] arm64: Make some stacktrace functions private
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
                   ` (7 preceding siblings ...)
  2021-11-29 14:28 ` [PATCH v2 8/9] arm64: Make dump_backtrace() " Mark Rutland
@ 2021-11-29 14:28 ` Mark Rutland
  2021-12-10 18:41 ` [PATCH v2 0/9] arm64: stacktrace: unify unwind code Catalin Marinas
  9 siblings, 0 replies; 14+ messages in thread
From: Mark Rutland @ 2021-11-29 14:28 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: aou, borntraeger, bp, broonie, catalin.marinas, dave.hansen, gor,
	hca, madvenka, mark.rutland, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will

Now that open-coded stack unwinds have been converted to
arch_stack_walk(), we no longer need to expose any of unwind_frame(),
walk_stackframe(), or start_backtrace() outside of stacktrace.c.

Make those functions private to stacktrace.c, removing their prototypes
from <asm/stacktrace.h> and marking them static.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Cc: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
---
 arch/arm64/include/asm/stacktrace.h |  6 ------
 arch/arm64/kernel/stacktrace.c      | 12 +++++++-----
 2 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index 1367012e0520..e77cdef9ca29 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -63,9 +63,6 @@ struct stackframe {
 #endif
 };
 
-extern int unwind_frame(struct task_struct *tsk, struct stackframe *frame);
-extern void walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			    bool (*fn)(void *, unsigned long), void *data);
 extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk,
 			   const char *loglvl);
 
@@ -150,7 +147,4 @@ static inline bool on_accessible_stack(const struct task_struct *tsk,
 	return false;
 }
 
-void start_backtrace(struct stackframe *frame, unsigned long fp,
-		     unsigned long pc);
-
 #endif	/* __ASM_STACKTRACE_H */
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 9fc771a05306..0fb58fed54cb 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -33,8 +33,8 @@
  */
 
 
-void start_backtrace(struct stackframe *frame, unsigned long fp,
-		     unsigned long pc)
+static void start_backtrace(struct stackframe *frame, unsigned long fp,
+			    unsigned long pc)
 {
 	frame->fp = fp;
 	frame->pc = pc;
@@ -63,7 +63,8 @@ void start_backtrace(struct stackframe *frame, unsigned long fp,
  * records (e.g. a cycle), determined based on the location and fp value of A
  * and the location (but not the fp value) of B.
  */
-int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
+static int notrace unwind_frame(struct task_struct *tsk,
+				struct stackframe *frame)
 {
 	unsigned long fp = frame->fp;
 	struct stack_info info;
@@ -141,8 +142,9 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
 }
 NOKPROBE_SYMBOL(unwind_frame);
 
-void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
-			     bool (*fn)(void *, unsigned long), void *data)
+static void notrace walk_stackframe(struct task_struct *tsk,
+				    struct stackframe *frame,
+				    bool (*fn)(void *, unsigned long), void *data)
 {
 	while (1) {
 		int ret;
-- 
2.30.2




* Re: [PATCH v2 3/9] arm64: Mark __switch_to() as __sched
  2021-11-29 14:28 ` [PATCH v2 3/9] arm64: Mark __switch_to() as __sched Mark Rutland
@ 2021-11-29 17:03   ` Mark Brown
  0 siblings, 0 replies; 14+ messages in thread
From: Mark Brown @ 2021-11-29 17:03 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, aou, borntraeger, bp, catalin.marinas,
	dave.hansen, gor, hca, madvenka, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will



On Mon, Nov 29, 2021 at 02:28:43PM +0000, Mark Rutland wrote:
> Unlike most architectures (and only in keeping with powerpc), arm64 has
> a non __sched() function on the path to our cpu_switch_to() assembly
> function.

Reviewed-by: Mark Brown <broonie@kernel.org>




* Re: [PATCH v2 5/9] arm64: Make __get_wchan() use arch_stack_walk()
  2021-11-29 14:28 ` [PATCH v2 5/9] arm64: Make __get_wchan() " Mark Rutland
@ 2021-11-29 17:08   ` Mark Brown
  0 siblings, 0 replies; 14+ messages in thread
From: Mark Brown @ 2021-11-29 17:08 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, aou, borntraeger, bp, catalin.marinas,
	dave.hansen, gor, hca, madvenka, mhiramat, mingo, mpe, palmer,
	paul.walmsley, peterz, rostedt, tglx, will



On Mon, Nov 29, 2021 at 02:28:45PM +0000, Mark Rutland wrote:
> From: "Madhavan T. Venkataraman" <madvenka@linux.microsoft.com>
> 
> To enable RELIABLE_STACKTRACE and LIVEPATCH on arm64, we need to
> substantially rework arm64's unwinding code. As part of this, we want to
> minimize the set of unwind interfaces we expose, and avoid open-coding
> of unwind logic outside of stacktrace.c.

Reviewed-by: Mark Brown <broonie@kernel.org>




* Re: [PATCH v2 1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE
  2021-11-29 14:28 ` [PATCH v2 1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE Mark Rutland
@ 2021-12-10 14:04   ` Catalin Marinas
  0 siblings, 0 replies; 14+ messages in thread
From: Catalin Marinas @ 2021-12-10 14:04 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, aou, borntraeger, bp, broonie, dave.hansen,
	gor, hca, madvenka, mhiramat, mingo, mpe, palmer, paul.walmsley,
	peterz, rostedt, tglx, will

On Mon, Nov 29, 2021 at 02:28:41PM +0000, Mark Rutland wrote:
> From: Peter Zijlstra <peterz@infradead.org>
> 
> Make arch_stack_walk() available for ARCH_STACKWALK architectures
> without it being entangled in STACKTRACE.
> 
> Link: https://lore.kernel.org/lkml/20211022152104.356586621@infradead.org/
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> [Mark: rebase, drop unnecessary arm change]
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Albert Ou <aou@eecs.berkeley.edu>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Heiko Carstens <hca@linux.ibm.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> ---
>  arch/arm64/kernel/stacktrace.c |  4 ----
>  arch/powerpc/kernel/Makefile   |  3 +--
>  arch/riscv/kernel/stacktrace.c |  4 ----
>  arch/s390/kernel/Makefile      |  3 +--
>  arch/x86/kernel/Makefile       |  2 +-
>  include/linux/stacktrace.h     | 35 +++++++++++++++++-----------------
>  6 files changed, 21 insertions(+), 30 deletions(-)

If there are no objections, I plan to take this patch via the arm64
tree together with the rest of the series.

-- 
Catalin



* Re: [PATCH v2 0/9] arm64: stacktrace: unify unwind code
  2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
                   ` (8 preceding siblings ...)
  2021-11-29 14:28 ` [PATCH v2 9/9] arm64: Make some stacktrace functions private Mark Rutland
@ 2021-12-10 18:41 ` Catalin Marinas
  9 siblings, 0 replies; 14+ messages in thread
From: Catalin Marinas @ 2021-12-10 18:41 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel
  Cc: Will Deacon, tglx, aou, hca, peterz, mpe, palmer, gor,
	dave.hansen, madvenka, paul.walmsley, mingo, borntraeger,
	mhiramat, bp, broonie, rostedt

On Mon, 29 Nov 2021 14:28:40 +0000, Mark Rutland wrote:
> For historical reasons arm64 has a number of open-coded unwind functions. We'd
> like to unify these to reduce the amount of unwind code we have to expose, and
> to make it easier for subsequent patches to rework the unwind code for
> RELIABLE_STACKTRACE.
> 
> These patches unify the various unwinders using arch_stack_walk(). So that we
> can use arch_stack_walk() without having to expose /proc/${PID}/stack, I've
> picked Peter's patch decoupling ARCH_STACKWALK from STACKTRACE, which was
> previously posted at:
> 
> [...]

Applied to arm64 (for-next/stacktrace), thanks!

[1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE
      https://git.kernel.org/arm64/c/1614b2b11fab
[2/9] arm64: Add comment for stack_info::kr_cur
      https://git.kernel.org/arm64/c/1e5428b2b7e8
[3/9] arm64: Mark __switch_to() as __sched
      https://git.kernel.org/arm64/c/86bcbafcb726
[4/9] arm64: Make perf_callchain_kernel() use arch_stack_walk()
      https://git.kernel.org/arm64/c/ed876d35a1dc
[5/9] arm64: Make __get_wchan() use arch_stack_walk()
      https://git.kernel.org/arm64/c/4f62bb7cb165
[6/9] arm64: Make return_address() use arch_stack_walk()
      https://git.kernel.org/arm64/c/39ef362d2d45
[7/9] arm64: Make profile_pc() use arch_stack_walk()
      https://git.kernel.org/arm64/c/22ecd975b61d
[8/9] arm64: Make dump_backtrace() use arch_stack_walk()
      https://git.kernel.org/arm64/c/2dad6dc17bd0
[9/9] arm64: Make some stacktrace functions private
      https://git.kernel.org/arm64/c/d2d1d2645cfd

-- 
Catalin




end of thread, other threads:[~2021-12-10 18:42 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-29 14:28 [PATCH v2 0/9] arm64: stacktrace: unify unwind code Mark Rutland
2021-11-29 14:28 ` [PATCH v2 1/9] arch: Make ARCH_STACKWALK independent of STACKTRACE Mark Rutland
2021-12-10 14:04   ` Catalin Marinas
2021-11-29 14:28 ` [PATCH v2 2/9] arm64: Add comment for stack_info::kr_cur Mark Rutland
2021-11-29 14:28 ` [PATCH v2 3/9] arm64: Mark __switch_to() as __sched Mark Rutland
2021-11-29 17:03   ` Mark Brown
2021-11-29 14:28 ` [PATCH v2 4/9] arm64: Make perf_callchain_kernel() use arch_stack_walk() Mark Rutland
2021-11-29 14:28 ` [PATCH v2 5/9] arm64: Make __get_wchan() " Mark Rutland
2021-11-29 17:08   ` Mark Brown
2021-11-29 14:28 ` [PATCH v2 6/9] arm64: Make return_address() " Mark Rutland
2021-11-29 14:28 ` [PATCH v2 7/9] arm64: Make profile_pc() " Mark Rutland
2021-11-29 14:28 ` [PATCH v2 8/9] arm64: Make dump_backtrace() " Mark Rutland
2021-11-29 14:28 ` [PATCH v2 9/9] arm64: Make some stacktrace functions private Mark Rutland
2021-12-10 18:41 ` [PATCH v2 0/9] arm64: stacktrace: unify unwind code Catalin Marinas
