* [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback
@ 2021-09-29 22:02 Kees Cook
  2021-09-29 22:02 ` [PATCH v2 1/6] Revert "proc/wchan: use printk format instead of lookup_symbol_name()" Kees Cook
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Kees Cook @ 2021-09-29 22:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, kernel test robot, Vito Caputo, Jann Horn,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Andrew Morton, Christian Brauner,
	Anand K Mistry, Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, Josh Poimboeuf, linux-kernel,
	linux-fsdevel, linux-hardening, x86

Hi,

This attempts to solve the issues from the discussion here[1]. Specifically:

1) wchan leaking raw addresses since 152c432b128c (v5.12).

patch 1 fixes this with a revert.

2) wchan has been broken under ORC since ee9f8fce9964 (v4.14), seen as
   a failure to walk the stack that _usually_ results in a 0 value.

patches 2-5 fix this with Qi Zheng's new get_wchan() and changes to
the /proc code to use the new helper suggested by Peter, which does the
stack walk only if the process can be kept blocked:
https://lore.kernel.org/lkml/20210929194026.GA4323@worktop.programming.kicks-ass.net/

Peter, can you take this via -tip?

Thanks!

-Kees

[1] https://lore.kernel.org/lkml/20210924054647.v6x6risoa4jhuu6s@shells.gnugeneration.com/

v1: https://lore.kernel.org/lkml/20210924062006.231699-3-keescook@chromium.org/

Kees Cook (5):
  Revert "proc/wchan: use printk format instead of lookup_symbol_name()"
  sched: Add wrapper for get_wchan() to keep task blocked
  proc: Use task_is_running() for wchan in /proc/$pid/stat
  proc: Only report /proc/$pid/wchan when process is blocked
  leaking_addresses: Always print a trailing newline

Qi Zheng (1):
  x86: Fix get_wchan() to support the ORC unwinder

 arch/x86/kernel/process.c    | 51 +++---------------------------------
 fs/proc/array.c              |  7 ++---
 fs/proc/base.c               | 20 ++++++++------
 include/linux/sched.h        |  1 +
 kernel/sched/core.c          | 16 +++++++++++
 scripts/leaking_addresses.pl |  3 ++-
 6 files changed, 36 insertions(+), 62 deletions(-)

-- 
2.30.2


* [PATCH v2 1/6] Revert "proc/wchan: use printk format instead of lookup_symbol_name()"
  2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
@ 2021-09-29 22:02 ` Kees Cook
  2021-09-29 22:02 ` [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked Kees Cook
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Kees Cook @ 2021-09-29 22:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, kernel test robot, Vito Caputo, Jann Horn, stable,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Andrew Morton, Christian Brauner,
	Anand K Mistry, Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, Josh Poimboeuf, linux-kernel,
	linux-fsdevel, linux-hardening, x86

This reverts commit 152c432b128cb043fc107e8f211195fe94b2159c.

When a kernel address couldn't be symbolized for /proc/$pid/wchan, the
%ps printk format would fall back to printing the raw address, a
potential information exposure. This is a regression compared to the
safer pre-v5.12 behavior, which reported '0' instead.

Reported-by: kernel test robot <oliver.sang@intel.com>
Link: https://lore.kernel.org/all/20210103142726.GC30643@xsang-OptiPlex-9020/
Reported-by: Vito Caputo <vcaputo@pengaru.com>
Link: https://lore.kernel.org/lkml/20210921193249.el476vlhg5k6lfcq@shells.gnugeneration.com/
Reported-by: Jann Horn <jannh@google.com>
Link: https://lore.kernel.org/lkml/CAG48ez2zC=+PuNgezH53HBPZ8CXU5H=vkWx7nJs60G8RXt3w0Q@mail.gmail.com/
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 fs/proc/base.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 533d5836eb9a..1f394095eb88 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -67,6 +67,7 @@
 #include <linux/mm.h>
 #include <linux/swap.h>
 #include <linux/rcupdate.h>
+#include <linux/kallsyms.h>
 #include <linux/stacktrace.h>
 #include <linux/resource.h>
 #include <linux/module.h>
@@ -386,17 +387,19 @@ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
 			  struct pid *pid, struct task_struct *task)
 {
 	unsigned long wchan;
+	char symname[KSYM_NAME_LEN];
 
-	if (ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS))
-		wchan = get_wchan(task);
-	else
-		wchan = 0;
+	if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS))
+		goto print0;
 
-	if (wchan)
-		seq_printf(m, "%ps", (void *) wchan);
-	else
-		seq_putc(m, '0');
+	wchan = get_wchan(task);
+	if (wchan && !lookup_symbol_name(wchan, symname)) {
+		seq_puts(m, symname);
+		return 0;
+	}
 
+print0:
+	seq_putc(m, '0');
 	return 0;
 }
 #endif /* CONFIG_KALLSYMS */
-- 
2.30.2


* [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked
  2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
  2021-09-29 22:02 ` [PATCH v2 1/6] Revert "proc/wchan: use printk format instead of lookup_symbol_name()" Kees Cook
@ 2021-09-29 22:02 ` Kees Cook
  2021-09-30  8:27   ` Peter Zijlstra
  2021-09-29 22:02 ` [PATCH v2 3/6] proc: Use task_is_running() for wchan in /proc/$pid/stat Kees Cook
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Kees Cook @ 2021-09-29 22:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, kernel test robot, Vito Caputo,
	Jann Horn, Andrew Morton, Christian Brauner, Anand K Mistry,
	Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, Josh Poimboeuf, linux-kernel,
	linux-fsdevel, linux-hardening, x86

Having a stable wchan means the process must be blocked and must stay
that way while stack unwinding is performed.

Cc: Ingo Molnar <mingo@redhat.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 16 ++++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 39039ce8ac4c..0c8185089e20 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2137,6 +2137,7 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
 #endif /* CONFIG_SMP */
 
 extern bool sched_task_on_rq(struct task_struct *p);
+extern unsigned long sched_task_get_wchan(struct task_struct *p);
 
 /*
  * In order to reduce various lock holder preemption latencies provide an
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1bba4128a3e6..4a30455e1ff5 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1962,6 +1962,22 @@ bool sched_task_on_rq(struct task_struct *p)
 	return task_on_rq_queued(p);
 }
 
+unsigned long sched_task_get_wchan(struct task_struct *p)
+{
+	unsigned int state;
+	unsigned long ip = 0;
+
+	/* Only get wchan if task is blocked and we can keep it that way. */
+	raw_spin_lock_irq(&p->pi_lock);
+	state = READ_ONCE(p->__state);
+	smp_rmb(); /* see try_to_wake_up() */
+	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
+		ip = get_wchan(p);
+	raw_spin_unlock_irq(&p->pi_lock);
+
+	return ip;
+}
+
 static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
 {
 	if (!(flags & ENQUEUE_NOCLOCK))
-- 
2.30.2


* [PATCH v2 3/6] proc: Use task_is_running() for wchan in /proc/$pid/stat
  2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
  2021-09-29 22:02 ` [PATCH v2 1/6] Revert "proc/wchan: use printk format instead of lookup_symbol_name()" Kees Cook
  2021-09-29 22:02 ` [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked Kees Cook
@ 2021-09-29 22:02 ` Kees Cook
  2021-09-29 22:02 ` [PATCH v2 4/6] proc: Only report /proc/$pid/wchan when process is blocked Kees Cook
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Kees Cook @ 2021-09-29 22:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, Andrew Morton, Christian Brauner, Anand K Mistry,
	Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Jann Horn, Michal Hocko, Helge Deller, linux-fsdevel,
	kernel test robot, Vito Caputo, Ingo Molnar, Juri Lelli,
	Vincent Guittot, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Mel Gorman, Daniel Bristot de Oliveira, Qi Zheng,
	Tobin C. Harding, Tycho Andersen, Thomas Gleixner,
	Borislav Petkov, H. Peter Anvin, Mark Rutland, Jens Axboe,
	Stefan Metzmacher, Lai Jiangshan, Andy Lutomirski, Dave Hansen,
	Eric W. Biederman, Ohhoon Kwon, Kalesh Singh, YiFei Zhu,
	Josh Poimboeuf, linux-kernel, linux-hardening, x86

The implementations of get_wchan() can be expensive. The only information
imparted here is whether or not a process is currently blocked in the
scheduler (and even this doesn't need to be exact). Avoid doing the
heavy lifting of stack walking and just report that information by using
task_is_running().
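
For reference (not part of the patch), a minimal userspace sketch of how
that field can be read; it assumes the proc(5) numbering, where wchan is
the 35th field, and after this change the value it reports is only ever
0 or 1:

/* Sketch: read the wchan field from /proc/PID/stat. */
#include <stdio.h>
#include <string.h>

int read_stat_wchan(const char *pid, unsigned long *wchan)
{
	char path[64], buf[4096], *p;
	int field;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/stat", pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(buf, sizeof(buf), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);

	/* comm (field 2) may contain spaces; skip past its closing ')'. */
	p = strrchr(buf, ')');
	if (!p || p[1] != ' ')
		return -1;
	p += 2;			/* now at field 3 (state) */

	for (field = 3; field < 35; field++) {
		p = strchr(p, ' ');
		if (!p)
			return -1;
		p++;		/* start of the next field */
	}

	/* Field 35: 0 if the task is running, 1 if it is blocked. */
	return sscanf(p, "%lu", wchan) == 1 ? 0 : -1;
}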

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Cc: Anand K Mistry <amistry@google.com>
Cc: "Kenta.Tada@sony.com" <Kenta.Tada@sony.com>
Cc: Alexey Gladkov <legion@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Michael Weiß" <michael.weiss@aisec.fraunhofer.de>
Cc: Jann Horn <jannh@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 fs/proc/array.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/fs/proc/array.c b/fs/proc/array.c
index 49be8c8ef555..77cf4187adec 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -541,7 +541,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
 	}
 
 	if (permitted && (!whole || num_threads < 2))
-		wchan = get_wchan(task);
+		wchan = !task_is_running(task);
 	if (!whole) {
 		min_flt = task->min_flt;
 		maj_flt = task->maj_flt;
@@ -606,10 +606,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
 	 *
 	 * This works with older implementations of procps as well.
 	 */
-	if (wchan)
-		seq_puts(m, " 1");
-	else
-		seq_puts(m, " 0");
+	seq_put_decimal_ull(m, " ", wchan);
 
 	seq_put_decimal_ull(m, " ", 0);
 	seq_put_decimal_ull(m, " ", 0);
-- 
2.30.2


* [PATCH v2 4/6] proc: Only report /proc/$pid/wchan when process is blocked
  2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
                   ` (2 preceding siblings ...)
  2021-09-29 22:02 ` [PATCH v2 3/6] proc: Use task_is_running() for wchan in /proc/$pid/stat Kees Cook
@ 2021-09-29 22:02 ` Kees Cook
  2021-09-29 22:02 ` [PATCH v2 5/6] x86: Fix get_wchan() to support the ORC unwinder Kees Cook
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Kees Cook @ 2021-09-29 22:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, Andrew Morton, Christian Brauner, Jann Horn,
	Michal Hocko, Helge Deller, linux-fsdevel, kernel test robot,
	Vito Caputo, Ingo Molnar, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Anand K Mistry, Kenta.Tada,
	Alexey Gladkov, Michael Weiß,
	Qi Zheng, Tobin C. Harding, Tycho Andersen, Thomas Gleixner,
	Borislav Petkov, H. Peter Anvin, Mark Rutland, Jens Axboe,
	Stefan Metzmacher, Lai Jiangshan, Andy Lutomirski, Dave Hansen,
	Eric W. Biederman, Ohhoon Kwon, Kalesh Singh, YiFei Zhu,
	Josh Poimboeuf, linux-kernel, linux-hardening, x86

The current get_wchan() implementations do their best to avoid problems
when walking the stack of a process in an unknown state, but this is
fragile and unnecessary. It's only useful to report wchan when a process
is actually blocked, so use the new sched_task_get_wchan() instead.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Cc: Jann Horn <jannh@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Helge Deller <deller@gmx.de>
Cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 fs/proc/base.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 1f394095eb88..7853592778b2 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -392,7 +392,8 @@ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
 	if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS))
 		goto print0;
 
-	wchan = get_wchan(task);
+	wchan = sched_task_get_wchan(task);
+	/* Must only report symbolized addresses and never raw pointers. */
 	if (wchan && !lookup_symbol_name(wchan, symname)) {
 		seq_puts(m, symname);
 		return 0;
-- 
2.30.2


* [PATCH v2 5/6] x86: Fix get_wchan() to support the ORC unwinder
  2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
                   ` (3 preceding siblings ...)
  2021-09-29 22:02 ` [PATCH v2 4/6] proc: Only report /proc/$pid/wchan when process is blocked Kees Cook
@ 2021-09-29 22:02 ` Kees Cook
  2021-09-29 22:02 ` [PATCH v2 6/6] leaking_addresses: Always print a trailing newline Kees Cook
  2021-09-30  1:01 ` [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Josh Poimboeuf
  6 siblings, 0 replies; 15+ messages in thread
From: Kees Cook @ 2021-09-29 22:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, Qi Zheng, kernel test robot, Vito Caputo, Jann Horn,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Andrew Morton, Christian Brauner,
	Anand K Mistry, Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Tobin C. Harding, Tycho Andersen,
	Thomas Gleixner, Borislav Petkov, H. Peter Anvin, Mark Rutland,
	Jens Axboe, Stefan Metzmacher, Lai Jiangshan, Andy Lutomirski,
	Dave Hansen, Eric W. Biederman, Ohhoon Kwon, Kalesh Singh,
	YiFei Zhu, Josh Poimboeuf, linux-kernel, linux-fsdevel,
	linux-hardening, x86

From: Qi Zheng <zhengqi.arch@bytedance.com>

Currently, the kernel CONFIG_UNWINDER_ORC option is enabled by default
on x86, but the implementation of get_wchan() is still based on the
frame pointer unwinder, so /proc/<pid>/wchan usually returns 0 regardless
of whether the task <pid> is running.

Reimplement get_wchan() by calling stack_trace_save_tsk(), which works
with both the ORC and frame pointer unwinders.

Fixes: ee9f8fce9964 ("x86/unwind: Add the ORC unwinder")
Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/20210831083625.59554-1-zhengqi.arch@bytedance.com
---
 arch/x86/kernel/process.c | 51 +++------------------------------------
 1 file changed, 3 insertions(+), 48 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 1d9463e3096b..e645925f9f02 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -944,58 +944,13 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
  */
 unsigned long get_wchan(struct task_struct *p)
 {
-	unsigned long start, bottom, top, sp, fp, ip, ret = 0;
-	int count = 0;
+	unsigned long entry = 0;
 
 	if (p == current || task_is_running(p))
 		return 0;
 
-	if (!try_get_task_stack(p))
-		return 0;
-
-	start = (unsigned long)task_stack_page(p);
-	if (!start)
-		goto out;
-
-	/*
-	 * Layout of the stack page:
-	 *
-	 * ----------- topmax = start + THREAD_SIZE - sizeof(unsigned long)
-	 * PADDING
-	 * ----------- top = topmax - TOP_OF_KERNEL_STACK_PADDING
-	 * stack
-	 * ----------- bottom = start
-	 *
-	 * The tasks stack pointer points at the location where the
-	 * framepointer is stored. The data on the stack is:
-	 * ... IP FP ... IP FP
-	 *
-	 * We need to read FP and IP, so we need to adjust the upper
-	 * bound by another unsigned long.
-	 */
-	top = start + THREAD_SIZE - TOP_OF_KERNEL_STACK_PADDING;
-	top -= 2 * sizeof(unsigned long);
-	bottom = start;
-
-	sp = READ_ONCE(p->thread.sp);
-	if (sp < bottom || sp > top)
-		goto out;
-
-	fp = READ_ONCE_NOCHECK(((struct inactive_task_frame *)sp)->bp);
-	do {
-		if (fp < bottom || fp > top)
-			goto out;
-		ip = READ_ONCE_NOCHECK(*(unsigned long *)(fp + sizeof(unsigned long)));
-		if (!in_sched_functions(ip)) {
-			ret = ip;
-			goto out;
-		}
-		fp = READ_ONCE_NOCHECK(*(unsigned long *)fp);
-	} while (count++ < 16 && !task_is_running(p));
-
-out:
-	put_task_stack(p);
-	return ret;
+	stack_trace_save_tsk(p, &entry, 1, 0);
+	return entry;
 }
 
 long do_arch_prctl_common(struct task_struct *task, int option,
-- 
2.30.2


* [PATCH v2 6/6] leaking_addresses: Always print a trailing newline
  2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
                   ` (4 preceding siblings ...)
  2021-09-29 22:02 ` [PATCH v2 5/6] x86: Fix get_wchan() to support the ORC unwinder Kees Cook
@ 2021-09-29 22:02 ` Kees Cook
  2021-09-30 14:37   ` Tycho Andersen
  2021-09-30  1:01 ` [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Josh Poimboeuf
  6 siblings, 1 reply; 15+ messages in thread
From: Kees Cook @ 2021-09-29 22:02 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Kees Cook, Tobin C. Harding, Tycho Andersen, linux-hardening,
	kernel test robot, Vito Caputo, Jann Horn, Ingo Molnar,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
	Andrew Morton, Christian Brauner, Anand K Mistry, Kenta.Tada,
	Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Thomas Gleixner,
	Borislav Petkov, H. Peter Anvin, Mark Rutland, Jens Axboe,
	Stefan Metzmacher, Lai Jiangshan, Andy Lutomirski, Dave Hansen,
	Eric W. Biederman, Ohhoon Kwon, Kalesh Singh, YiFei Zhu,
	Josh Poimboeuf, linux-kernel, linux-fsdevel, x86

For files that lack a trailing newline and whose contents match a leaking
address (e.g. wchan[1]), the leaking_addresses.pl report line would run
together with the next line, making the output look corrupted.

Unconditionally remove the newline on input, and write it back out on
output.

[1] https://lore.kernel.org/all/20210103142726.GC30643@xsang-OptiPlex-9020/

Cc: "Tobin C. Harding" <me@tobin.cc>
Cc: Tycho Andersen <tycho@tycho.pizza>
Cc: linux-hardening@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 scripts/leaking_addresses.pl | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/scripts/leaking_addresses.pl b/scripts/leaking_addresses.pl
index b2d8b8aa2d99..8f636a23bc3f 100755
--- a/scripts/leaking_addresses.pl
+++ b/scripts/leaking_addresses.pl
@@ -455,8 +455,9 @@ sub parse_file
 
 	open my $fh, "<", $file or return;
 	while ( <$fh> ) {
+		chomp;
 		if (may_leak_address($_)) {
-			print $file . ': ' . $_;
+			printf("$file: $_\n");
 		}
 	}
 	close $fh;
-- 
2.30.2


* Re: [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback
  2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
                   ` (5 preceding siblings ...)
  2021-09-29 22:02 ` [PATCH v2 6/6] leaking_addresses: Always print a trailing newline Kees Cook
@ 2021-09-30  1:01 ` Josh Poimboeuf
  2021-09-30  8:40   ` Peter Zijlstra
  6 siblings, 1 reply; 15+ messages in thread
From: Josh Poimboeuf @ 2021-09-30  1:01 UTC (permalink / raw)
  To: Kees Cook
  Cc: Peter Zijlstra, kernel test robot, Vito Caputo, Jann Horn,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Andrew Morton, Christian Brauner,
	Anand K Mistry, Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, linux-kernel, linux-fsdevel,
	linux-hardening, x86

On Wed, Sep 29, 2021 at 03:02:12PM -0700, Kees Cook wrote:
> Hi,
> 
> This attempts to solve the issues from the discussion here[1]. Specifically:
> 
> 1) wchan leaking raw addresses since 152c432b128c (v5.12).
> 
> patch 1 fixes this with a revert.
> 
> 2) wchan has been broken under ORC since ee9f8fce9964 (v4.14), seen as
>    a failure to walk the stack that _usually_ results in a 0 value.
> 
> patches 2-5 fix this with Qi Zheng's new get_wchan() and changes to
> the /proc code to use the new helper suggested by Peter, which does the
> stack walk only if the process can be kept blocked:
> https://lore.kernel.org/lkml/20210929194026.GA4323@worktop.programming.kicks-ass.net/
> 
> Peter, can you take this via -tip?

It all looks sane to me.  Thanks for cleaning up this mess.

- Should we use a similar sched wrapper for /proc/$pid/stack to make its
  raciness go away?

- At the risk of triggering a much larger patch set, I suspect
  get_wchan() can be made generic ;-)  It's just a glorified wrapper
  around stack_trace_save_tsk() (a rough sketch follows below).
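
For what it's worth, a rough sketch (not part of this series) of what
such a generic helper could look like, assuming an architecture that
selects CONFIG_STACKTRACE so that stack_trace_save_tsk() is available;
it is essentially the x86 version from patch 5, lifted out of arch code:

/*
 * Sketch only: a possible generic get_wchan(). The "nosched" stack
 * trace consumer used by stack_trace_save_tsk() already skips
 * scheduler functions, which is what makes a depth of 1 sufficient.
 */
#include <linux/sched.h>
#include <linux/stacktrace.h>

unsigned long get_wchan(struct task_struct *p)
{
	unsigned long entry = 0;

	if (p == current || task_is_running(p))
		return 0;

	/* Record the first non-scheduler return address of the blocked task. */
	stack_trace_save_tsk(p, &entry, 1, 0);
	return entry;
}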

Regardless:

Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>

-- 
Josh


* Re: [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked
  2021-09-29 22:02 ` [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked Kees Cook
@ 2021-09-30  8:27   ` Peter Zijlstra
  2021-09-30  8:28     ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2021-09-30  8:27 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, kernel test robot, Vito Caputo,
	Jann Horn, Andrew Morton, Christian Brauner, Anand K Mistry,
	Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, Josh Poimboeuf, linux-kernel,
	linux-fsdevel, linux-hardening, x86

On Wed, Sep 29, 2021 at 03:02:14PM -0700, Kees Cook wrote:
> Having a stable wchan means the process must be blocked and must stay
> that way while stack unwinding is performed.

How's this instead?

---
Subject: sched: Wrap get_wchan() to keep task blocked
From: Kees Cook <keescook@chromium.org>
Date: Wed, 29 Sep 2021 15:02:14 -0700

From: Kees Cook <keescook@chromium.org>

Having a stable wchan means the process must be blocked and must stay
that way while stack unwinding is performed.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
[peterz: __get_wchan]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/alpha/include/asm/processor.h      |    2 +-
 arch/alpha/kernel/process.c             |    5 ++---
 arch/arc/include/asm/processor.h        |    2 +-
 arch/arc/kernel/stacktrace.c            |    4 ++--
 arch/arm/include/asm/processor.h        |    2 +-
 arch/arm/kernel/process.c               |    4 +---
 arch/arm64/include/asm/processor.h      |    2 +-
 arch/arm64/kernel/process.c             |    4 +---
 arch/csky/include/asm/processor.h       |    2 +-
 arch/csky/kernel/stacktrace.c           |    5 ++---
 arch/h8300/include/asm/processor.h      |    2 +-
 arch/h8300/kernel/process.c             |    5 +----
 arch/hexagon/include/asm/processor.h    |    2 +-
 arch/hexagon/kernel/process.c           |    4 +---
 arch/ia64/include/asm/processor.h       |    2 +-
 arch/ia64/kernel/process.c              |    5 +----
 arch/m68k/include/asm/processor.h       |    2 +-
 arch/m68k/kernel/process.c              |    4 +---
 arch/microblaze/include/asm/processor.h |    2 +-
 arch/microblaze/kernel/process.c        |    2 +-
 arch/mips/include/asm/processor.h       |    2 +-
 arch/mips/kernel/process.c              |    8 +++-----
 arch/nds32/include/asm/processor.h      |    2 +-
 arch/nds32/kernel/process.c             |    7 +------
 arch/nios2/include/asm/processor.h      |    2 +-
 arch/nios2/kernel/process.c             |    5 +----
 arch/openrisc/include/asm/processor.h   |    2 +-
 arch/openrisc/kernel/process.c          |    2 +-
 arch/parisc/include/asm/processor.h     |    2 +-
 arch/parisc/kernel/process.c            |    5 +----
 arch/powerpc/include/asm/processor.h    |    2 +-
 arch/powerpc/kernel/process.c           |    9 +++------
 arch/riscv/include/asm/processor.h      |    2 +-
 arch/riscv/kernel/stacktrace.c          |   12 +++++-------
 arch/s390/include/asm/processor.h       |    2 +-
 arch/s390/kernel/process.c              |    4 ++--
 arch/sh/include/asm/processor_32.h      |    2 +-
 arch/sh/kernel/process_32.c             |    5 +----
 arch/sparc/include/asm/processor_32.h   |    2 +-
 arch/sparc/include/asm/processor_64.h   |    2 +-
 arch/sparc/kernel/process_32.c          |    5 +----
 arch/sparc/kernel/process_64.c          |    5 +----
 arch/um/include/asm/processor-generic.h |    2 +-
 arch/um/kernel/process.c                |    5 +----
 arch/x86/include/asm/processor.h        |    2 +-
 arch/x86/kernel/process.c               |    5 +----
 arch/xtensa/include/asm/processor.h     |    2 +-
 arch/xtensa/kernel/process.c            |    5 +----
 include/linux/sched.h                   |    1 +
 kernel/sched/core.c                     |   19 +++++++++++++++++++
 50 files changed, 80 insertions(+), 112 deletions(-)

--- a/arch/alpha/include/asm/processor.h
+++ b/arch/alpha/include/asm/processor.h
@@ -42,7 +42,7 @@ extern void start_thread(struct pt_regs
 struct task_struct;
 extern void release_thread(struct task_struct *);
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk) (task_pt_regs(tsk)->pc)
 
--- a/arch/alpha/kernel/process.c
+++ b/arch/alpha/kernel/process.c
@@ -376,12 +376,11 @@ thread_saved_pc(struct task_struct *t)
 }
 
 unsigned long
-get_wchan(struct task_struct *p)
+__get_wchan(struct task_struct *p)
 {
 	unsigned long schedule_frame;
 	unsigned long pc;
-	if (!p || p == current || task_is_running(p))
-		return 0;
+
 	/*
 	 * This one depends on the frame size of schedule().  Do a
 	 * "disass schedule" in gdb to find the frame size.  Also, the
--- a/arch/arc/include/asm/processor.h
+++ b/arch/arc/include/asm/processor.h
@@ -70,7 +70,7 @@ struct task_struct;
 extern void start_thread(struct pt_regs * regs, unsigned long pc,
 			 unsigned long usp);
 
-extern unsigned int get_wchan(struct task_struct *p);
+extern unsigned int __get_wchan(struct task_struct *p);
 
 #endif /* !__ASSEMBLY__ */
 
--- a/arch/arc/kernel/stacktrace.c
+++ b/arch/arc/kernel/stacktrace.c
@@ -15,7 +15,7 @@
  *      = specifics of data structs where trace is saved(CONFIG_STACKTRACE etc)
  *
  *  vineetg: March 2009
- *  -Implemented correct versions of thread_saved_pc() and get_wchan()
+ *  -Implemented correct versions of thread_saved_pc() and __get_wchan()
  *
  *  rajeshwarr: 2008
  *  -Initial implementation
@@ -248,7 +248,7 @@ void show_stack(struct task_struct *tsk,
  * Of course just returning schedule( ) would be pointless so unwind until
  * the function is not in schedular code
  */
-unsigned int get_wchan(struct task_struct *tsk)
+unsigned int __get_wchan(struct task_struct *tsk)
 {
 	return arc_unwind_core(tsk, NULL, __get_first_nonsched, NULL);
 }
--- a/arch/arm/include/asm/processor.h
+++ b/arch/arm/include/asm/processor.h
@@ -84,7 +84,7 @@ struct task_struct;
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define task_pt_regs(p) \
 	((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -276,13 +276,11 @@ int copy_thread(unsigned long clone_flag
 	return 0;
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	struct stackframe frame;
 	unsigned long stack_page;
 	int count = 0;
-	if (!p || p == current || task_is_running(p))
-		return 0;
 
 	frame.fp = thread_saved_fp(p);
 	frame.sp = thread_saved_sp(p);
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -257,7 +257,7 @@ struct task_struct;
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 void update_sctlr_el1(u64 sctlr);
 
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -528,13 +528,11 @@ __notrace_funcgraph struct task_struct *
 	return last;
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	struct stackframe frame;
 	unsigned long stack_page, ret = 0;
 	int count = 0;
-	if (!p || p == current || task_is_running(p))
-		return 0;
 
 	stack_page = (unsigned long)try_get_task_stack(p);
 	if (!stack_page)
--- a/arch/csky/include/asm/processor.h
+++ b/arch/csky/include/asm/processor.h
@@ -81,7 +81,7 @@ static inline void release_thread(struct
 
 extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
 #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->usp)
--- a/arch/csky/kernel/stacktrace.c
+++ b/arch/csky/kernel/stacktrace.c
@@ -111,12 +111,11 @@ static bool save_wchan(unsigned long pc,
 	return false;
 }
 
-unsigned long get_wchan(struct task_struct *task)
+unsigned long __get_wchan(struct task_struct *task)
 {
 	unsigned long pc = 0;
 
-	if (likely(task && task != current && !task_is_running(task)))
-		walk_stackframe(task, NULL, save_wchan, &pc);
+	walk_stackframe(task, NULL, save_wchan, &pc);
 	return pc;
 }
 
--- a/arch/h8300/include/asm/processor.h
+++ b/arch/h8300/include/asm/processor.h
@@ -105,7 +105,7 @@ static inline void release_thread(struct
 {
 }
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define	KSTK_EIP(tsk)	\
 	({			 \
--- a/arch/h8300/kernel/process.c
+++ b/arch/h8300/kernel/process.c
@@ -128,15 +128,12 @@ int copy_thread(unsigned long clone_flag
 	return 0;
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long fp, pc;
 	unsigned long stack_page;
 	int count = 0;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	stack_page = (unsigned long)p;
 	fp = ((struct pt_regs *)p->thread.ksp)->er6;
 	do {
--- a/arch/hexagon/include/asm/processor.h
+++ b/arch/hexagon/include/asm/processor.h
@@ -64,7 +64,7 @@ struct thread_struct {
 extern void release_thread(struct task_struct *dead_task);
 
 /* Get wait channel for task P.  */
-extern unsigned long get_wchan(struct task_struct *p);
+extern unsigned long __get_wchan(struct task_struct *p);
 
 /*  The following stuff is pretty HEXAGON specific.  */
 
--- a/arch/hexagon/kernel/process.c
+++ b/arch/hexagon/kernel/process.c
@@ -130,13 +130,11 @@ void flush_thread(void)
  * is an identification of the point at which the scheduler
  * was invoked by a blocked thread.
  */
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long fp, pc;
 	unsigned long stack_page;
 	int count = 0;
-	if (!p || p == current || task_is_running(p))
-		return 0;
 
 	stack_page = (unsigned long)task_stack_page(p);
 	fp = ((struct hexagon_switch_stack *)p->thread.switch_sp)->fp;
--- a/arch/ia64/include/asm/processor.h
+++ b/arch/ia64/include/asm/processor.h
@@ -330,7 +330,7 @@ struct task_struct;
 #define release_thread(dead_task)
 
 /* Get wait channel for task P.  */
-extern unsigned long get_wchan (struct task_struct *p);
+extern unsigned long __get_wchan (struct task_struct *p);
 
 /* Return instruction pointer of blocked task TSK.  */
 #define KSTK_EIP(tsk)					\
--- a/arch/ia64/kernel/process.c
+++ b/arch/ia64/kernel/process.c
@@ -523,15 +523,12 @@ exit_thread (struct task_struct *tsk)
 }
 
 unsigned long
-get_wchan (struct task_struct *p)
+__get_wchan (struct task_struct *p)
 {
 	struct unw_frame_info info;
 	unsigned long ip;
 	int count = 0;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	/*
 	 * Note: p may not be a blocked task (it could be current or
 	 * another process running on some other CPU.  Rather than
--- a/arch/m68k/include/asm/processor.h
+++ b/arch/m68k/include/asm/processor.h
@@ -125,7 +125,7 @@ static inline void release_thread(struct
 {
 }
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define	KSTK_EIP(tsk)	\
     ({			\
--- a/arch/m68k/kernel/process.c
+++ b/arch/m68k/kernel/process.c
@@ -263,13 +263,11 @@ int dump_fpu (struct pt_regs *regs, stru
 }
 EXPORT_SYMBOL(dump_fpu);
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long fp, pc;
 	unsigned long stack_page;
 	int count = 0;
-	if (!p || p == current || task_is_running(p))
-		return 0;
 
 	stack_page = (unsigned long)task_stack_page(p);
 	fp = ((struct switch_stack *)p->thread.ksp)->a6;
--- a/arch/microblaze/include/asm/processor.h
+++ b/arch/microblaze/include/asm/processor.h
@@ -68,7 +68,7 @@ static inline void release_thread(struct
 {
 }
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 /* The size allocated for kernel stacks. This _must_ be a power of two! */
 # define KERNEL_STACK_SIZE	0x2000
--- a/arch/microblaze/kernel/process.c
+++ b/arch/microblaze/kernel/process.c
@@ -112,7 +112,7 @@ int copy_thread(unsigned long clone_flag
 	return 0;
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 /* TBD (used by procfs) */
 	return 0;
--- a/arch/mips/include/asm/processor.h
+++ b/arch/mips/include/asm/processor.h
@@ -369,7 +369,7 @@ static inline void flush_thread(void)
 {
 }
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
 			 THREAD_SIZE - 32 - sizeof(struct pt_regs))
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -511,7 +511,7 @@ static int __init frame_info_init(void)
 
 	/*
 	 * Without schedule() frame info, result given by
-	 * thread_saved_pc() and get_wchan() are not reliable.
+	 * thread_saved_pc() and __get_wchan() are not reliable.
 	 */
 	if (schedule_mfi.pc_offset < 0)
 		printk("Can't analyze schedule() prologue at %p\n", schedule);
@@ -652,9 +652,9 @@ unsigned long unwind_stack(struct task_s
 #endif
 
 /*
- * get_wchan - a maintenance nightmare^W^Wpain in the ass ...
+ * __get_wchan - a maintenance nightmare^W^Wpain in the ass ...
  */
-unsigned long get_wchan(struct task_struct *task)
+unsigned long __get_wchan(struct task_struct *task)
 {
 	unsigned long pc = 0;
 #ifdef CONFIG_KALLSYMS
@@ -662,8 +662,6 @@ unsigned long get_wchan(struct task_stru
 	unsigned long ra = 0;
 #endif
 
-	if (!task || task == current || task_is_running(task))
-		goto out;
 	if (!task_stack_page(task))
 		goto out;
 
--- a/arch/nds32/include/asm/processor.h
+++ b/arch/nds32/include/asm/processor.h
@@ -83,7 +83,7 @@ extern struct task_struct *last_task_use
 /* Prepare to copy thread state - unlazy all lazy status */
 #define prepare_to_copy(tsk)	do { } while (0)
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define cpu_relax()			barrier()
 
--- a/arch/nds32/kernel/process.c
+++ b/arch/nds32/kernel/process.c
@@ -233,15 +233,12 @@ int dump_fpu(struct pt_regs *regs, elf_f
 
 EXPORT_SYMBOL(dump_fpu);
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long fp, lr;
 	unsigned long stack_start, stack_end;
 	int count = 0;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	if (IS_ENABLED(CONFIG_FRAME_POINTER)) {
 		stack_start = (unsigned long)end_of_stack(p);
 		stack_end = (unsigned long)task_stack_page(p) + THREAD_SIZE;
@@ -258,5 +255,3 @@ unsigned long get_wchan(struct task_stru
 	}
 	return 0;
 }
-
-EXPORT_SYMBOL(get_wchan);
--- a/arch/nios2/include/asm/processor.h
+++ b/arch/nios2/include/asm/processor.h
@@ -69,7 +69,7 @@ static inline void release_thread(struct
 {
 }
 
-extern unsigned long get_wchan(struct task_struct *p);
+extern unsigned long __get_wchan(struct task_struct *p);
 
 #define task_pt_regs(p) \
 	((struct pt_regs *)(THREAD_SIZE + task_stack_page(p)) - 1)
--- a/arch/nios2/kernel/process.c
+++ b/arch/nios2/kernel/process.c
@@ -217,15 +217,12 @@ void dump(struct pt_regs *fp)
 	pr_emerg("\n\n");
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long fp, pc;
 	unsigned long stack_page;
 	int count = 0;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	stack_page = (unsigned long)p;
 	fp = ((struct switch_stack *)p->thread.ksp)->fp;	/* ;dgt2 */
 	do {
--- a/arch/openrisc/include/asm/processor.h
+++ b/arch/openrisc/include/asm/processor.h
@@ -73,7 +73,7 @@ struct thread_struct {
 
 void start_thread(struct pt_regs *regs, unsigned long nip, unsigned long sp);
 void release_thread(struct task_struct *);
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define cpu_relax()     barrier()
 
--- a/arch/openrisc/kernel/process.c
+++ b/arch/openrisc/kernel/process.c
@@ -263,7 +263,7 @@ void dump_elf_thread(elf_greg_t *dest, s
 	dest[35] = 0;
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	/* TODO */
 
--- a/arch/parisc/include/asm/processor.h
+++ b/arch/parisc/include/asm/processor.h
@@ -273,7 +273,7 @@ struct mm_struct;
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-extern unsigned long get_wchan(struct task_struct *p);
+extern unsigned long __get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk)	((tsk)->thread.regs.iaoq[0])
 #define KSTK_ESP(tsk)	((tsk)->thread.regs.gr[30])
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -240,15 +240,12 @@ copy_thread(unsigned long clone_flags, u
 }
 
 unsigned long
-get_wchan(struct task_struct *p)
+__get_wchan(struct task_struct *p)
 {
 	struct unwind_frame_info info;
 	unsigned long ip;
 	int count = 0;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	/*
 	 * These bracket the sleeping functions..
 	 */
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -300,7 +300,7 @@ struct thread_struct {
 
 #define task_pt_regs(tsk)	((tsk)->thread.regs)
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->nip: 0)
 #define KSTK_ESP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->gpr[1]: 0)
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2111,14 +2111,11 @@ int validate_sp(unsigned long sp, struct
 
 EXPORT_SYMBOL(validate_sp);
 
-static unsigned long __get_wchan(struct task_struct *p)
+static unsigned long ___get_wchan(struct task_struct *p)
 {
 	unsigned long ip, sp;
 	int count = 0;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	sp = p->thread.ksp;
 	if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD))
 		return 0;
@@ -2137,14 +2134,14 @@ static unsigned long __get_wchan(struct
 	return 0;
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long ret;
 
 	if (!try_get_task_stack(p))
 		return 0;
 
-	ret = __get_wchan(p);
+	ret = ___get_wchan(p);
 
 	put_task_stack(p);
 
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -66,7 +66,7 @@ static inline void release_thread(struct
 {
 }
 
-extern unsigned long get_wchan(struct task_struct *p);
+extern unsigned long __get_wchan(struct task_struct *p);
 
 
 static inline void wait_for_interrupt(void)
--- a/arch/riscv/kernel/stacktrace.c
+++ b/arch/riscv/kernel/stacktrace.c
@@ -128,16 +128,14 @@ static bool save_wchan(void *arg, unsign
 	return true;
 }
 
-unsigned long get_wchan(struct task_struct *task)
+unsigned long __get_wchan(struct task_struct *task)
 {
 	unsigned long pc = 0;
 
-	if (likely(task && task != current && !task_is_running(task))) {
-		if (!try_get_task_stack(task))
-			return 0;
-		walk_stackframe(task, NULL, save_wchan, &pc);
-		put_task_stack(task);
-	}
+	if (!try_get_task_stack(task))
+		return 0;
+	walk_stackframe(task, NULL, save_wchan, &pc);
+	put_task_stack(task);
 	return pc;
 }
 
--- a/arch/s390/include/asm/processor.h
+++ b/arch/s390/include/asm/processor.h
@@ -192,7 +192,7 @@ static inline void release_thread(struct
 void guarded_storage_release(struct task_struct *tsk);
 void gs_load_bc_cb(struct pt_regs *regs);
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 #define task_pt_regs(tsk) ((struct pt_regs *) \
         (task_stack_page(tsk) + THREAD_SIZE) - 1)
 #define KSTK_EIP(tsk)	(task_pt_regs(tsk)->psw.addr)
--- a/arch/s390/kernel/process.c
+++ b/arch/s390/kernel/process.c
@@ -181,12 +181,12 @@ void execve_tail(void)
 	asm volatile("sfpc %0" : : "d" (0));
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	struct unwind_state state;
 	unsigned long ip = 0;
 
-	if (!p || p == current || task_is_running(p) || !task_stack_page(p))
+	if (!task_stack_page(p))
 		return 0;
 
 	if (!try_get_task_stack(p))
--- a/arch/sh/include/asm/processor_32.h
+++ b/arch/sh/include/asm/processor_32.h
@@ -180,7 +180,7 @@ static inline void show_code(struct pt_r
 }
 #endif
 
-extern unsigned long get_wchan(struct task_struct *p);
+extern unsigned long __get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->pc)
 #define KSTK_ESP(tsk)  (task_pt_regs(tsk)->regs[15])
--- a/arch/sh/kernel/process_32.c
+++ b/arch/sh/kernel/process_32.c
@@ -182,13 +182,10 @@ __switch_to(struct task_struct *prev, st
 	return prev;
 }
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long pc;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	/*
 	 * The same comment as on the Alpha applies here, too ...
 	 */
--- a/arch/sparc/include/asm/processor_32.h
+++ b/arch/sparc/include/asm/processor_32.h
@@ -89,7 +89,7 @@ static inline void start_thread(struct p
 /* Free all resources held by a thread. */
 #define release_thread(tsk)		do { } while(0)
 
-unsigned long get_wchan(struct task_struct *);
+unsigned long __get_wchan(struct task_struct *);
 
 #define task_pt_regs(tsk) ((tsk)->thread.kregs)
 #define KSTK_EIP(tsk)  ((tsk)->thread.kregs->pc)
--- a/arch/sparc/include/asm/processor_64.h
+++ b/arch/sparc/include/asm/processor_64.h
@@ -183,7 +183,7 @@ do { \
 /* Free all resources held by a thread. */
 #define release_thread(tsk)		do { } while (0)
 
-unsigned long get_wchan(struct task_struct *task);
+unsigned long __get_wchan(struct task_struct *task);
 
 #define task_pt_regs(tsk) (task_thread_info(tsk)->kregs)
 #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->tpc)
--- a/arch/sparc/kernel/process_32.c
+++ b/arch/sparc/kernel/process_32.c
@@ -365,7 +365,7 @@ int copy_thread(unsigned long clone_flag
 	return 0;
 }
 
-unsigned long get_wchan(struct task_struct *task)
+unsigned long __get_wchan(struct task_struct *task)
 {
 	unsigned long pc, fp, bias = 0;
 	unsigned long task_base = (unsigned long) task;
@@ -373,9 +373,6 @@ unsigned long get_wchan(struct task_stru
 	struct reg_window32 *rw;
 	int count = 0;
 
-	if (!task || task == current || task_is_running(task))
-		goto out;
-
 	fp = task_thread_info(task)->ksp + bias;
 	do {
 		/* Bogus frame pointer? */
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -663,7 +663,7 @@ int arch_dup_task_struct(struct task_str
 	return 0;
 }
 
-unsigned long get_wchan(struct task_struct *task)
+unsigned long __get_wchan(struct task_struct *task)
 {
 	unsigned long pc, fp, bias = 0;
 	struct thread_info *tp;
@@ -671,9 +671,6 @@ unsigned long get_wchan(struct task_stru
         unsigned long ret = 0;
 	int count = 0; 
 
-	if (!task || task == current || task_is_running(task))
-		goto out;
-
 	tp = task_thread_info(task);
 	bias = STACK_BIAS;
 	fp = task_thread_info(task)->ksp + bias;
--- a/arch/um/include/asm/processor-generic.h
+++ b/arch/um/include/asm/processor-generic.h
@@ -106,6 +106,6 @@ extern struct cpuinfo_um boot_cpu_data;
 #define cache_line_size()	(boot_cpu_data.cache_alignment)
 
 #define KSTK_REG(tsk, reg) get_thread_reg(reg, &tsk->thread.switch_buf)
-extern unsigned long get_wchan(struct task_struct *p);
+extern unsigned long __get_wchan(struct task_struct *p);
 
 #endif
--- a/arch/um/kernel/process.c
+++ b/arch/um/kernel/process.c
@@ -364,14 +364,11 @@ unsigned long arch_align_stack(unsigned
 }
 #endif
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long stack_page, sp, ip;
 	bool seen_sched = 0;
 
-	if ((p == NULL) || (p == current) || task_is_running(p))
-		return 0;
-
 	stack_page = (unsigned long) task_stack_page(p);
 	/* Bail if the process has no kernel stack for some reason */
 	if (stack_page == 0)
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -590,7 +590,7 @@ static inline void load_sp0(unsigned lon
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-unsigned long get_wchan(struct task_struct *p);
+unsigned long __get_wchan(struct task_struct *p);
 
 /*
  * Generic CPUID function
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -943,13 +943,10 @@ unsigned long arch_randomize_brk(struct
  * because the task might wake up and we might look at a stack
  * changing under us.
  */
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long entry = 0;
 
-	if (p == current || task_is_running(p))
-		return 0;
-
 	stack_trace_save_tsk(p, &entry, 1, 0);
 	return entry;
 }
--- a/arch/xtensa/include/asm/processor.h
+++ b/arch/xtensa/include/asm/processor.h
@@ -215,7 +215,7 @@ struct mm_struct;
 /* Free all resources held by a thread. */
 #define release_thread(thread) do { } while(0)
 
-extern unsigned long get_wchan(struct task_struct *p);
+extern unsigned long __get_wchan(struct task_struct *p);
 
 #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
 #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->areg[1])
--- a/arch/xtensa/kernel/process.c
+++ b/arch/xtensa/kernel/process.c
@@ -298,15 +298,12 @@ int copy_thread(unsigned long clone_flag
  * These bracket the sleeping functions..
  */
 
-unsigned long get_wchan(struct task_struct *p)
+unsigned long __get_wchan(struct task_struct *p)
 {
 	unsigned long sp, pc;
 	unsigned long stack_page = (unsigned long) task_stack_page(p);
 	int count = 0;
 
-	if (!p || p == current || task_is_running(p))
-		return 0;
-
 	sp = p->thread.sp;
 	pc = MAKE_PC_FROM_RA(p->thread.ra, p->thread.sp);
 
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2139,6 +2139,7 @@ static inline void set_task_cpu(struct t
 #endif /* CONFIG_SMP */
 
 extern bool sched_task_on_rq(struct task_struct *p);
+extern unsigned long get_wchan(struct task_struct *p);
 
 /*
  * In order to reduce various lock holder preemption latencies provide an
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1962,6 +1962,25 @@ bool sched_task_on_rq(struct task_struct
 	return task_on_rq_queued(p);
 }
 
+unsigned long get_wchan(struct task_struct *p)
+{
+	unsigned long ip = 0;
+	unsigned int state;
+
+	if (!p || p == current)
+		return 0;
+
+	/* Only get wchan if task is blocked and we can keep it that way. */
+	raw_spin_lock_irq(&p->pi_lock);
+	state = READ_ONCE(p->__state);
+	smp_rmb(); /* see try_to_wake_up() */
+	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
+		ip = __get_wchan(p);
+	raw_spin_unlock_irq(&p->pi_lock);
+
+	return ip;
+}
+
 static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
 {
 	if (!(flags & ENQUEUE_NOCLOCK))


* Re: [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked
  2021-09-30  8:27   ` Peter Zijlstra
@ 2021-09-30  8:28     ` Peter Zijlstra
  2021-09-30  8:29       ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2021-09-30  8:28 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, kernel test robot, Vito Caputo,
	Jann Horn, Andrew Morton, Christian Brauner, Anand K Mistry,
	Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, Josh Poimboeuf, linux-kernel,
	linux-fsdevel, linux-hardening, x86

On Thu, Sep 30, 2021 at 10:27:07AM +0200, Peter Zijlstra wrote:
> On Wed, Sep 29, 2021 at 03:02:14PM -0700, Kees Cook wrote:
> > Having a stable wchan means the process must be blocked and must stay
> > that way while stack unwinding is performed.
> 
> How's this instead?
> 
On top of which we can do..

---
Subject: arch: __get_wchan || STACKTRACE_SUPPORT
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu Sep 30 10:08:13 CEST 2021

It's trivial to implement __get_wchan() with CONFIG_STACKTRACE.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arc/include/asm/processor.h        |    2 -
 arch/arc/kernel/stacktrace.c            |   17 --------------
 arch/arm/include/asm/processor.h        |    2 -
 arch/arm/kernel/process.c               |   22 -------------------
 arch/arm64/include/asm/processor.h      |    2 -
 arch/arm64/kernel/process.c             |   26 ----------------------
 arch/csky/include/asm/processor.h       |    2 -
 arch/csky/kernel/stacktrace.c           |   18 ---------------
 arch/microblaze/include/asm/processor.h |    2 -
 arch/microblaze/kernel/process.c        |    6 -----
 arch/mips/include/asm/processor.h       |    2 -
 arch/mips/kernel/process.c              |   27 -----------------------
 arch/nds32/include/asm/processor.h      |    2 -
 arch/nds32/kernel/process.c             |   23 -------------------
 arch/openrisc/include/asm/processor.h   |    1 
 arch/openrisc/kernel/process.c          |    6 -----
 arch/parisc/include/asm/processor.h     |    2 -
 arch/parisc/kernel/process.c            |   24 --------------------
 arch/powerpc/include/asm/processor.h    |    2 -
 arch/powerpc/kernel/process.c           |   37 --------------------------------
 arch/riscv/include/asm/processor.h      |    3 --
 arch/riscv/kernel/stacktrace.c          |   21 ------------------
 arch/s390/include/asm/processor.h       |    1 
 arch/s390/kernel/process.c              |   29 -------------------------
 arch/sh/include/asm/processor_32.h      |    2 -
 arch/sh/kernel/process_32.c             |   19 ----------------
 arch/sparc/include/asm/processor_64.h   |    2 -
 arch/sparc/kernel/process_64.c          |   28 ------------------------
 arch/um/include/asm/processor-generic.h |    1 
 arch/um/kernel/process.c                |   32 ---------------------------
 arch/x86/include/asm/processor.h        |    2 -
 arch/x86/kernel/process.c               |   14 ------------
 arch/xtensa/include/asm/processor.h     |    2 -
 arch/xtensa/kernel/process.c            |   29 -------------------------
 kernel/sched/core.c                     |   15 ++++++++++++
 lib/Kconfig.debug                       |    7 ------
 36 files changed, 16 insertions(+), 416 deletions(-)

--- a/arch/arc/include/asm/processor.h
+++ b/arch/arc/include/asm/processor.h
@@ -70,8 +70,6 @@ struct task_struct;
 extern void start_thread(struct pt_regs * regs, unsigned long pc,
 			 unsigned long usp);
 
-extern unsigned int __get_wchan(struct task_struct *p);
-
 #endif /* !__ASSEMBLY__ */
 
 /*
--- a/arch/arc/kernel/stacktrace.c
+++ b/arch/arc/kernel/stacktrace.c
@@ -217,14 +217,6 @@ static int __collect_all_but_sched(unsig
 
 #endif
 
-static int __get_first_nonsched(unsigned int address, void *unused)
-{
-	if (in_sched_functions(address))
-		return 0;
-
-	return -1;
-}
-
 /*-------------------------------------------------------------------------
  *              APIs expected by various kernel sub-systems
  *-------------------------------------------------------------------------
@@ -244,15 +236,6 @@ void show_stack(struct task_struct *tsk,
 	show_stacktrace(tsk, NULL, loglvl);
 }
 
-/* Another API expected by schedular, shows up in "ps" as Wait Channel
- * Of course just returning schedule( ) would be pointless so unwind until
- * the function is not in schedular code
- */
-unsigned int __get_wchan(struct task_struct *tsk)
-{
-	return arc_unwind_core(tsk, NULL, __get_first_nonsched, NULL);
-}
-
 #ifdef CONFIG_STACKTRACE
 
 /*
--- a/arch/arm/include/asm/processor.h
+++ b/arch/arm/include/asm/processor.h
@@ -84,8 +84,6 @@ struct task_struct;
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-unsigned long __get_wchan(struct task_struct *p);
-
 #define task_pt_regs(p) \
 	((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
 
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -276,28 +276,6 @@ int copy_thread(unsigned long clone_flag
 	return 0;
 }
 
-unsigned long __get_wchan(struct task_struct *p)
-{
-	struct stackframe frame;
-	unsigned long stack_page;
-	int count = 0;
-
-	frame.fp = thread_saved_fp(p);
-	frame.sp = thread_saved_sp(p);
-	frame.lr = 0;			/* recovered from the stack */
-	frame.pc = thread_saved_pc(p);
-	stack_page = (unsigned long)task_stack_page(p);
-	do {
-		if (frame.sp < stack_page ||
-		    frame.sp >= stack_page + THREAD_SIZE ||
-		    unwind_frame(&frame) < 0)
-			return 0;
-		if (!in_sched_functions(frame.pc))
-			return frame.pc;
-	} while (count ++ < 16);
-	return 0;
-}
-
 #ifdef CONFIG_MMU
 #ifdef CONFIG_KUSER_HELPERS
 /*
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -257,8 +257,6 @@ struct task_struct;
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-unsigned long __get_wchan(struct task_struct *p);
-
 void update_sctlr_el1(u64 sctlr);
 
 /* Thread switching */
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -528,32 +528,6 @@ __notrace_funcgraph struct task_struct *
 	return last;
 }
 
-unsigned long __get_wchan(struct task_struct *p)
-{
-	struct stackframe frame;
-	unsigned long stack_page, ret = 0;
-	int count = 0;
-
-	stack_page = (unsigned long)try_get_task_stack(p);
-	if (!stack_page)
-		return 0;
-
-	start_backtrace(&frame, thread_saved_fp(p), thread_saved_pc(p));
-
-	do {
-		if (unwind_frame(p, &frame))
-			goto out;
-		if (!in_sched_functions(frame.pc)) {
-			ret = frame.pc;
-			goto out;
-		}
-	} while (count++ < 16);
-
-out:
-	put_task_stack(p);
-	return ret;
-}
-
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
--- a/arch/csky/include/asm/processor.h
+++ b/arch/csky/include/asm/processor.h
@@ -81,8 +81,6 @@ static inline void release_thread(struct
 
 extern int kernel_thread(int (*fn)(void *), void *arg, unsigned long flags);
 
-unsigned long __get_wchan(struct task_struct *p);
-
 #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
 #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->usp)
 
--- a/arch/csky/kernel/stacktrace.c
+++ b/arch/csky/kernel/stacktrace.c
@@ -101,24 +101,6 @@ void show_stack(struct task_struct *task
 	walk_stackframe(task, NULL, print_trace_address, (void *)loglvl);
 }
 
-static bool save_wchan(unsigned long pc, void *arg)
-{
-	if (!in_sched_functions(pc)) {
-		unsigned long *p = arg;
-		*p = pc;
-		return true;
-	}
-	return false;
-}
-
-unsigned long __get_wchan(struct task_struct *task)
-{
-	unsigned long pc = 0;
-
-	walk_stackframe(task, NULL, save_wchan, &pc);
-	return pc;
-}
-
 #ifdef CONFIG_STACKTRACE
 static bool __save_trace(unsigned long pc, void *arg, bool nosched)
 {
--- a/arch/microblaze/include/asm/processor.h
+++ b/arch/microblaze/include/asm/processor.h
@@ -68,8 +68,6 @@ static inline void release_thread(struct
 {
 }
 
-unsigned long __get_wchan(struct task_struct *p);
-
 /* The size allocated for kernel stacks. This _must_ be a power of two! */
 # define KERNEL_STACK_SIZE	0x2000
 
--- a/arch/microblaze/kernel/process.c
+++ b/arch/microblaze/kernel/process.c
@@ -112,12 +112,6 @@ int copy_thread(unsigned long clone_flag
 	return 0;
 }
 
-unsigned long __get_wchan(struct task_struct *p)
-{
-/* TBD (used by procfs) */
-	return 0;
-}
-
 /* Set up a thread for executing a new program */
 void start_thread(struct pt_regs *regs, unsigned long pc, unsigned long usp)
 {
--- a/arch/mips/include/asm/processor.h
+++ b/arch/mips/include/asm/processor.h
@@ -369,8 +369,6 @@ static inline void flush_thread(void)
 {
 }
 
-unsigned long __get_wchan(struct task_struct *p);
-
 #define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
 			 THREAD_SIZE - 32 - sizeof(struct pt_regs))
 #define task_pt_regs(tsk) ((struct pt_regs *)__KSTK_TOS(tsk))
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -651,33 +651,6 @@ unsigned long unwind_stack(struct task_s
 }
 #endif
 
-/*
- * __get_wchan - a maintenance nightmare^W^Wpain in the ass ...
- */
-unsigned long __get_wchan(struct task_struct *task)
-{
-	unsigned long pc = 0;
-#ifdef CONFIG_KALLSYMS
-	unsigned long sp;
-	unsigned long ra = 0;
-#endif
-
-	if (!task_stack_page(task))
-		goto out;
-
-	pc = thread_saved_pc(task);
-
-#ifdef CONFIG_KALLSYMS
-	sp = task->thread.reg29 + schedule_mfi.frame_size;
-
-	while (in_sched_functions(pc))
-		pc = unwind_stack(task, &sp, pc, &ra);
-#endif
-
-out:
-	return pc;
-}
-
 unsigned long mips_stack_top(void)
 {
 	unsigned long top = TASK_SIZE & PAGE_MASK;
--- a/arch/nds32/include/asm/processor.h
+++ b/arch/nds32/include/asm/processor.h
@@ -83,8 +83,6 @@ extern struct task_struct *last_task_use
 /* Prepare to copy thread state - unlazy all lazy status */
 #define prepare_to_copy(tsk)	do { } while (0)
 
-unsigned long __get_wchan(struct task_struct *p);
-
 #define cpu_relax()			barrier()
 
 #define task_pt_regs(task) \
--- a/arch/nds32/kernel/process.c
+++ b/arch/nds32/kernel/process.c
@@ -232,26 +232,3 @@ int dump_fpu(struct pt_regs *regs, elf_f
 }
 
 EXPORT_SYMBOL(dump_fpu);
-
-unsigned long __get_wchan(struct task_struct *p)
-{
-	unsigned long fp, lr;
-	unsigned long stack_start, stack_end;
-	int count = 0;
-
-	if (IS_ENABLED(CONFIG_FRAME_POINTER)) {
-		stack_start = (unsigned long)end_of_stack(p);
-		stack_end = (unsigned long)task_stack_page(p) + THREAD_SIZE;
-
-		fp = thread_saved_fp(p);
-		do {
-			if (fp < stack_start || fp > stack_end)
-				return 0;
-			lr = ((unsigned long *)fp)[0];
-			if (!in_sched_functions(lr))
-				return lr;
-			fp = *(unsigned long *)(fp + 4);
-		} while (count++ < 16);
-	}
-	return 0;
-}
--- a/arch/openrisc/include/asm/processor.h
+++ b/arch/openrisc/include/asm/processor.h
@@ -73,7 +73,6 @@ struct thread_struct {
 
 void start_thread(struct pt_regs *regs, unsigned long nip, unsigned long sp);
 void release_thread(struct task_struct *);
-unsigned long __get_wchan(struct task_struct *p);
 
 #define cpu_relax()     barrier()
 
--- a/arch/openrisc/kernel/process.c
+++ b/arch/openrisc/kernel/process.c
@@ -263,9 +263,3 @@ void dump_elf_thread(elf_greg_t *dest, s
 	dest[35] = 0;
 }
 
-unsigned long __get_wchan(struct task_struct *p)
-{
-	/* TODO */
-
-	return 0;
-}
--- a/arch/parisc/include/asm/processor.h
+++ b/arch/parisc/include/asm/processor.h
@@ -273,8 +273,6 @@ struct mm_struct;
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-extern unsigned long __get_wchan(struct task_struct *p);
-
 #define KSTK_EIP(tsk)	((tsk)->thread.regs.iaoq[0])
 #define KSTK_ESP(tsk)	((tsk)->thread.regs.gr[30])
 
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -239,30 +239,6 @@ copy_thread(unsigned long clone_flags, u
 	return 0;
 }
 
-unsigned long
-__get_wchan(struct task_struct *p)
-{
-	struct unwind_frame_info info;
-	unsigned long ip;
-	int count = 0;
-
-	/*
-	 * These bracket the sleeping functions..
-	 */
-
-	unwind_frame_init_from_blocked_task(&info, p);
-	do {
-		if (unwind_once(&info) < 0)
-			return 0;
-		if (task_is_running(p))
-                        return 0;
-		ip = info.ip;
-		if (!in_sched_functions(ip))
-			return ip;
-	} while (count++ < MAX_UNWIND_ENTRIES);
-	return 0;
-}
-
 #ifdef CONFIG_64BIT
 void *dereference_function_descriptor(void *ptr)
 {
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -300,8 +300,6 @@ struct thread_struct {
 
 #define task_pt_regs(tsk)	((tsk)->thread.regs)
 
-unsigned long __get_wchan(struct task_struct *p);
-
 #define KSTK_EIP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->nip: 0)
 #define KSTK_ESP(tsk)  ((tsk)->thread.regs? (tsk)->thread.regs->gpr[1]: 0)
 
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -2111,43 +2111,6 @@ int validate_sp(unsigned long sp, struct
 
 EXPORT_SYMBOL(validate_sp);
 
-static unsigned long ___get_wchan(struct task_struct *p)
-{
-	unsigned long ip, sp;
-	int count = 0;
-
-	sp = p->thread.ksp;
-	if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD))
-		return 0;
-
-	do {
-		sp = *(unsigned long *)sp;
-		if (!validate_sp(sp, p, STACK_FRAME_OVERHEAD) ||
-		    task_is_running(p))
-			return 0;
-		if (count > 0) {
-			ip = ((unsigned long *)sp)[STACK_FRAME_LR_SAVE];
-			if (!in_sched_functions(ip))
-				return ip;
-		}
-	} while (count++ < 16);
-	return 0;
-}
-
-unsigned long __get_wchan(struct task_struct *p)
-{
-	unsigned long ret;
-
-	if (!try_get_task_stack(p))
-		return 0;
-
-	ret = ___get_wchan(p);
-
-	put_task_stack(p);
-
-	return ret;
-}
-
 static int kstack_depth_to_print = CONFIG_PRINT_STACK_DEPTH;
 
 void __no_sanitize_address show_stack(struct task_struct *tsk,
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -66,9 +66,6 @@ static inline void release_thread(struct
 {
 }
 
-extern unsigned long __get_wchan(struct task_struct *p);
-
-
 static inline void wait_for_interrupt(void)
 {
 	__asm__ __volatile__ ("wfi");
--- a/arch/riscv/kernel/stacktrace.c
+++ b/arch/riscv/kernel/stacktrace.c
@@ -118,27 +118,6 @@ void show_stack(struct task_struct *task
 	dump_backtrace(NULL, task, loglvl);
 }
 
-static bool save_wchan(void *arg, unsigned long pc)
-{
-	if (!in_sched_functions(pc)) {
-		unsigned long *p = arg;
-		*p = pc;
-		return false;
-	}
-	return true;
-}
-
-unsigned long __get_wchan(struct task_struct *task)
-{
-	unsigned long pc = 0;
-
-	if (!try_get_task_stack(task))
-		return 0;
-	walk_stackframe(task, NULL, save_wchan, &pc);
-	put_task_stack(task);
-	return pc;
-}
-
 #ifdef CONFIG_STACKTRACE
 
 noinline void arch_stack_walk(stack_trace_consume_fn consume_entry, void *cookie,
--- a/arch/s390/include/asm/processor.h
+++ b/arch/s390/include/asm/processor.h
@@ -192,7 +192,6 @@ static inline void release_thread(struct
 void guarded_storage_release(struct task_struct *tsk);
 void gs_load_bc_cb(struct pt_regs *regs);
 
-unsigned long __get_wchan(struct task_struct *p);
 #define task_pt_regs(tsk) ((struct pt_regs *) \
         (task_stack_page(tsk) + THREAD_SIZE) - 1)
 #define KSTK_EIP(tsk)	(task_pt_regs(tsk)->psw.addr)
--- a/arch/s390/kernel/process.c
+++ b/arch/s390/kernel/process.c
@@ -181,35 +181,6 @@ void execve_tail(void)
 	asm volatile("sfpc %0" : : "d" (0));
 }
 
-unsigned long __get_wchan(struct task_struct *p)
-{
-	struct unwind_state state;
-	unsigned long ip = 0;
-
-	if (!task_stack_page(p))
-		return 0;
-
-	if (!try_get_task_stack(p))
-		return 0;
-
-	unwind_for_each_frame(&state, p, NULL, 0) {
-		if (state.stack_info.type != STACK_TYPE_TASK) {
-			ip = 0;
-			break;
-		}
-
-		ip = unwind_get_return_address(&state);
-		if (!ip)
-			break;
-
-		if (!in_sched_functions(ip))
-			break;
-	}
-
-	put_task_stack(p);
-	return ip;
-}
-
 unsigned long arch_align_stack(unsigned long sp)
 {
 	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
--- a/arch/sh/include/asm/processor_32.h
+++ b/arch/sh/include/asm/processor_32.h
@@ -180,8 +180,6 @@ static inline void show_code(struct pt_r
 }
 #endif
 
-extern unsigned long __get_wchan(struct task_struct *p);
-
 #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->pc)
 #define KSTK_ESP(tsk)  (task_pt_regs(tsk)->regs[15])
 
--- a/arch/sh/kernel/process_32.c
+++ b/arch/sh/kernel/process_32.c
@@ -181,22 +181,3 @@ __switch_to(struct task_struct *prev, st
 
 	return prev;
 }
-
-unsigned long __get_wchan(struct task_struct *p)
-{
-	unsigned long pc;
-
-	/*
-	 * The same comment as on the Alpha applies here, too ...
-	 */
-	pc = thread_saved_pc(p);
-
-#ifdef CONFIG_FRAME_POINTER
-	if (in_sched_functions(pc)) {
-		unsigned long schedule_frame = (unsigned long)p->thread.sp;
-		return ((unsigned long *)schedule_frame)[21];
-	}
-#endif
-
-	return pc;
-}
--- a/arch/sparc/include/asm/processor_64.h
+++ b/arch/sparc/include/asm/processor_64.h
@@ -183,8 +183,6 @@ do { \
 /* Free all resources held by a thread. */
 #define release_thread(tsk)		do { } while (0)
 
-unsigned long __get_wchan(struct task_struct *task);
-
 #define task_pt_regs(tsk) (task_thread_info(tsk)->kregs)
 #define KSTK_EIP(tsk)  (task_pt_regs(tsk)->tpc)
 #define KSTK_ESP(tsk)  (task_pt_regs(tsk)->u_regs[UREG_FP])
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -662,31 +662,3 @@ int arch_dup_task_struct(struct task_str
 	*dst = *src;
 	return 0;
 }
-
-unsigned long __get_wchan(struct task_struct *task)
-{
-	unsigned long pc, fp, bias = 0;
-	struct thread_info *tp;
-	struct reg_window *rw;
-        unsigned long ret = 0;
-	int count = 0; 
-
-	tp = task_thread_info(task);
-	bias = STACK_BIAS;
-	fp = task_thread_info(task)->ksp + bias;
-
-	do {
-		if (!kstack_valid(tp, fp))
-			break;
-		rw = (struct reg_window *) fp;
-		pc = rw->ins[7];
-		if (!in_sched_functions(pc)) {
-			ret = pc;
-			goto out;
-		}
-		fp = rw->ins[6] + bias;
-	} while (++count < 16);
-
-out:
-	return ret;
-}
--- a/arch/um/include/asm/processor-generic.h
+++ b/arch/um/include/asm/processor-generic.h
@@ -106,6 +106,5 @@ extern struct cpuinfo_um boot_cpu_data;
 #define cache_line_size()	(boot_cpu_data.cache_alignment)
 
 #define KSTK_REG(tsk, reg) get_thread_reg(reg, &tsk->thread.switch_buf)
-extern unsigned long __get_wchan(struct task_struct *p);
 
 #endif
--- a/arch/um/kernel/process.c
+++ b/arch/um/kernel/process.c
@@ -364,38 +364,6 @@ unsigned long arch_align_stack(unsigned
 }
 #endif
 
-unsigned long __get_wchan(struct task_struct *p)
-{
-	unsigned long stack_page, sp, ip;
-	bool seen_sched = 0;
-
-	stack_page = (unsigned long) task_stack_page(p);
-	/* Bail if the process has no kernel stack for some reason */
-	if (stack_page == 0)
-		return 0;
-
-	sp = p->thread.switch_buf->JB_SP;
-	/*
-	 * Bail if the stack pointer is below the bottom of the kernel
-	 * stack for some reason
-	 */
-	if (sp < stack_page)
-		return 0;
-
-	while (sp < stack_page + THREAD_SIZE) {
-		ip = *((unsigned long *) sp);
-		if (in_sched_functions(ip))
-			/* Ignore everything until we're above the scheduler */
-			seen_sched = 1;
-		else if (kernel_text_address(ip) && seen_sched)
-			return ip;
-
-		sp += sizeof(unsigned long);
-	}
-
-	return 0;
-}
-
 int elf_core_copy_fpregs(struct task_struct *t, elf_fpregset_t *fpu)
 {
 	int cpu = current_thread_info()->cpu;
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -590,8 +590,6 @@ static inline void load_sp0(unsigned lon
 /* Free all resources held by a thread. */
 extern void release_thread(struct task_struct *);
 
-unsigned long __get_wchan(struct task_struct *p);
-
 /*
  * Generic CPUID function
  * clear %ecx since some cpus (Cyrix MII) do not set or clear %ecx
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -937,20 +937,6 @@ unsigned long arch_randomize_brk(struct
 	return randomize_page(mm->brk, 0x02000000);
 }
 
-/*
- * Called from fs/proc with a reference on @p to find the function
- * which called into schedule(). This needs to be done carefully
- * because the task might wake up and we might look at a stack
- * changing under us.
- */
-unsigned long __get_wchan(struct task_struct *p)
-{
-	unsigned long entry = 0;
-
-	stack_trace_save_tsk(p, &entry, 1, 0);
-	return entry;
-}
-
 long do_arch_prctl_common(struct task_struct *task, int option,
 			  unsigned long cpuid_enabled)
 {
--- a/arch/xtensa/include/asm/processor.h
+++ b/arch/xtensa/include/asm/processor.h
@@ -215,8 +215,6 @@ struct mm_struct;
 /* Free all resources held by a thread. */
 #define release_thread(thread) do { } while(0)
 
-extern unsigned long __get_wchan(struct task_struct *p);
-
 #define KSTK_EIP(tsk)		(task_pt_regs(tsk)->pc)
 #define KSTK_ESP(tsk)		(task_pt_regs(tsk)->areg[1])
 
--- a/arch/xtensa/kernel/process.c
+++ b/arch/xtensa/kernel/process.c
@@ -293,32 +293,3 @@ int copy_thread(unsigned long clone_flag
 	return 0;
 }
 
-
-/*
- * These bracket the sleeping functions..
- */
-
-unsigned long __get_wchan(struct task_struct *p)
-{
-	unsigned long sp, pc;
-	unsigned long stack_page = (unsigned long) task_stack_page(p);
-	int count = 0;
-
-	sp = p->thread.sp;
-	pc = MAKE_PC_FROM_RA(p->thread.ra, p->thread.sp);
-
-	do {
-		if (sp < stack_page + sizeof(struct task_struct) ||
-		    sp >= (stack_page + THREAD_SIZE) ||
-		    pc == 0)
-			return 0;
-		if (!in_sched_functions(pc))
-			return pc;
-
-		/* Stack layout: sp-4: ra, sp-3: sp' */
-
-		pc = MAKE_PC_FROM_RA(SPILL_SLOT(sp, 0), sp);
-		sp = SPILL_SLOT(sp, 1);
-	} while (count++ < 16);
-	return 0;
-}
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1962,6 +1962,21 @@ bool sched_task_on_rq(struct task_struct
 	return task_on_rq_queued(p);
 }
 
+#ifdef CONFIG_STACKTRACE
+static unsigned long __get_wchan(struct task_struct *p)
+{
+	unsigned long entry = 0;
+
+	stack_trace_save_tsk(p, &entry, 1, 0);
+
+	return entry;
+}
+#endif
+
+/*
+ * Called from fs/proc with a reference on @p to find the function
+ * which called into schedule().
+ */
 unsigned long get_wchan(struct task_struct *p)
 {
 	unsigned long ip = 0;
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1531,13 +1531,8 @@ config DEBUG_IRQFLAGS
 	  are enabled.
 
 config STACKTRACE
-	bool "Stack backtrace support"
+	def_bool y
 	depends on STACKTRACE_SUPPORT
-	help
-	  This option causes the kernel to create a /proc/pid/stack for
-	  every process, showing its current stack trace.
-	  It is also used by various kernel debugging features that require
-	  stack trace generation.
 
 config WARN_ALL_UNSEEDED_RANDOM
 	bool "Warn for all uses of unseeded randomness"

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked
  2021-09-30  8:28     ` Peter Zijlstra
@ 2021-09-30  8:29       ` Peter Zijlstra
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2021-09-30  8:29 UTC (permalink / raw)
  To: Kees Cook
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, kernel test robot, Vito Caputo,
	Jann Horn, Andrew Morton, Christian Brauner, Anand K Mistry,
	Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, Josh Poimboeuf, linux-kernel,
	linux-fsdevel, linux-hardening, x86

On Thu, Sep 30, 2021 at 10:28:37AM +0200, Peter Zijlstra wrote:
> On Thu, Sep 30, 2021 at 10:27:07AM +0200, Peter Zijlstra wrote:
> > On Wed, Sep 29, 2021 at 03:02:14PM -0700, Kees Cook wrote:
> > > Having a stable wchan means the process must be blocked, and must stay
> > > that way while the stack unwinding is performed.
> > 
> > How's this instead?
> > 
> On top of which we can do..

But that then leads to..

---
Subject: arch: Fix STACKTRACE_SUPPORT
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu Sep 30 10:21:15 CEST 2021

A few archs got save_stack_trace_tsk() vs in_sched_functions() wrong.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/csky/kernel/stacktrace.c  |    7 ++++++-
 arch/mips/kernel/stacktrace.c  |   27 ++++++++++++++++-----------
 arch/nds32/kernel/stacktrace.c |   20 +++++++++++---------
 3 files changed, 33 insertions(+), 21 deletions(-)

--- a/arch/csky/kernel/stacktrace.c
+++ b/arch/csky/kernel/stacktrace.c
@@ -132,12 +132,17 @@ static bool save_trace(unsigned long pc,
 	return __save_trace(pc, arg, false);
 }
 
+static bool save_trace_nosched(unsigned long pc, void *arg)
+{
+	return __save_trace(pc, arg, true);
+}
+
 /*
  * Save stack-backtrace addresses into a stack_trace buffer.
  */
 void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
 {
-	walk_stackframe(tsk, NULL, save_trace, trace);
+	walk_stackframe(tsk, NULL, save_trace_nosched, trace);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
 
--- a/arch/mips/kernel/stacktrace.c
+++ b/arch/mips/kernel/stacktrace.c
@@ -66,16 +66,7 @@ static void save_context_stack(struct st
 #endif
 }
 
-/*
- * Save stack-backtrace addresses into a stack_trace buffer.
- */
-void save_stack_trace(struct stack_trace *trace)
-{
-	save_stack_trace_tsk(current, trace);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace);
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+void __save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace, bool savesched)
 {
 	struct pt_regs dummyregs;
 	struct pt_regs *regs = &dummyregs;
@@ -88,6 +79,20 @@ void save_stack_trace_tsk(struct task_st
 		regs->cp0_epc = tsk->thread.reg31;
 	} else
 		prepare_frametrace(regs);
-	save_context_stack(trace, tsk, regs, tsk == current);
+	save_context_stack(trace, tsk, regs, savesched);
+}
+
+/*
+ * Save stack-backtrace addresses into a stack_trace buffer.
+ */
+void save_stack_trace(struct stack_trace *trace)
+{
+	__save_stack_trace_tsk(current, trace, true);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	__save_stack_trace_tsk(tsk, trace, false);
 }
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);
--- a/arch/nds32/kernel/stacktrace.c
+++ b/arch/nds32/kernel/stacktrace.c
@@ -6,13 +6,7 @@
 #include <linux/stacktrace.h>
 #include <linux/ftrace.h>
 
-void save_stack_trace(struct stack_trace *trace)
-{
-	save_stack_trace_tsk(current, trace);
-}
-EXPORT_SYMBOL_GPL(save_stack_trace);
-
-void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+static void __save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace, bool savesched)
 {
 	unsigned long *fpn;
 	int skip = trace->skip;
@@ -21,10 +15,8 @@ void save_stack_trace_tsk(struct task_st
 
 	if (tsk == current) {
 		__asm__ __volatile__("\tori\t%0, $fp, #0\n":"=r"(fpn));
-		savesched = 1;
 	} else {
 		fpn = (unsigned long *)thread_saved_fp(tsk);
-		savesched = 0;
 	}
 
 	while (!kstack_end(fpn) && !((unsigned long)fpn & 0x3)
@@ -50,4 +42,14 @@ void save_stack_trace_tsk(struct task_st
 		fpn = (unsigned long *)fpp;
 	}
 }
+void save_stack_trace(struct stack_trace *trace)
+{
+	__save_stack_trace_tsk(current, trace, true);
+}
+EXPORT_SYMBOL_GPL(save_stack_trace);
+
+void save_stack_trace_tsk(struct task_struct *tsk, struct stack_trace *trace)
+{
+	__save_stack_trace_tsk(tsk, trace, false);
+}
 EXPORT_SYMBOL_GPL(save_stack_trace_tsk);

^ permalink raw reply	[flat|nested] 15+ messages in thread
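
The get_wchan() wrapper discussed above (patch 2/6, "sched: Add wrapper for
get_wchan() to keep task blocked") is not reproduced in this part of the
thread. As a rough sketch only, reconstructed from the discussion rather than
quoted from the posted series, the idea is a generic get_wchan() in
kernel/sched/core.c that pins the task via its pi_lock and only walks the
stack while the task is verifiably off the runqueue; the exact field names,
state checks and barriers below are illustrative assumptions:

/*
 * Sketch only: assumes kernel/sched/ context (<linux/sched.h>).  Gate the
 * unwind on the task staying blocked, then defer to the STACKTRACE-based
 * __get_wchan() helper shown in the diff earlier in the thread.
 */
unsigned long get_wchan(struct task_struct *p)
{
	unsigned long ip = 0;
	unsigned int state;

	if (!p || p == current)
		return 0;

	/* Only report a wchan if the task is blocked and stays blocked. */
	raw_spin_lock_irq(&p->pi_lock);
	state = READ_ONCE(p->__state);
	smp_rmb(); /* pairs with the barriers in try_to_wake_up() */
	if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
		ip = __get_wchan(p);
	raw_spin_unlock_irq(&p->pi_lock);

	return ip;
}

With the walk gated like this, a wakeup cannot race the single-entry
stack_trace_save_tsk() lookup, which is what makes the reported value
trustworthy for /proc/$pid/wchan.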

* Re: [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback
  2021-09-30  1:01 ` [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Josh Poimboeuf
@ 2021-09-30  8:40   ` Peter Zijlstra
  2021-09-30 15:18     ` Kees Cook
  0 siblings, 1 reply; 15+ messages in thread
From: Peter Zijlstra @ 2021-09-30  8:40 UTC (permalink / raw)
  To: Josh Poimboeuf
  Cc: Kees Cook, kernel test robot, Vito Caputo, Jann Horn,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Andrew Morton, Christian Brauner,
	Anand K Mistry, Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, linux-kernel, linux-fsdevel,
	linux-hardening, x86

On Wed, Sep 29, 2021 at 06:01:57PM -0700, Josh Poimboeuf wrote:

> - Should we use a similar sched wrapper for /proc/$pid/stack to make its
>   raciness go away?

Alternatively, can we make /stack go away? AFAICT the semantics of that
are far worse in that it wants the actual kernel stack of a task,
blocked or not, which is a total pain in the arse (not to mention a
giant infoleak and side-channel).

> - At the risk of triggering a much larger patch set, I suspect
>   get_wchan() can be made generic ;-)  It's just a glorified wrapper
>   around stack_trace_save_tsk().

Done that for you :-)

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 6/6] leaking_addresses: Always print a trailing newline
  2021-09-29 22:02 ` [PATCH v2 6/6] leaking_addresses: Always print a trailing newline Kees Cook
@ 2021-09-30 14:37   ` Tycho Andersen
  0 siblings, 0 replies; 15+ messages in thread
From: Tycho Andersen @ 2021-09-30 14:37 UTC (permalink / raw)
  To: Kees Cook
  Cc: Peter Zijlstra, Tobin C. Harding, linux-hardening,
	kernel test robot, Vito Caputo, Jann Horn, Ingo Molnar,
	Juri Lelli, Vincent Guittot, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
	Andrew Morton, Christian Brauner, Anand K Mistry, Kenta.Tada,
	Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Thomas Gleixner,
	Borislav Petkov, H. Peter Anvin, Mark Rutland, Jens Axboe,
	Stefan Metzmacher, Lai Jiangshan, Andy Lutomirski, Dave Hansen,
	Eric W. Biederman, Ohhoon Kwon, Kalesh Singh, YiFei Zhu,
	Josh Poimboeuf, linux-kernel, linux-fsdevel, x86

On Wed, Sep 29, 2021 at 03:02:18PM -0700, Kees Cook wrote:
> For files that lack trailing newlines and match a leaking address (e.g.
> wchan[1]), the leaking_addresses.pl report would run together with the
> next line, making things look corrupted.
> 
> Unconditionally remove the newline on input, and write it back out on
> output.
> 
> [1] https://lore.kernel.org/all/20210103142726.GC30643@xsang-OptiPlex-9020/
> 
> Cc: "Tobin C. Harding" <me@tobin.cc>
> Cc: Tycho Andersen <tycho@tycho.pizza>

Acked-by: Tycho Andersen <tycho@tycho.pizza>

Thanks!

> Cc: linux-hardening@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  scripts/leaking_addresses.pl | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/scripts/leaking_addresses.pl b/scripts/leaking_addresses.pl
> index b2d8b8aa2d99..8f636a23bc3f 100755
> --- a/scripts/leaking_addresses.pl
> +++ b/scripts/leaking_addresses.pl
> @@ -455,8 +455,9 @@ sub parse_file
>  
>  	open my $fh, "<", $file or return;
>  	while ( <$fh> ) {
> +		chomp;
>  		if (may_leak_address($_)) {
> -			print $file . ': ' . $_;
> +			printf("$file: $_\n");
>  		}
>  	}
>  	close $fh;
> -- 
> 2.30.2
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback
  2021-09-30  8:40   ` Peter Zijlstra
@ 2021-09-30 15:18     ` Kees Cook
  2021-09-30 18:39       ` Peter Zijlstra
  0 siblings, 1 reply; 15+ messages in thread
From: Kees Cook @ 2021-09-30 15:18 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Josh Poimboeuf, kernel test robot, Vito Caputo, Jann Horn,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Andrew Morton, Christian Brauner,
	Anand K Mistry, Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, linux-kernel, linux-fsdevel,
	linux-hardening, x86

On Thu, Sep 30, 2021 at 10:40:11AM +0200, Peter Zijlstra wrote:
> On Wed, Sep 29, 2021 at 06:01:57PM -0700, Josh Poimboeuf wrote:
> 
> > - Should we use a similar sched wrapper for /proc/$pid/stack to make its
> >   raciness go away?
> 
> Alternatively, can we make /stack go away? AFAICT the semantics of that
> are far worse in that it wants the actual kernel stack of a task,
> blocked or not, which is a total pain in the arse (not to mention a
> giant infoleak and side-channel).
> 
> > - At the risk of triggering a much larger patch set, I suspect
> >   get_wchan() can be made generic ;-)  It's just a glorified wrapper
> >   around stack_trace_save_tsk().
> 
> Done that for you :-)

Thanks! I wasn't sure the renaming was worth the churn, but the other
cleanups make it much nicer. :)

Do you want me to re-collect the resulting series, or do you have a tree
already for -tip for this?

-Kees

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback
  2021-09-30 15:18     ` Kees Cook
@ 2021-09-30 18:39       ` Peter Zijlstra
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2021-09-30 18:39 UTC (permalink / raw)
  To: Kees Cook
  Cc: Josh Poimboeuf, kernel test robot, Vito Caputo, Jann Horn,
	Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Andrew Morton, Christian Brauner,
	Anand K Mistry, Kenta.Tada, Alexey Gladkov, Michael Weiß,
	Michal Hocko, Helge Deller, Qi Zheng, Tobin C. Harding,
	Tycho Andersen, Thomas Gleixner, Borislav Petkov, H. Peter Anvin,
	Mark Rutland, Jens Axboe, Stefan Metzmacher, Lai Jiangshan,
	Andy Lutomirski, Dave Hansen, Eric W. Biederman, Ohhoon Kwon,
	Kalesh Singh, YiFei Zhu, linux-kernel, linux-fsdevel,
	linux-hardening, x86

On Thu, Sep 30, 2021 at 08:18:34AM -0700, Kees Cook wrote:
> Do you want me to re-collect the resulting series, or do you have a tree
> already for -tip for this?

I stuck it here:

  https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/log/?h=sched/wchan

I'm waiting on the robots to tell me I wrecked the world... :-)

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-09-30 18:40 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-29 22:02 [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Kees Cook
2021-09-29 22:02 ` [PATCH v2 1/6] Revert "proc/wchan: use printk format instead of lookup_symbol_name()" Kees Cook
2021-09-29 22:02 ` [PATCH v2 2/6] sched: Add wrapper for get_wchan() to keep task blocked Kees Cook
2021-09-30  8:27   ` Peter Zijlstra
2021-09-30  8:28     ` Peter Zijlstra
2021-09-30  8:29       ` Peter Zijlstra
2021-09-29 22:02 ` [PATCH v2 3/6] proc: Use task_is_running() for wchan in /proc/$pid/stat Kees Cook
2021-09-29 22:02 ` [PATCH v2 4/6] proc: Only report /proc/$pid/wchan when process is blocked Kees Cook
2021-09-29 22:02 ` [PATCH v2 5/6] x86: Fix get_wchan() to support the ORC unwinder Kees Cook
2021-09-29 22:02 ` [PATCH v2 6/6] leaking_addresses: Always print a trailing newline Kees Cook
2021-09-30 14:37   ` Tycho Andersen
2021-09-30  1:01 ` [PATCH v2 0/6] wchan: Fix ORC support and leaky fallback Josh Poimboeuf
2021-09-30  8:40   ` Peter Zijlstra
2021-09-30 15:18     ` Kees Cook
2021-09-30 18:39       ` Peter Zijlstra
