From: Mark Rutland <mark.rutland@arm.com>
To: Kees Cook <keescook@chromium.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
x86@kernel.org, linux-kernel@vger.kernel.org,
juri.lelli@redhat.com, vincent.guittot@linaro.org,
dietmar.eggemann@arm.com, rostedt@goodmis.org,
bsegall@google.com, mgorman@suse.de, bristot@redhat.com,
akpm@linux-foundation.org, zhengqi.arch@bytedance.com,
linux@armlinux.org.uk, catalin.marinas@arm.com, will@kernel.org,
mpe@ellerman.id.au, paul.walmsley@sifive.com, palmer@dabbelt.com,
hca@linux.ibm.com, gor@linux.ibm.com, borntraeger@de.ibm.com,
linux-arch@vger.kernel.org, ardb@kernel.org
Subject: Re: [PATCH 2/7] stacktrace,sched: Make stack_trace_save_tsk() more robust
Date: Fri, 22 Oct 2021 17:54:31 +0100 [thread overview]
Message-ID: <20211022165431.GF86184@C02TD0UTHF1T.local> (raw)
In-Reply-To: <202110220919.46F58199D@keescook>
On Fri, Oct 22, 2021 at 09:25:02AM -0700, Kees Cook wrote:
> On Fri, Oct 22, 2021 at 05:09:35PM +0200, Peter Zijlstra wrote:
> > Recent patches to get_wchan() made it more robust by only doing the
> > unwind when the task was blocked and serialized against wakeups.
> >
> > Extract this functionality as a simpler companion to task_call_func()
> > named task_try_func() that really only cares about blocked tasks. Then
> > employ this new function to implement the same robustness for
> > ARCH_STACKWALK based stack_trace_save_tsk().
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> > include/linux/wait.h | 1
> > kernel/sched/core.c | 62 ++++++++++++++++++++++++++++++++++++++++++++-------
> > kernel/stacktrace.c | 13 ++++++----
> > 3 files changed, 63 insertions(+), 13 deletions(-)
> >
> > --- a/include/linux/wait.h
> > +++ b/include/linux/wait.h
> > @@ -1162,5 +1162,6 @@ int autoremove_wake_function(struct wait
> >
> > typedef int (*task_call_f)(struct task_struct *p, void *arg);
> > extern int task_call_func(struct task_struct *p, task_call_f func, void *arg);
> > +extern int task_try_func(struct task_struct *p, task_call_f func, void *arg);
> >
> > #endif /* _LINUX_WAIT_H */
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -1966,21 +1966,21 @@ bool sched_task_on_rq(struct task_struct
> > return task_on_rq_queued(p);
> > }
> >
> > +static int try_get_wchan(struct task_struct *p, void *arg)
> > +{
> > + unsigned long *wchan = arg;
> > +	unsigned long *wchan = arg;
> > + return 0;
> > +}
> > +
> > unsigned long get_wchan(struct task_struct *p)
> > {
> > unsigned long ip = 0;
> > - unsigned int state;
> >
> > if (!p || p == current)
> > return 0;
> >
> > - /* Only get wchan if task is blocked and we can keep it that way. */
> > - raw_spin_lock_irq(&p->pi_lock);
> > - state = READ_ONCE(p->__state);
> > - smp_rmb(); /* see try_to_wake_up() */
> > - if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
> > - ip = __get_wchan(p);
> > - raw_spin_unlock_irq(&p->pi_lock);
> > + task_try_func(p, try_get_wchan, &ip);
> >
> > return ip;
> > }
> > @@ -4184,6 +4184,52 @@ int task_call_func(struct task_struct *p
> > return ret;
> > }
> >
> > +/*
> > + * task_try_func - Invoke a function on task in blocked state
> > + * @p: Process for which the function is to be invoked
> > + * @func: Function to invoke
> > + * @arg: Argument to function
> > + *
> > + * Fix the task in a blocked state, when possible. And if so, invoke @func on it.
> > + *
> > + * Returns:
> > + * -EBUSY or whatever @func returns
> > + */
> > +int task_try_func(struct task_struct *p, task_call_f func, void *arg)
> > +{
> > + unsigned long flags;
> > + unsigned int state;
> > + int ret = -EBUSY;
> > +
> > + raw_spin_lock_irqsave(&p->pi_lock, flags);
> > +
> > + state = READ_ONCE(p->__state);
> > +
> > + /*
> > + * Ensure we load p->on_rq after p->__state, otherwise it would be
> > + * possible to, falsely, observe p->on_rq == 0.
> > + *
> > + * See try_to_wake_up() for a longer comment.
> > + */
> > + smp_rmb();
> > +
> > + /*
> > + * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
> > + * the task is blocked. Make sure to check @state since ttwu() can drop
> > + * locks at the end, see ttwu_queue_wakelist().
> > + */
> > + if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq) {
> > + /*
> > +		 * The task is blocked and we're holding off wakeups. For any
> > + * of the other task states, see task_call_func().
> > + */
> > + ret = func(p, arg);
> > + }
> > +
> > + raw_spin_unlock_irqrestore(&p->pi_lock, flags);
> > + return ret;
> > +}
> > +
> > /**
> > * wake_up_process - Wake up a specific process
> > * @p: The process to be woken up.
> > --- a/kernel/stacktrace.c
> > +++ b/kernel/stacktrace.c
> > @@ -123,6 +123,13 @@ unsigned int stack_trace_save(unsigned l
> > }
> > EXPORT_SYMBOL_GPL(stack_trace_save);
> >
> > +static int try_arch_stack_walk_tsk(struct task_struct *tsk, void *arg)
> > +{
> > + stack_trace_consume_fn consume_entry = stack_trace_consume_entry_nosched;
> > + arch_stack_walk(consume_entry, arg, tsk, NULL);
> > + return 0;
> > +}
> > +
> > /**
> > * stack_trace_save_tsk - Save a task stack trace into a storage array
> > * @task: The task to examine
> > @@ -135,7 +142,6 @@ EXPORT_SYMBOL_GPL(stack_trace_save);
> > unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
> > unsigned int size, unsigned int skipnr)
> > {
> > - stack_trace_consume_fn consume_entry = stack_trace_consume_entry_nosched;
> > struct stacktrace_cookie c = {
> > .store = store,
> > .size = size,
> > @@ -143,11 +149,8 @@ unsigned int stack_trace_save_tsk(struct
> > .skip = skipnr + (current == tsk),
> > };
> >
> > - if (!try_get_task_stack(tsk))
> > - return 0;
> > + task_try_func(tsk, try_arch_stack_walk_tsk, &c);
>
> Pardon my thin understanding of the scheduler, but I assume this change
> doesn't mean stack_trace_save_tsk() stops working for "current", right?
> In trying to answer this for myself, I couldn't convince myself what value
> current->__state would have here. Is it one of TASK_(UN)INTERRUPTIBLE?
Regardless of that, current->on_rq will be non-zero, so you're right that this
causes stack_trace_save_tsk() to not work for current, e.g.
| # cat /proc/self/stack
| # wc /proc/self/stack
| 0 0 0 /proc/self/stack
TBH, I think that (taking a step back from this issue in particular)
stack_trace_save_tsk() *shouldn't* work for current, and callers *should* be
forced to explicitly handle current separately from blocked tasks.
So we could fix this in the stacktrace code with:
| diff --git a/kernel/stacktrace.c b/kernel/stacktrace.c
| index a1cdbf8c3ef8..327af9ff2c55 100644
| --- a/kernel/stacktrace.c
| +++ b/kernel/stacktrace.c
| @@ -149,7 +149,10 @@ unsigned int stack_trace_save_tsk(struct task_struct *tsk, unsigned long *store,
| .skip = skipnr + (current == tsk),
| };
|
| - task_try_func(tsk, try_arch_stack_walk_tsk, &c);
| + if (tsk == current)
| + try_arch_stack_walk_tsk(tsk, &c);
| + else
| + task_try_func(tsk, try_arch_stack_walk_tsk, &c);
|
| return c.len;
| }
... and we could rename task_try_func() to blocked_task_try_func(), and
later push the distinction into higher-level callers.
Alternatively, we could do:
| diff --git a/kernel/sched/core.c b/kernel/sched/core.c
| index a8be6e135c57..cef9e35ecf2f 100644
| --- a/kernel/sched/core.c
| +++ b/kernel/sched/core.c
| @@ -4203,6 +4203,11 @@ int task_try_func(struct task_struct *p, task_call_f func, void *arg)
|
| raw_spin_lock_irqsave(&p->pi_lock, flags);
|
| + if (p == current) {
| + ret = func(p, arg);
| + goto out;
| + }
| +
| state = READ_ONCE(p->__state);
|
| /*
| @@ -4226,6 +4231,7 @@ int task_try_func(struct task_struct *p, task_call_f func, void *arg)
| ret = func(p, arg);
| }
|
| +out:
| raw_spin_unlock_irqrestore(&p->pi_lock, flags);
| return ret;
| }
... which perhaps is aligned with smp_call_function_single() and
generic_exec_single().
Thanks,
Mark.
Thread overview: 28+ messages
2021-10-22 15:09 [PATCH 0/7] arch: More wchan fixes Peter Zijlstra
2021-10-22 15:09 ` [PATCH 1/7] x86: Fix __get_wchan() for !STACKTRACE Peter Zijlstra
2021-10-22 16:25 ` Kees Cook
2021-10-26 19:16 ` [tip: sched/core] " tip-bot2 for Peter Zijlstra
2021-10-22 15:09 ` [PATCH 2/7] stacktrace,sched: Make stack_trace_save_tsk() more robust Peter Zijlstra
2021-10-22 16:25 ` Kees Cook
2021-10-22 16:45 ` Peter Zijlstra
2021-10-22 16:57 ` Mark Rutland
2021-10-22 16:54 ` Mark Rutland [this message]
2021-10-22 17:01 ` Peter Zijlstra
2021-10-25 20:38 ` Peter Zijlstra
2021-10-25 20:52 ` Kees Cook
2021-10-26 9:33 ` Mark Rutland
2021-10-25 16:27 ` Peter Zijlstra
2021-10-22 15:09 ` [PATCH 3/7] ARM: implement ARCH_STACKWALK Peter Zijlstra
2021-10-22 16:18 ` Kees Cook
2021-10-22 15:09 ` [PATCH 4/7] arch: Make ARCH_STACKWALK independent of STACKTRACE Peter Zijlstra
2021-10-22 16:18 ` Kees Cook
2021-10-22 16:36 ` Peter Zijlstra
2021-10-22 17:06 ` Mark Rutland
2021-10-22 15:09 ` [PATCH 5/7] powerpc, arm64: Mark __switch_to() as __sched Peter Zijlstra
2021-10-22 16:15 ` Kees Cook
2021-10-22 17:40 ` Mark Rutland
2021-10-22 15:09 ` [PATCH 6/7] arch: __get_wchan() || ARCH_STACKWALK Peter Zijlstra
2021-10-22 16:13 ` Kees Cook
2021-10-22 17:52 ` Mark Rutland
2021-10-22 15:09 ` [PATCH 7/7] selftests: proc: Make sure wchan works when it exists Peter Zijlstra
2021-10-22 15:27 ` [PATCH 0/7] arch: More wchan fixes Peter Zijlstra