* [PATCH v3 0/4] arm64: kprobes: Fix some bugs in arm64 kprobes
@ 2019-07-25 8:15 Masami Hiramatsu
2019-07-25 8:15 ` [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler Masami Hiramatsu
` (3 more replies)
0 siblings, 4 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2019-07-25 8:15 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon
Cc: mhiramat, linux-arm-kernel, linux-kernel, Naresh Kamboju,
Dan Rue, Matt Hart, Anders Roxell, Daniel Diaz
Hi,
Here are the v3 patches which fix kprobe bugs on arm64.
In this version I fixed some issues pointed out by James and added
Reviewed-by and Acked-by tags.
Background:
Naresh reported that ftracetest recently started crashing the kernel,
and I found there are 3 different bugs around the crash. In the v1
thread, we found another bug involving RCU and the debug exception.
- Kprobes on arm64 does not recover the pstate.D mask after single
stepping, so the debug flag status changes after a kprobe hits.
- Some symbols which are called from blacklisted functions are not
themselves blacklisted.
- The debug exception is not visible to RCU, so rcu_read_lock()
inside it causes a warning.
- The debug exception handlers on arm64 use rcu_read_lock(), but
it is not needed because interrupts are disabled.
This series includes fixes for the above bugs.
Thank you,
---
Masami Hiramatsu (4):
arm64: kprobes: Recover pstate.D in single-step exception handler
arm64: unwind: Prohibit probing on return_address()
arm64: Make debug exception handlers visible from RCU
arm64: Remove unneeded rcu_read_lock from debug handlers
arch/arm64/include/asm/daifflags.h | 2 ++
arch/arm64/kernel/debug-monitors.c | 14 +++++++------
arch/arm64/kernel/probes/kprobes.c | 39 +++++------------------------------
arch/arm64/kernel/return_address.c | 3 +++
arch/arm64/kernel/stacktrace.c | 3 +++
arch/arm64/mm/fault.c | 40 ++++++++++++++++++++++++++++++++++++
6 files changed, 61 insertions(+), 40 deletions(-)
--
Masami Hiramatsu (Linaro) <mhiramat@kernel.org>
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler
2019-07-25 8:15 [PATCH v3 0/4] arm64: kprobes: Fix some bugs in arm64 kprobes Masami Hiramatsu
@ 2019-07-25 8:15 ` Masami Hiramatsu
2019-08-01 12:16 ` Will Deacon
2019-07-25 8:16 ` [PATCH v3 2/4] arm64: unwind: Prohibit probing on return_address() Masami Hiramatsu
` (2 subsequent siblings)
3 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2019-07-25 8:15 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon
Cc: mhiramat, linux-arm-kernel, linux-kernel, Naresh Kamboju,
Dan Rue, Matt Hart, Anders Roxell, Daniel Diaz, James Morse
kprobes manipulates the interrupted PSTATE for single step, and
does not restore it. Thus, if we put a kprobe where pstate.D
(debug) is masked, the mask is cleared after the kprobe hits.
Moreover, in the most complicated case, this can lead to a kernel
crash with the below message when a nested kprobe hits.
[ 152.118921] Unexpected kernel single-step exception at EL1
When the 1st kprobe hits, do_debug_exception() is called.
At this point, the debug exception (= pstate.D) must be masked (= 1).
But if another kprobe hits before the single-step of the first kprobe
(e.g. inside the user pre_handler), it unmasks the debug exception
(pstate.D = 0) and returns.
Then, when the 1st kprobe sets up the single-step, it saves the current
DAIF, masks DAIF, enables single-step, and restores DAIF.
However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the
single-step exception happens soon after restoring DAIF.
This was introduced by commit 7419333fa15e ("arm64: kprobe:
Always clear pstate.D in breakpoint exception handler").
To solve this issue, save all DAIF bits and restore them
after single stepping.
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Fixes: 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
Changes in v3:
- Update patch description
- move PSR_DAIF_MASK in daifflags.h
Changes in v2:
- Save and restore all DAIF flags.
- Operate pstate directly and remove spsr_set_debug_flag().
---
arch/arm64/include/asm/daifflags.h | 2 ++
arch/arm64/kernel/probes/kprobes.c | 39 +++++-------------------------------
2 files changed, 7 insertions(+), 34 deletions(-)
diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 987926ed535e..063c964af705 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -13,6 +13,8 @@
#define DAIF_PROCCTX 0
#define DAIF_PROCCTX_NOIRQ PSR_I_BIT
#define DAIF_ERRCTX (PSR_I_BIT | PSR_A_BIT)
+#define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+
/* mask/save/unmask/restore all exceptions, including interrupts. */
static inline void local_daif_mask(void)
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index bd5dfffca272..bf2259651b67 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -167,33 +167,6 @@ static void __kprobes set_current_kprobe(struct kprobe *p)
__this_cpu_write(current_kprobe, p);
}
-/*
- * When PSTATE.D is set (masked), then software step exceptions can not be
- * generated.
- * SPSR's D bit shows the value of PSTATE.D immediately before the
- * exception was taken. PSTATE.D is set while entering into any exception
- * mode, however software clears it for any normal (none-debug-exception)
- * mode in the exception entry. Therefore, when we are entering into kprobe
- * breakpoint handler from any normal mode then SPSR.D bit is already
- * cleared, however it is set when we are entering from any debug exception
- * mode.
- * Since we always need to generate single step exception after a kprobe
- * breakpoint exception therefore we need to clear it unconditionally, when
- * we become sure that the current breakpoint exception is for kprobe.
- */
-static void __kprobes
-spsr_set_debug_flag(struct pt_regs *regs, int mask)
-{
- unsigned long spsr = regs->pstate;
-
- if (mask)
- spsr |= PSR_D_BIT;
- else
- spsr &= ~PSR_D_BIT;
-
- regs->pstate = spsr;
-}
-
/*
* Interrupts need to be disabled before single-step mode is set, and not
* reenabled until after single-step mode ends.
@@ -205,17 +178,17 @@ spsr_set_debug_flag(struct pt_regs *regs, int mask)
static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
- kcb->saved_irqflag = regs->pstate;
+ kcb->saved_irqflag = regs->pstate & DAIF_MASK;
regs->pstate |= PSR_I_BIT;
+ /* Unmask PSTATE.D for enabling software step exceptions. */
+ regs->pstate &= ~PSR_D_BIT;
}
static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
- if (kcb->saved_irqflag & PSR_I_BIT)
- regs->pstate |= PSR_I_BIT;
- else
- regs->pstate &= ~PSR_I_BIT;
+ regs->pstate &= ~DAIF_MASK;
+ regs->pstate |= kcb->saved_irqflag;
}
static void __kprobes
@@ -252,8 +225,6 @@ static void __kprobes setup_singlestep(struct kprobe *p,
set_ss_context(kcb, slot); /* mark pending ss */
- spsr_set_debug_flag(regs, 0);
-
/* IRQs and single stepping do not mix well. */
kprobes_save_local_irqflag(kcb, regs);
kernel_enable_single_step(regs);
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v3 2/4] arm64: unwind: Prohibit probing on return_address()
2019-07-25 8:15 [PATCH v3 0/4] arm64: kprobes: Fix some bugs in arm64 kprobes Masami Hiramatsu
2019-07-25 8:15 ` [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler Masami Hiramatsu
@ 2019-07-25 8:16 ` Masami Hiramatsu
2019-07-25 8:16 ` [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU Masami Hiramatsu
2019-07-25 8:16 ` [PATCH v3 4/4] arm64: Remove unneeded rcu_read_lock from debug handlers Masami Hiramatsu
3 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2019-07-25 8:16 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon
Cc: mhiramat, linux-arm-kernel, linux-kernel, Naresh Kamboju,
Dan Rue, Matt Hart, Anders Roxell, Daniel Diaz
Prohibit probing on return_address() and the subroutines which are
called from it, since it is invoked from trace_hardirqs_off(),
which is also kprobe-blacklisted.
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
Changes in v3:
- Fix to use NOKPROBE_SYMBOL() for save_return_addr instead
of nokprobe_inline.
---
arch/arm64/kernel/return_address.c | 3 +++
arch/arm64/kernel/stacktrace.c | 3 +++
2 files changed, 6 insertions(+)
diff --git a/arch/arm64/kernel/return_address.c b/arch/arm64/kernel/return_address.c
index b21cba90f82d..491184a9f081 100644
--- a/arch/arm64/kernel/return_address.c
+++ b/arch/arm64/kernel/return_address.c
@@ -8,6 +8,7 @@
#include <linux/export.h>
#include <linux/ftrace.h>
+#include <linux/kprobes.h>
#include <asm/stack_pointer.h>
#include <asm/stacktrace.h>
@@ -29,6 +30,7 @@ static int save_return_addr(struct stackframe *frame, void *d)
return 0;
}
}
+NOKPROBE_SYMBOL(save_return_addr);
void *return_address(unsigned int level)
{
@@ -52,3 +54,4 @@ void *return_address(unsigned int level)
return NULL;
}
EXPORT_SYMBOL_GPL(return_address);
+NOKPROBE_SYMBOL(return_address);
diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 62d395151abe..cd7dab54d17b 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -7,6 +7,7 @@
#include <linux/kernel.h>
#include <linux/export.h>
#include <linux/ftrace.h>
+#include <linux/kprobes.h>
#include <linux/sched.h>
#include <linux/sched/debug.h>
#include <linux/sched/task_stack.h>
@@ -73,6 +74,7 @@ int notrace unwind_frame(struct task_struct *tsk, struct stackframe *frame)
return 0;
}
+NOKPROBE_SYMBOL(unwind_frame);
void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
int (*fn)(struct stackframe *, void *), void *data)
@@ -87,6 +89,7 @@ void notrace walk_stackframe(struct task_struct *tsk, struct stackframe *frame,
break;
}
}
+NOKPROBE_SYMBOL(walk_stackframe);
#ifdef CONFIG_STACKTRACE
struct stack_trace_data {
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU
2019-07-25 8:15 [PATCH v3 0/4] arm64: kprobes: Fix some bugs in arm64 kprobes Masami Hiramatsu
2019-07-25 8:15 ` [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler Masami Hiramatsu
2019-07-25 8:16 ` [PATCH v3 2/4] arm64: unwind: Prohibit probing on return_address() Masami Hiramatsu
@ 2019-07-25 8:16 ` Masami Hiramatsu
2019-07-31 17:26 ` Will Deacon
2019-07-25 8:16 ` [PATCH v3 4/4] arm64: Remove unneeded rcu_read_lock from debug handlers Masami Hiramatsu
3 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2019-07-25 8:16 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon
Cc: mhiramat, linux-arm-kernel, linux-kernel, Naresh Kamboju,
Dan Rue, Matt Hart, Anders Roxell, Daniel Diaz,
Paul E . McKenney
Make debug exceptions visible from RCU so that synchronize_rcu()
correctly tracks the debug exception handler.
This also introduces sanity checks for user-mode exceptions, like
x86's ist_enter()/ist_exit().
The debug exception can interrupt the idle task. For example, it warns
if we put a kprobe on a function called from the idle task, as below.
The warning message says that rcu_read_lock() caused this
problem, but it actually means that RCU has lost track of a context
which is already in NMI/IRQ.
/sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
/sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
/sys/kernel/debug/tracing # [ 135.122237]
[ 135.125035] =============================
[ 135.125310] WARNING: suspicious RCU usage
[ 135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
[ 135.125904] -----------------------------
[ 135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
[ 135.126839]
[ 135.126839] other info that might help us debug this:
[ 135.126839]
[ 135.127410]
[ 135.127410] RCU used illegally from idle CPU!
[ 135.127410] rcu_scheduler_active = 2, debug_locks = 1
[ 135.128114] RCU used illegally from extended quiescent state!
[ 135.128555] 1 lock held by swapper/0/0:
[ 135.128944] #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
[ 135.130499]
[ 135.130499] stack backtrace:
[ 135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
[ 135.131841] Hardware name: linux,dummy-virt (DT)
[ 135.132224] Call trace:
[ 135.132491] dump_backtrace+0x0/0x140
[ 135.132806] show_stack+0x24/0x30
[ 135.133133] dump_stack+0xc4/0x10c
[ 135.133726] lockdep_rcu_suspicious+0xf8/0x108
[ 135.134171] call_break_hook+0x170/0x178
[ 135.134486] brk_handler+0x28/0x68
[ 135.134792] do_debug_exception+0x90/0x150
[ 135.135051] el1_dbg+0x18/0x8c
[ 135.135260] default_idle_call+0x0/0x44
[ 135.135516] cpu_startup_entry+0x2c/0x30
[ 135.135815] rest_init+0x1b0/0x280
[ 135.136044] arch_call_rest_init+0x14/0x1c
[ 135.136305] start_kernel+0x4d4/0x500
[ 135.136597]
Making the debug exception visible to RCU fixes this warning.
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
Changes in v3:
- Make a comment for debug_exception_enter() clearer.
---
arch/arm64/mm/fault.c | 40 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 40 insertions(+)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9568c116ac7f..ed6c55c87fdc 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -777,6 +777,42 @@ void __init hook_debug_fault_code(int nr,
debug_fault_info[nr].name = name;
}
+/*
+ * In debug exception context, we explicitly disable preemption.
+ * This serves two purposes: it makes it much less likely that we would
+ * accidentally schedule in exception context and it will force a warning
+ * if we somehow manage to schedule by accident.
+ */
+static void debug_exception_enter(struct pt_regs *regs)
+{
+ if (user_mode(regs)) {
+ RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
+ } else {
+ /*
+ * We might have interrupted pretty much anything. In
+ * fact, if we're a debug exception, we can even interrupt
+ * NMI processing. We don't want this code to make in_nmi()
+ * return true, but we need to notify RCU.
+ */
+ rcu_nmi_enter();
+ }
+
+ preempt_disable();
+
+ /* This code is a bit fragile. Test it. */
+ RCU_LOCKDEP_WARN(!rcu_is_watching(), "exception_enter didn't work");
+}
+NOKPROBE_SYMBOL(debug_exception_enter);
+
+static void debug_exception_exit(struct pt_regs *regs)
+{
+ preempt_enable_no_resched();
+
+ if (!user_mode(regs))
+ rcu_nmi_exit();
+}
+NOKPROBE_SYMBOL(debug_exception_exit);
+
#ifdef CONFIG_ARM64_ERRATUM_1463225
DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
@@ -824,6 +860,8 @@ asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
if (interrupts_enabled(regs))
trace_hardirqs_off();
+ debug_exception_enter(regs);
+
if (user_mode(regs) && !is_ttbr0_addr(pc))
arm64_apply_bp_hardening();
@@ -832,6 +870,8 @@ asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
inf->sig, inf->code, (void __user *)pc, esr);
}
+ debug_exception_exit(regs);
+
if (interrupts_enabled(regs))
trace_hardirqs_on();
}
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v3 4/4] arm64: Remove unneeded rcu_read_lock from debug handlers
2019-07-25 8:15 [PATCH v3 0/4] arm64: kprobes: Fix some bugs in arm64 kprobes Masami Hiramatsu
` (2 preceding siblings ...)
2019-07-25 8:16 ` [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU Masami Hiramatsu
@ 2019-07-25 8:16 ` Masami Hiramatsu
3 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2019-07-25 8:16 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon
Cc: mhiramat, linux-arm-kernel, linux-kernel, Naresh Kamboju,
Dan Rue, Matt Hart, Anders Roxell, Daniel Diaz,
Paul E . McKenney
Remove rcu_read_lock()/rcu_read_unlock() from the debug exception
handlers, since those handlers are not preemptible and interrupts
are off.
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
arch/arm64/kernel/debug-monitors.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index f8719bd30850..48222a4760c2 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -207,16 +207,16 @@ static int call_step_hook(struct pt_regs *regs, unsigned int esr)
list = user_mode(regs) ? &user_step_hook : &kernel_step_hook;
- rcu_read_lock();
-
+ /*
+ * Since the single-step exception disables interrupts, this function
+ * is not preemptible, and we can safely use the RCU list here.
+ */
list_for_each_entry_rcu(hook, list, node) {
retval = hook->fn(regs, esr);
if (retval == DBG_HOOK_HANDLED)
break;
}
- rcu_read_unlock();
-
return retval;
}
NOKPROBE_SYMBOL(call_step_hook);
@@ -305,14 +305,16 @@ static int call_break_hook(struct pt_regs *regs, unsigned int esr)
list = user_mode(regs) ? &user_break_hook : &kernel_break_hook;
- rcu_read_lock();
+ /*
+ * Since the brk exception disables interrupts, this function
+ * is not preemptible, and we can safely use the RCU list here.
+ */
list_for_each_entry_rcu(hook, list, node) {
unsigned int comment = esr & ESR_ELx_BRK64_ISS_COMMENT_MASK;
if ((comment & ~hook->mask) == hook->imm)
fn = hook->fn;
}
- rcu_read_unlock();
return fn ? fn(regs, esr) : DBG_HOOK_ERROR;
}
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU
2019-07-25 8:16 ` [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU Masami Hiramatsu
@ 2019-07-31 17:26 ` Will Deacon
2019-08-01 5:32 ` Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Will Deacon @ 2019-07-31 17:26 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Catalin Marinas, Will Deacon, Dan Rue, Daniel Diaz,
Anders Roxell, Naresh Kamboju, linux-kernel, Matt Hart,
Paul E . McKenney, linux-arm-kernel
On Thu, Jul 25, 2019 at 05:16:15PM +0900, Masami Hiramatsu wrote:
> Make debug exceptions visible from RCU so that synchronize_rcu()
> correctly tracks the debug exception handler.
>
> This also introduces sanity checks for user-mode exceptions, like
> x86's ist_enter()/ist_exit().
>
> The debug exception can interrupt the idle task. For example, it warns
> if we put a kprobe on a function called from the idle task, as below.
> The warning message says that rcu_read_lock() caused this
> problem, but it actually means that RCU has lost track of a context
> which is already in NMI/IRQ.
>
> /sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
> /sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
> /sys/kernel/debug/tracing # [ 135.122237]
> [ 135.125035] =============================
> [ 135.125310] WARNING: suspicious RCU usage
> [ 135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
> [ 135.125904] -----------------------------
> [ 135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
> [ 135.126839]
> [ 135.126839] other info that might help us debug this:
> [ 135.126839]
> [ 135.127410]
> [ 135.127410] RCU used illegally from idle CPU!
> [ 135.127410] rcu_scheduler_active = 2, debug_locks = 1
> [ 135.128114] RCU used illegally from extended quiescent state!
> [ 135.128555] 1 lock held by swapper/0/0:
> [ 135.128944] #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
> [ 135.130499]
> [ 135.130499] stack backtrace:
> [ 135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
> [ 135.131841] Hardware name: linux,dummy-virt (DT)
> [ 135.132224] Call trace:
> [ 135.132491] dump_backtrace+0x0/0x140
> [ 135.132806] show_stack+0x24/0x30
> [ 135.133133] dump_stack+0xc4/0x10c
> [ 135.133726] lockdep_rcu_suspicious+0xf8/0x108
> [ 135.134171] call_break_hook+0x170/0x178
> [ 135.134486] brk_handler+0x28/0x68
> [ 135.134792] do_debug_exception+0x90/0x150
> [ 135.135051] el1_dbg+0x18/0x8c
> [ 135.135260] default_idle_call+0x0/0x44
> [ 135.135516] cpu_startup_entry+0x2c/0x30
> [ 135.135815] rest_init+0x1b0/0x280
> [ 135.136044] arch_call_rest_init+0x14/0x1c
> [ 135.136305] start_kernel+0x4d4/0x500
> [ 135.136597]
>
> Making the debug exception visible to RCU fixes this warning.
>
> Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> ---
> Changes in v3:
> - Make a comment for debug_exception_enter() clearer.
> ---
> arch/arm64/mm/fault.c | 40 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 40 insertions(+)
>
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index 9568c116ac7f..ed6c55c87fdc 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -777,6 +777,42 @@ void __init hook_debug_fault_code(int nr,
> debug_fault_info[nr].name = name;
> }
>
> +/*
> + * In debug exception context, we explicitly disable preemption.
Maybe add "despite having interrupts disabled"?
> + * This serves two purposes: it makes it much less likely that we would
> + * accidentally schedule in exception context and it will force a warning
> + * if we somehow manage to schedule by accident.
> + */
> +static void debug_exception_enter(struct pt_regs *regs)
> +{
> + if (user_mode(regs)) {
> + RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
> + } else {
> + /*
> + * We might have interrupted pretty much anything. In
> + * fact, if we're a debug exception, we can even interrupt
> + * NMI processing. We don't want this code to make in_nmi()
> + * return true, but we need to notify RCU.
> + */
> + rcu_nmi_enter();
> + }
> +
> + preempt_disable();
If you're adding new functions for entry/exit, maybe move the
trace_hardirqs_{on,off}() calls in here too?
Will
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU
2019-07-31 17:26 ` Will Deacon
@ 2019-08-01 5:32 ` Masami Hiramatsu
2019-08-01 7:37 ` Will Deacon
0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2019-08-01 5:32 UTC (permalink / raw)
To: Will Deacon
Cc: Catalin Marinas, Will Deacon, Dan Rue, Daniel Diaz,
Anders Roxell, Naresh Kamboju, linux-kernel, Matt Hart,
Paul E . McKenney, linux-arm-kernel
Hi Will,
On Wed, 31 Jul 2019 18:26:03 +0100
Will Deacon <will@kernel.org> wrote:
> On Thu, Jul 25, 2019 at 05:16:15PM +0900, Masami Hiramatsu wrote:
> > Make debug exceptions visible from RCU so that synchronize_rcu()
> > correctly tracks the debug exception handler.
> >
> > This also introduces sanity checks for user-mode exceptions, like
> > x86's ist_enter()/ist_exit().
> >
> > The debug exception can interrupt the idle task. For example, it warns
> > if we put a kprobe on a function called from the idle task, as below.
> > The warning message says that rcu_read_lock() caused this
> > problem, but it actually means that RCU has lost track of a context
> > which is already in NMI/IRQ.
> >
> > /sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
> > /sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
> > /sys/kernel/debug/tracing # [ 135.122237]
> > [ 135.125035] =============================
> > [ 135.125310] WARNING: suspicious RCU usage
> > [ 135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
> > [ 135.125904] -----------------------------
> > [ 135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
> > [ 135.126839]
> > [ 135.126839] other info that might help us debug this:
> > [ 135.126839]
> > [ 135.127410]
> > [ 135.127410] RCU used illegally from idle CPU!
> > [ 135.127410] rcu_scheduler_active = 2, debug_locks = 1
> > [ 135.128114] RCU used illegally from extended quiescent state!
> > [ 135.128555] 1 lock held by swapper/0/0:
> > [ 135.128944] #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
> > [ 135.130499]
> > [ 135.130499] stack backtrace:
> > [ 135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
> > [ 135.131841] Hardware name: linux,dummy-virt (DT)
> > [ 135.132224] Call trace:
> > [ 135.132491] dump_backtrace+0x0/0x140
> > [ 135.132806] show_stack+0x24/0x30
> > [ 135.133133] dump_stack+0xc4/0x10c
> > [ 135.133726] lockdep_rcu_suspicious+0xf8/0x108
> > [ 135.134171] call_break_hook+0x170/0x178
> > [ 135.134486] brk_handler+0x28/0x68
> > [ 135.134792] do_debug_exception+0x90/0x150
> > [ 135.135051] el1_dbg+0x18/0x8c
> > [ 135.135260] default_idle_call+0x0/0x44
> > [ 135.135516] cpu_startup_entry+0x2c/0x30
> > [ 135.135815] rest_init+0x1b0/0x280
> > [ 135.136044] arch_call_rest_init+0x14/0x1c
> > [ 135.136305] start_kernel+0x4d4/0x500
> > [ 135.136597]
> >
> > Making the debug exception visible to RCU fixes this warning.
> >
> > Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> > Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
> > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > ---
> > Changes in v3:
> > - Make a comment for debug_exception_enter() clearer.
> > ---
> > arch/arm64/mm/fault.c | 40 ++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 40 insertions(+)
> >
> > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > index 9568c116ac7f..ed6c55c87fdc 100644
> > --- a/arch/arm64/mm/fault.c
> > +++ b/arch/arm64/mm/fault.c
> > @@ -777,6 +777,42 @@ void __init hook_debug_fault_code(int nr,
> > debug_fault_info[nr].name = name;
> > }
> >
> > +/*
> > + * In debug exception context, we explicitly disable preemption.
>
> Maybe add "despite having interrupts disabled"?
OK, I'll add it.
> > + * This serves two purposes: it makes it much less likely that we would
> > + * accidentally schedule in exception context and it will force a warning
> > + * if we somehow manage to schedule by accident.
> > + */
> > +static void debug_exception_enter(struct pt_regs *regs)
> > +{
> > + if (user_mode(regs)) {
> > + RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
> > + } else {
> > + /*
> > + * We might have interrupted pretty much anything. In
> > + * fact, if we're a debug exception, we can even interrupt
> > > + * NMI processing. We don't want this code to make in_nmi()
> > > + * return true, but we need to notify RCU.
> > + */
> > + rcu_nmi_enter();
> > + }
> > +
> > + preempt_disable();
>
> > If you're adding new functions for entry/exit, maybe move the
> trace_hardirqs_{on,off}() calls in here too?
OK, let's move them into these functions.
Thank you!
>
> Will
--
Masami Hiramatsu <mhiramat@kernel.org>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU
2019-08-01 5:32 ` Masami Hiramatsu
@ 2019-08-01 7:37 ` Will Deacon
2019-08-01 14:36 ` [PATCH v4] " Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Will Deacon @ 2019-08-01 7:37 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Catalin Marinas, Will Deacon, Dan Rue, Daniel Diaz,
Anders Roxell, Naresh Kamboju, linux-kernel, Matt Hart,
Paul E . McKenney, linux-arm-kernel
Hi Masami,
On Thu, Aug 01, 2019 at 02:32:25PM +0900, Masami Hiramatsu wrote:
> On Wed, 31 Jul 2019 18:26:03 +0100
> Will Deacon <will@kernel.org> wrote:
> > On Thu, Jul 25, 2019 at 05:16:15PM +0900, Masami Hiramatsu wrote:
> > > diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> > > index 9568c116ac7f..ed6c55c87fdc 100644
> > > --- a/arch/arm64/mm/fault.c
> > > +++ b/arch/arm64/mm/fault.c
> > > @@ -777,6 +777,42 @@ void __init hook_debug_fault_code(int nr,
> > > debug_fault_info[nr].name = name;
> > > }
> > >
> > > +/*
> > > + * In debug exception context, we explicitly disable preemption.
> >
> > Maybe add "despite having interrupts disabled"?
>
> OK, I'll add it.
>
> > > + * This serves two purposes: it makes it much less likely that we would
> > > + * accidentally schedule in exception context and it will force a warning
> > > + * if we somehow manage to schedule by accident.
> > > + */
> > > +static void debug_exception_enter(struct pt_regs *regs)
> > > +{
> > > + if (user_mode(regs)) {
> > > + RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
> > > + } else {
> > > + /*
> > > + * We might have interrupted pretty much anything. In
> > > + * fact, if we're a debug exception, we can even interrupt
> > > > + * NMI processing. We don't want this code to make in_nmi()
> > > > + * return true, but we need to notify RCU.
> > > + */
> > > + rcu_nmi_enter();
> > > + }
> > > +
> > > + preempt_disable();
> >
> > If you're adding new functions for entry/exit, maybe move the
> > trace_hardirqs_{on,off}() calls in here too?
>
> OK, let's move them into these functions.
Brill, thanks. Please just resend this patch, as I can pick the other three
up as they are.
Will
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler
2019-07-25 8:15 ` [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler Masami Hiramatsu
@ 2019-08-01 12:16 ` Will Deacon
2019-08-01 14:08 ` Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Will Deacon @ 2019-08-01 12:16 UTC (permalink / raw)
To: Masami Hiramatsu
Cc: Catalin Marinas, Will Deacon, Dan Rue, Daniel Diaz,
Anders Roxell, Naresh Kamboju, linux-kernel, James Morse,
Matt Hart, linux-arm-kernel
On Thu, Jul 25, 2019 at 05:15:54PM +0900, Masami Hiramatsu wrote:
> kprobes manipulates the interrupted PSTATE for single step, and
> does not restore it. Thus, if we put a kprobe where pstate.D
> (debug) is masked, the mask is cleared after the kprobe hits.
>
> Moreover, in the most complicated case, this can lead to a kernel
> crash with the below message when a nested kprobe hits.
>
> [ 152.118921] Unexpected kernel single-step exception at EL1
>
> When the 1st kprobe hits, do_debug_exception() is called.
> At this point, the debug exception (= pstate.D) must be masked (= 1).
> But if another kprobe hits before the single-step of the first kprobe
> (e.g. inside the user pre_handler), it unmasks the debug exception
> (pstate.D = 0) and returns.
> Then, when the 1st kprobe sets up the single-step, it saves the current
> DAIF, masks DAIF, enables single-step, and restores DAIF.
> However, since the "D" flag in DAIF was cleared by the 2nd kprobe, the
> single-step exception happens soon after restoring DAIF.
>
> This was introduced by commit 7419333fa15e ("arm64: kprobe:
> Always clear pstate.D in breakpoint exception handler").
>
> To solve this issue, save all DAIF bits and restore them
> after single stepping.
>
> Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> Fixes: 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
> Reviewed-by: James Morse <james.morse@arm.com>
> Tested-by: James Morse <james.morse@arm.com>
> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> ---
> Changes in v3:
> - Update patch description
> - move PSR_DAIF_MASK in daifflags.h
> Changes in v2:
> - Save and restore all DAIF flags.
> - Operate pstate directly and remove spsr_set_debug_flag().
> ---
> arch/arm64/include/asm/daifflags.h | 2 ++
> arch/arm64/kernel/probes/kprobes.c | 39 +++++-------------------------------
> 2 files changed, 7 insertions(+), 34 deletions(-)
I'm seeing an allmodconfig build failure with this:
arch/arm64/kernel/probes/kprobes.c: In function ‘kprobes_save_local_irqflag’:
arch/arm64/kernel/probes/kprobes.c:181:38: error: ‘DAIF_MASK’ undeclared (first use in this function); did you mean ‘BIT_MASK’?
kcb->saved_irqflag = regs->pstate & DAIF_MASK;
^~~~~~~~~
BIT_MASK
arch/arm64/kernel/probes/kprobes.c:181:38: note: each undeclared identifier is reported only once for each function it appears in
arch/arm64/kernel/probes/kprobes.c: In function ‘kprobes_restore_local_irqflag’:
arch/arm64/kernel/probes/kprobes.c:190:19: error: ‘DAIF_MASK’ undeclared (first use in this function); did you mean ‘BIT_MASK’?
regs->pstate &= ~DAIF_MASK;
^~~~~~~~~
BIT_MASK
make[2]: *** [scripts/Makefile.build:274: arch/arm64/kernel/probes/kprobes.o] Error 1
Will
* Re: [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler
2019-08-01 12:16 ` Will Deacon
@ 2019-08-01 14:08 ` Masami Hiramatsu
2019-08-01 14:25 ` [PATCH v4] " Masami Hiramatsu
0 siblings, 1 reply; 12+ messages in thread
From: Masami Hiramatsu @ 2019-08-01 14:08 UTC (permalink / raw)
To: Will Deacon
Cc: Catalin Marinas, Will Deacon, Dan Rue, Daniel Diaz,
Anders Roxell, Naresh Kamboju, linux-kernel, James Morse,
Matt Hart, linux-arm-kernel
On Thu, 1 Aug 2019 13:16:22 +0100
Will Deacon <will@kernel.org> wrote:
> On Thu, Jul 25, 2019 at 05:15:54PM +0900, Masami Hiramatsu wrote:
> > kprobes manipulates the interrupted PSTATE for single step, and
> > doesn't restore it. Thus, if we put a kprobe where pstate.D
> > (debug) is masked, the mask will be cleared after the kprobe hits.
> >
> > Moreover, in the most complicated case, this can lead to a kernel
> > crash with the message below when a nested kprobe hits.
> >
> > [ 152.118921] Unexpected kernel single-step exception at EL1
> >
> > When the 1st kprobe hits, do_debug_exception() will be called.
> > At this point, the debug exception (= pstate.D) must be masked (=1).
> > But if another kprobe hits before the single-step of the first kprobe
> > (e.g. inside the user pre_handler), it unmasks the debug exception
> > (pstate.D = 0) and returns.
> > Then, when the 1st kprobe sets up single-step, it saves the current
> > DAIF, masks DAIF, enables single-step, and restores DAIF.
> > However, since the "D" flag in DAIF was cleared by the 2nd kprobe, a
> > single-step exception happens soon after restoring DAIF.
> >
> > This was introduced by commit 7419333fa15e ("arm64: kprobe:
> > Always clear pstate.D in breakpoint exception handler")
> >
> > To solve this issue, store all DAIF bits and restore them
> > after single stepping.
> >
> > Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
> > Fixes: commit 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
> > Reviewed-by: James Morse <james.morse@arm.com>
> > Tested-by: James Morse <james.morse@arm.com>
> > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > ---
> > Changes in v3:
> > - Update patch description
> > - move PSR_DAIF_MASK in daifflags.h
> > Changes in v2:
> > - Save and restore all DAIF flags.
> > - Operate pstate directly and remove spsr_set_debug_flag().
> > ---
> > arch/arm64/include/asm/daifflags.h | 2 ++
> > arch/arm64/kernel/probes/kprobes.c | 39 +++++-------------------------------
> > 2 files changed, 7 insertions(+), 34 deletions(-)
>
> I'm seeing an allmodconfig build failure with this:
>
> arch/arm64/kernel/probes/kprobes.c: In function ‘kprobes_save_local_irqflag’:
> arch/arm64/kernel/probes/kprobes.c:181:38: error: ‘DAIF_MASK’ undeclared (first use in this function); did you mean ‘BIT_MASK’?
> kcb->saved_irqflag = regs->pstate & DAIF_MASK;
> ^~~~~~~~~
> BIT_MASK
> arch/arm64/kernel/probes/kprobes.c:181:38: note: each undeclared identifier is reported only once for each function it appears in
> arch/arm64/kernel/probes/kprobes.c: In function ‘kprobes_restore_local_irqflag’:
> arch/arm64/kernel/probes/kprobes.c:190:19: error: ‘DAIF_MASK’ undeclared (first use in this function); did you mean ‘BIT_MASK’?
> regs->pstate &= ~DAIF_MASK;
> ^~~~~~~~~
> BIT_MASK
> make[2]: *** [scripts/Makefile.build:274: arch/arm64/kernel/probes/kprobes.o] Error 1
Oops, daifflags.h is included via kvm_host.h...
OK, kprobes.c must include daifflags.h directly in this case.
I'll update this patch too.
Thank you,
>
> Will
--
Masami Hiramatsu <mhiramat@kernel.org>
* [PATCH v4] arm64: kprobes: Recover pstate.D in single-step exception handler
2019-08-01 14:08 ` Masami Hiramatsu
@ 2019-08-01 14:25 ` Masami Hiramatsu
0 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2019-08-01 14:25 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon
Cc: mhiramat, linux-arm-kernel, linux-kernel, Naresh Kamboju,
Dan Rue, Matt Hart, Anders Roxell, Daniel Diaz, James Morse
kprobes manipulates the interrupted PSTATE for single step, and
doesn't restore it. Thus, if we put a kprobe where pstate.D
(debug) is masked, the mask will be cleared after the kprobe hits.

Moreover, in the most complicated case, this can lead to a kernel
crash with the message below when a nested kprobe hits.

[ 152.118921] Unexpected kernel single-step exception at EL1

When the 1st kprobe hits, do_debug_exception() will be called.
At this point, the debug exception (= pstate.D) must be masked (=1).
But if another kprobe hits before the single-step of the first kprobe
(e.g. inside the user pre_handler), it unmasks the debug exception
(pstate.D = 0) and returns.
Then, when the 1st kprobe sets up single-step, it saves the current
DAIF, masks DAIF, enables single-step, and restores DAIF.
However, since the "D" flag in DAIF was cleared by the 2nd kprobe, a
single-step exception happens soon after restoring DAIF.

This was introduced by commit 7419333fa15e ("arm64: kprobe:
Always clear pstate.D in breakpoint exception handler")

To solve this issue, store all DAIF bits and restore them
after single stepping.
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Fixes: commit 7419333fa15e ("arm64: kprobe: Always clear pstate.D in breakpoint exception handler")
Reviewed-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
Changes in v4:
- Include daifflags.h directly so that kprobe can use DAIF_MASK always.
Changes in v3:
- Update patch description
- move PSR_DAIF_MASK in daifflags.h
Changes in v2:
- Save and restore all DAIF flags.
- Operate pstate directly and remove spsr_set_debug_flag().
---
arch/arm64/include/asm/daifflags.h | 2 ++
arch/arm64/kernel/probes/kprobes.c | 40 +++++-------------------------------
2 files changed, 8 insertions(+), 34 deletions(-)
diff --git a/arch/arm64/include/asm/daifflags.h b/arch/arm64/include/asm/daifflags.h
index 987926ed535e..063c964af705 100644
--- a/arch/arm64/include/asm/daifflags.h
+++ b/arch/arm64/include/asm/daifflags.h
@@ -13,6 +13,8 @@
#define DAIF_PROCCTX 0
#define DAIF_PROCCTX_NOIRQ PSR_I_BIT
#define DAIF_ERRCTX (PSR_I_BIT | PSR_A_BIT)
+#define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+
/* mask/save/unmask/restore all exceptions, including interrupts. */
static inline void local_daif_mask(void)
diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index bd5dfffca272..c4452827419b 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -21,6 +21,7 @@
#include <asm/ptrace.h>
#include <asm/cacheflush.h>
#include <asm/debug-monitors.h>
+#include <asm/daifflags.h>
#include <asm/system_misc.h>
#include <asm/insn.h>
#include <linux/uaccess.h>
@@ -167,33 +168,6 @@ static void __kprobes set_current_kprobe(struct kprobe *p)
__this_cpu_write(current_kprobe, p);
}
-/*
- * When PSTATE.D is set (masked), then software step exceptions can not be
- * generated.
- * SPSR's D bit shows the value of PSTATE.D immediately before the
- * exception was taken. PSTATE.D is set while entering into any exception
- * mode, however software clears it for any normal (none-debug-exception)
- * mode in the exception entry. Therefore, when we are entering into kprobe
- * breakpoint handler from any normal mode then SPSR.D bit is already
- * cleared, however it is set when we are entering from any debug exception
- * mode.
- * Since we always need to generate single step exception after a kprobe
- * breakpoint exception therefore we need to clear it unconditionally, when
- * we become sure that the current breakpoint exception is for kprobe.
- */
-static void __kprobes
-spsr_set_debug_flag(struct pt_regs *regs, int mask)
-{
- unsigned long spsr = regs->pstate;
-
- if (mask)
- spsr |= PSR_D_BIT;
- else
- spsr &= ~PSR_D_BIT;
-
- regs->pstate = spsr;
-}
-
/*
* Interrupts need to be disabled before single-step mode is set, and not
* reenabled until after single-step mode ends.
@@ -205,17 +179,17 @@ spsr_set_debug_flag(struct pt_regs *regs, int mask)
static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
- kcb->saved_irqflag = regs->pstate;
+ kcb->saved_irqflag = regs->pstate & DAIF_MASK;
regs->pstate |= PSR_I_BIT;
+ /* Unmask PSTATE.D for enabling software step exceptions. */
+ regs->pstate &= ~PSR_D_BIT;
}
static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
struct pt_regs *regs)
{
- if (kcb->saved_irqflag & PSR_I_BIT)
- regs->pstate |= PSR_I_BIT;
- else
- regs->pstate &= ~PSR_I_BIT;
+ regs->pstate &= ~DAIF_MASK;
+ regs->pstate |= kcb->saved_irqflag;
}
static void __kprobes
@@ -252,8 +226,6 @@ static void __kprobes setup_singlestep(struct kprobe *p,
set_ss_context(kcb, slot); /* mark pending ss */
- spsr_set_debug_flag(regs, 0);
-
/* IRQs and single stepping do not mix well. */
kprobes_save_local_irqflag(kcb, regs);
kernel_enable_single_step(regs);
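For readers following along, the save/restore logic this patch introduces can be modeled as a standalone C sketch. The PSR_* bit positions mirror arm64's PSTATE layout, but everything below is a userspace illustration, not the kernel code itself:

```c
#include <assert.h>
#include <stdint.h>

/* Bit positions mirroring arm64 PSTATE.{F,I,A,D} */
#define PSR_F_BIT (1u << 6)
#define PSR_I_BIT (1u << 7)
#define PSR_A_BIT (1u << 8)
#define PSR_D_BIT (1u << 9)
#define DAIF_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)

static uint32_t saved_irqflag;

/* Save all four DAIF flags, mask IRQs, and unmask debug so the
 * single-step exception can actually fire. */
static void kprobes_save_local_irqflag(uint32_t *pstate)
{
	saved_irqflag = *pstate & DAIF_MASK;
	*pstate |= PSR_I_BIT;
	*pstate &= ~PSR_D_BIT;
}

/* Restore all four DAIF flags exactly as they were, including D. */
static void kprobes_restore_local_irqflag(uint32_t *pstate)
{
	*pstate &= ~DAIF_MASK;
	*pstate |= saved_irqflag;
}
```

The point of the fix in a nutshell: because the whole DAIF nibble is saved and restored, a pstate.D that was set before the probe hit survives the single-step round trip, which the old code that only preserved the I bit could not guarantee.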
* [PATCH v4] arm64: Make debug exception handlers visible from RCU
2019-08-01 7:37 ` Will Deacon
@ 2019-08-01 14:36 ` Masami Hiramatsu
0 siblings, 0 replies; 12+ messages in thread
From: Masami Hiramatsu @ 2019-08-01 14:36 UTC (permalink / raw)
To: Catalin Marinas, Will Deacon
Cc: mhiramat, linux-arm-kernel, linux-kernel, Naresh Kamboju,
Dan Rue, Matt Hart, Anders Roxell, Daniel Diaz,
Paul E . McKenney
Make debug exceptions visible to RCU so that synchronize_rcu()
correctly tracks the debug exception handler.

This also introduces sanity checks for user-mode exceptions, the
same as x86's ist_enter()/ist_exit().

A debug exception can interrupt the idle task. For example, a
warning is triggered if we put a kprobe on a function called from
the idle task, as below. The warning message shows that
rcu_read_lock() caused this problem. But actually, this means RCU
has lost track of a context which is already in NMI/IRQ.
/sys/kernel/debug/tracing # echo p default_idle_call >> kprobe_events
/sys/kernel/debug/tracing # echo 1 > events/kprobes/enable
/sys/kernel/debug/tracing # [ 135.122237]
[ 135.125035] =============================
[ 135.125310] WARNING: suspicious RCU usage
[ 135.125581] 5.2.0-08445-g9187c508bdc7 #20 Not tainted
[ 135.125904] -----------------------------
[ 135.126205] include/linux/rcupdate.h:594 rcu_read_lock() used illegally while idle!
[ 135.126839]
[ 135.126839] other info that might help us debug this:
[ 135.126839]
[ 135.127410]
[ 135.127410] RCU used illegally from idle CPU!
[ 135.127410] rcu_scheduler_active = 2, debug_locks = 1
[ 135.128114] RCU used illegally from extended quiescent state!
[ 135.128555] 1 lock held by swapper/0/0:
[ 135.128944] #0: (____ptrval____) (rcu_read_lock){....}, at: call_break_hook+0x0/0x178
[ 135.130499]
[ 135.130499] stack backtrace:
[ 135.131192] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.2.0-08445-g9187c508bdc7 #20
[ 135.131841] Hardware name: linux,dummy-virt (DT)
[ 135.132224] Call trace:
[ 135.132491] dump_backtrace+0x0/0x140
[ 135.132806] show_stack+0x24/0x30
[ 135.133133] dump_stack+0xc4/0x10c
[ 135.133726] lockdep_rcu_suspicious+0xf8/0x108
[ 135.134171] call_break_hook+0x170/0x178
[ 135.134486] brk_handler+0x28/0x68
[ 135.134792] do_debug_exception+0x90/0x150
[ 135.135051] el1_dbg+0x18/0x8c
[ 135.135260] default_idle_call+0x0/0x44
[ 135.135516] cpu_startup_entry+0x2c/0x30
[ 135.135815] rest_init+0x1b0/0x280
[ 135.136044] arch_call_rest_init+0x14/0x1c
[ 135.136305] start_kernel+0x4d4/0x500
[ 135.136597]
So making debug exceptions visible to RCU fixes this warning.
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Paul E. McKenney <paulmck@linux.ibm.com>
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
Changes in v4:
- Update comment of debug_exception_enter().
- Move trace_hardirqs_off/on() into debug_exception_enter/exit().
Changes in v3:
- Make a comment for debug_exception_enter() clearer.
---
arch/arm64/mm/fault.c | 57 ++++++++++++++++++++++++++++++++++++++++++-------
1 file changed, 49 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9568c116ac7f..cfd65b63f36f 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -777,6 +777,53 @@ void __init hook_debug_fault_code(int nr,
debug_fault_info[nr].name = name;
}
+/*
+ * In debug exception context, we explicitly disable preemption despite
+ * having interrupts disabled.
+ * This serves two purposes: it makes it much less likely that we would
+ * accidentally schedule in exception context and it will force a warning
+ * if we somehow manage to schedule by accident.
+ */
+static void debug_exception_enter(struct pt_regs *regs)
+{
+ /*
+ * Tell lockdep we disabled irqs in entry.S. Do nothing if they were
+ * already disabled to preserve the last enabled/disabled addresses.
+ */
+ if (interrupts_enabled(regs))
+ trace_hardirqs_off();
+
+ if (user_mode(regs)) {
+ RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
+ } else {
+ /*
+ * We might have interrupted pretty much anything. In
+ * fact, if we're a debug exception, we can even interrupt
+ * NMI processing. We don't want this code to make in_nmi()
+ * return true, but we need to notify RCU.
+ */
+ rcu_nmi_enter();
+ }
+
+ preempt_disable();
+
+ /* This code is a bit fragile. Test it. */
+ RCU_LOCKDEP_WARN(!rcu_is_watching(), "exception_enter didn't work");
+}
+NOKPROBE_SYMBOL(debug_exception_enter);
+
+static void debug_exception_exit(struct pt_regs *regs)
+{
+ preempt_enable_no_resched();
+
+ if (!user_mode(regs))
+ rcu_nmi_exit();
+
+ if (interrupts_enabled(regs))
+ trace_hardirqs_on();
+}
+NOKPROBE_SYMBOL(debug_exception_exit);
+
#ifdef CONFIG_ARM64_ERRATUM_1463225
DECLARE_PER_CPU(int, __in_cortex_a76_erratum_1463225_wa);
@@ -817,12 +864,7 @@ asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
if (cortex_a76_erratum_1463225_debug_handler(regs))
return;
- /*
- * Tell lockdep we disabled irqs in entry.S. Do nothing if they were
- * already disabled to preserve the last enabled/disabled addresses.
- */
- if (interrupts_enabled(regs))
- trace_hardirqs_off();
+ debug_exception_enter(regs);
if (user_mode(regs) && !is_ttbr0_addr(pc))
arm64_apply_bp_hardening();
@@ -832,7 +874,6 @@ asmlinkage void __exception do_debug_exception(unsigned long addr_if_watchpoint,
inf->sig, inf->code, (void __user *)pc, esr);
}
- if (interrupts_enabled(regs))
- trace_hardirqs_on();
+ debug_exception_exit(regs);
}
NOKPROBE_SYMBOL(do_debug_exception);
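The enter/exit pairing the patch introduces can be sketched as a standalone C model. Counters and a flag stand in for the real preempt and RCU machinery, and every name below is illustrative rather than the kernel API:

```c
#include <assert.h>
#include <stdbool.h>

static int preempt_depth;          /* models preempt_disable() nesting */
static bool rcu_watching = false;  /* false = idle/extended quiescent state */
static bool watching_stack[8];     /* saved state for nested exceptions */
static int watching_sp;

static void rcu_nmi_enter_model(void)
{
	watching_stack[watching_sp++] = rcu_watching;
	rcu_watching = true;       /* RCU now sees this CPU as active */
}

static void rcu_nmi_exit_model(void)
{
	rcu_watching = watching_stack[--watching_sp];
}

static void debug_exception_enter(bool user_mode)
{
	if (!user_mode)
		rcu_nmi_enter_model(); /* kernel mode may interrupt idle or NMI */
	preempt_depth++;           /* forbid scheduling in exception context */
	assert(rcu_watching);      /* rcu_read_lock() is now safe */
}

static void debug_exception_exit(bool user_mode)
{
	preempt_depth--;           /* unwind in the reverse order of enter */
	if (!user_mode)
		rcu_nmi_exit_model();
}
```

A kprobe firing in default_idle_call() corresponds to debug_exception_enter(false) with rcu_watching initially false: the model makes RCU watching for the duration of the handler and restores the quiescent state on exit, which is exactly the property the lockdep splat above was missing.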
end of thread, other threads:[~2019-08-01 14:36 UTC | newest]
Thread overview: 12+ messages
2019-07-25 8:15 [PATCH v3 0/4] arm64: kprobes: Fix some bugs in arm64 kprobes Masami Hiramatsu
2019-07-25 8:15 ` [PATCH v3 1/4] arm64: kprobes: Recover pstate.D in single-step exception handler Masami Hiramatsu
2019-08-01 12:16 ` Will Deacon
2019-08-01 14:08 ` Masami Hiramatsu
2019-08-01 14:25 ` [PATCH v4] " Masami Hiramatsu
2019-07-25 8:16 ` [PATCH v3 2/4] arm64: unwind: Prohibit probing on return_address() Masami Hiramatsu
2019-07-25 8:16 ` [PATCH v3 3/4] arm64: Make debug exception handlers visible from RCU Masami Hiramatsu
2019-07-31 17:26 ` Will Deacon
2019-08-01 5:32 ` Masami Hiramatsu
2019-08-01 7:37 ` Will Deacon
2019-08-01 14:36 ` [PATCH v4] " Masami Hiramatsu
2019-07-25 8:16 ` [PATCH v3 4/4] arm64: Remove unneeded rcu_read_lock from debug handlers Masami Hiramatsu