* [PATCH 0/3] arm64: kprobes: Fix bugs in kprobes for arm64
@ 2022-12-01 14:38 Masami Hiramatsu (Google)
  2022-12-01 14:39 ` [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk() Masami Hiramatsu (Google)
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Masami Hiramatsu (Google) @ 2022-12-01 14:38 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, Mark Brown, Kalesh Singh, Masami Hiramatsu,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

Hi,

I found some bugs in kprobes for arm64. One is a critical issue which
easily crashes the kernel when lockdep is enabled [1/3]. The others are
minor issues covering rare cases: [2/3] lets do_page_fault() fix up page
faults that happen in a kprobes user handler, and [3/3] is more of a
cleanup and returns DBG_HOOK_ERROR if kprobes cannot handle its own BRK
(which should not happen).

Thank you,

---

Masami Hiramatsu (Google) (3):
      arm64: Prohibit probing on arch_stack_walk()
      arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler
      arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK


 arch/arm64/kernel/probes/kprobes.c |   87 +++++++++++++++---------------------
 arch/arm64/kernel/stacktrace.c     |    7 ++-
 2 files changed, 41 insertions(+), 53 deletions(-)

--
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk()
  2022-12-01 14:38 [PATCH 0/3] arm64: kprobes: Fix bugs in kprobes for arm64 Masami Hiramatsu (Google)
@ 2022-12-01 14:39 ` Masami Hiramatsu (Google)
  2022-12-01 14:47   ` Mark Rutland
  2022-12-01 14:39 ` [PATCH 2/3] arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler Masami Hiramatsu (Google)
  2022-12-01 14:39 ` [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK Masami Hiramatsu (Google)
  2 siblings, 1 reply; 11+ messages in thread
From: Masami Hiramatsu (Google) @ 2022-12-01 14:39 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, Mark Brown, Kalesh Singh, Masami Hiramatsu,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Mark arch_stack_walk() as NOKPROBE_SYMBOL and mark the inline functions
called from arch_stack_walk() as nokprobe_inline so that users cannot put
probes on them, because arch_stack_walk() can be called from
return_address(), which is already marked NOKPROBE_SYMBOL.

Without this, if the kernel is built with CONFIG_LOCKDEP=y, simply
probing arch_stack_walk() via <tracefs>/kprobe_events will crash the
kernel on arm64.

 # echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
 # echo 1 > ${TRACEFS}/events/kprobes/enable
  kprobes: Failed to recover from reentered kprobes.
  kprobes: Dump kprobe:
  .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
  ------------[ cut here ]------------
  kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
  kprobes: Failed to recover from reentered kprobes.
  kprobes: Dump kprobe:
  .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
  ------------[ cut here ]------------
  kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
  PREEMPT SMP
  Modules linked in:
  CPU: 0 PID: 17 Comm: migration/0 Tainted: G                 N 6.1.0-rc5+ #6
  Hardware name: linux,dummy-virt (DT)
  Stopper: 0x0 <- 0x0
  pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  pc : kprobe_breakpoint_handler+0x178/0x17c
  lr : kprobe_breakpoint_handler+0x178/0x17c
  sp : ffff8000080d3090
  x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
  x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
  x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
  x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
  x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
  x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
  x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
  x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
  x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
  x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
  Call trace:
  kprobes: Failed to recover from reentered kprobes.
  kprobes: Dump kprobe:
  .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
  ------------[ cut here ]------------
  kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!

Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
Cc: stable@vger.kernel.org
Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 arch/arm64/kernel/stacktrace.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
index 634279b3b03d..b0e913f944b4 100644
--- a/arch/arm64/kernel/stacktrace.c
+++ b/arch/arm64/kernel/stacktrace.c
@@ -23,7 +23,7 @@
  *
  * The regs must be on a stack currently owned by the calling task.
  */
-static inline void unwind_init_from_regs(struct unwind_state *state,
+static nokprobe_inline void unwind_init_from_regs(struct unwind_state *state,
 					 struct pt_regs *regs)
 {
 	unwind_init_common(state, current);
@@ -40,7 +40,7 @@ static inline void unwind_init_from_regs(struct unwind_state *state,
  *
  * The function which invokes this must be noinline.
  */
-static __always_inline void unwind_init_from_caller(struct unwind_state *state)
+static nokprobe_inline void unwind_init_from_caller(struct unwind_state *state)
 {
 	unwind_init_common(state, current);
 
@@ -58,7 +58,7 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
  * duration of the unwind, or the unwind will be bogus. It is never valid to
  * call this for the current task.
  */
-static inline void unwind_init_from_task(struct unwind_state *state,
+static nokprobe_inline void unwind_init_from_task(struct unwind_state *state,
 					 struct task_struct *task)
 {
 	unwind_init_common(state, task);
@@ -218,3 +218,4 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
 
 	unwind(&state, consume_entry, cookie);
 }
+NOKPROBE_SYMBOL(arch_stack_walk);


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/3] arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler
  2022-12-01 14:38 [PATCH 0/3] arm64: kprobes: Fix bugs in kprobes for arm64 Masami Hiramatsu (Google)
  2022-12-01 14:39 ` [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk() Masami Hiramatsu (Google)
@ 2022-12-01 14:39 ` Masami Hiramatsu (Google)
  2022-12-01 14:56   ` Mark Rutland
  2022-12-01 14:39 ` [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK Masami Hiramatsu (Google)
  2 siblings, 1 reply; 11+ messages in thread
From: Masami Hiramatsu (Google) @ 2022-12-01 14:39 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, Mark Brown, Kalesh Singh, Masami Hiramatsu,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Since arm64's do_page_fault() can handle the page fault more correctly
than kprobe_fault_handler(), based on the context in which the fault
occurred, let it handle the page fault instead of simply calling
fixup_exception() from kprobe_fault_handler().
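
For context, a trimmed, illustrative sketch of the flow this change
relies on; it condenses the generic kprobe_page_fault() gate that the
arch fault handler calls, quoted from memory, so treat the details as
assumptions rather than as part of this patch:

/*
 * include/linux/kprobes.h (condensed): called early from the arch
 * fault handler. When this returns false, do_page_fault() continues
 * with its normal handling, which already applies fixup_exception()
 * to kernel-mode faults that have an exception table entry.
 */
static nokprobe_inline bool kprobe_page_fault(struct pt_regs *regs,
                                              unsigned int trap)
{
        if (user_mode(regs) || preemptible() || !kprobe_running())
                return false;
        return kprobe_fault_handler(regs, trap);
}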

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 arch/arm64/kernel/probes/kprobes.c |    8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index c9e4d0720285..d2ae37f89774 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -294,14 +294,6 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
 		}
 
 		break;
-	case KPROBE_HIT_ACTIVE:
-	case KPROBE_HIT_SSDONE:
-		/*
-		 * In case the user-specified fault handler returned
-		 * zero, try to fix up.
-		 */
-		if (fixup_exception(regs))
-			return 1;
 	}
 	return 0;
 }


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK
  2022-12-01 14:38 [PATCH 0/3] arm64: kprobes: Fix bugs in kprobes for arm64 Masami Hiramatsu (Google)
  2022-12-01 14:39 ` [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk() Masami Hiramatsu (Google)
  2022-12-01 14:39 ` [PATCH 2/3] arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler Masami Hiramatsu (Google)
@ 2022-12-01 14:39 ` Masami Hiramatsu (Google)
  2022-12-01 15:08   ` Mark Rutland
  2 siblings, 1 reply; 11+ messages in thread
From: Masami Hiramatsu (Google) @ 2022-12-01 14:39 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon
  Cc: Mark Rutland, Mark Brown, Kalesh Singh, Masami Hiramatsu,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

From: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it fails
to find a kprobe corresponding to the address.

Since arm64 kprobes uses stop_machine-based text patching to remove the
BRK, it is ensured that every running kprobe_breakpoint_handler() has
finished by that point. Only after the BRK has been removed is the
kprobe removed from its hash list. Thus, if kprobe_breakpoint_handler()
fails to find the kprobe in the hash list, there is a bug.
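
For reference, a minimal sketch of the break_hook contract this return
value feeds into; this is an illustration only, not part of the diff
below, and the registration comment is an assumption modelled on the
existing arch_init_kprobes() wiring:

/*
 * DBG_HOOK_HANDLED: the BRK was ours and has been consumed, so resume
 * execution. DBG_HOOK_ERROR: not handled, so the kernel treats the BRK
 * as an unexpected breakpoint and reports it as a bug.
 */
static int __kprobes example_brk_handler(struct pt_regs *regs,
                                         unsigned long esr)
{
        unsigned long addr = instruction_pointer(regs);

        if (!get_kprobe((kprobe_opcode_t *)addr))
                return DBG_HOOK_ERROR;

        /* ... run the pre-handler / set up single-step as in the real code ... */
        return DBG_HOOK_HANDLED;
}

static struct break_hook example_break_hook = {
        .imm = KPROBES_BRK_IMM, /* BRK immediate reserved for kprobes */
        .fn  = example_brk_handler,
        /* registered from arch_init_kprobes() via register_kernel_break_hook() */
};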

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 arch/arm64/kernel/probes/kprobes.c |   79 +++++++++++++++++-------------------
 1 file changed, 37 insertions(+), 42 deletions(-)

diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
index d2ae37f89774..ea56b22d4da8 100644
--- a/arch/arm64/kernel/probes/kprobes.c
+++ b/arch/arm64/kernel/probes/kprobes.c
@@ -298,7 +298,8 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
 	return 0;
 }
 
-static void __kprobes kprobe_handler(struct pt_regs *regs)
+static int __kprobes
+kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
 {
 	struct kprobe *p, *cur_kprobe;
 	struct kprobe_ctlblk *kcb;
@@ -308,39 +309,45 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
 	cur_kprobe = kprobe_running();
 
 	p = get_kprobe((kprobe_opcode_t *) addr);
+	if (WARN_ON_ONCE(!p)) {
+		/*
+		 * Something went wrong. This must be put by kprobe, but we
+		 * could not find corresponding kprobes. Let the kernel handle
+		 * this error case.
+		 */
+		return DBG_HOOK_ERROR;
+	}
 
-	if (p) {
-		if (cur_kprobe) {
-			if (reenter_kprobe(p, regs, kcb))
-				return;
-		} else {
-			/* Probe hit */
-			set_current_kprobe(p);
-			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
-
-			/*
-			 * If we have no pre-handler or it returned 0, we
-			 * continue with normal processing.  If we have a
-			 * pre-handler and it returned non-zero, it will
-			 * modify the execution path and no need to single
-			 * stepping. Let's just reset current kprobe and exit.
-			 */
-			if (!p->pre_handler || !p->pre_handler(p, regs)) {
-				setup_singlestep(p, regs, kcb, 0);
-			} else
-				reset_current_kprobe();
-		}
+	if (cur_kprobe) {
+		/* Hit a kprobe inside another kprobe */
+		if (!reenter_kprobe(p, regs, kcb))
+			return DBG_HOOK_ERROR;
+	} else {
+		/* Probe hit */
+		set_current_kprobe(p);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+
+		/*
+		 * If we have no pre-handler or it returned 0, we
+		 * continue with normal processing.  If we have a
+		 * pre-handler and it returned non-zero, it will
+		 * modify the execution path and no need to single
+		 * stepping. Let's just reset current kprobe and exit.
+		 */
+		if (!p->pre_handler || !p->pre_handler(p, regs))
+			setup_singlestep(p, regs, kcb, 0);
+		else
+			reset_current_kprobe();
 	}
-	/*
-	 * The breakpoint instruction was removed right
-	 * after we hit it.  Another cpu has removed
-	 * either a probepoint or a debugger breakpoint
-	 * at this address.  In either case, no further
-	 * handling of this interrupt is appropriate.
-	 * Return back to original instruction, and continue.
-	 */
+
+	return DBG_HOOK_HANDLED;
 }
 
+static struct break_hook kprobes_break_hook = {
+	.imm = KPROBES_BRK_IMM,
+	.fn = kprobe_breakpoint_handler,
+};
+
 static int __kprobes
 kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
 {
@@ -365,18 +372,6 @@ static struct break_hook kprobes_break_ss_hook = {
 	.fn = kprobe_breakpoint_ss_handler,
 };
 
-static int __kprobes
-kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
-{
-	kprobe_handler(regs);
-	return DBG_HOOK_HANDLED;
-}
-
-static struct break_hook kprobes_break_hook = {
-	.imm = KPROBES_BRK_IMM,
-	.fn = kprobe_breakpoint_handler,
-};
-
 /*
  * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
  * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk()
  2022-12-01 14:39 ` [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk() Masami Hiramatsu (Google)
@ 2022-12-01 14:47   ` Mark Rutland
  2022-12-01 15:54     ` Masami Hiramatsu
  0 siblings, 1 reply; 11+ messages in thread
From: Mark Rutland @ 2022-12-01 14:47 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

Hi Masami,

On Thu, Dec 01, 2022 at 11:39:02PM +0900, Masami Hiramatsu (Google) wrote:
> From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> 
> Mark arch_stack_walk() as NOKPROBE_SYMBOL and mark the inline functions
> called from arch_stack_walk() as nokprobe_inline so that users cannot put
> probes on them, because arch_stack_walk() can be called from
> return_address(), which is already marked NOKPROBE_SYMBOL.

I think it would make sense to make this noinstr rather than NOKPROBES, since
it's also used in the bowels of ftrace, and any instrumentation is liable to
lead to some mutual recursion.
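
(For the record, a minimal sketch of that alternative, assuming noinstr
simply replaces the current notrace annotation on the entry point while
the unwind_init_*() helpers keep __always_inline; the body is elided
and the final patch may well differ:)

noinline noinstr void arch_stack_walk(stack_trace_consume_fn consume_entry,
                                      void *cookie, struct task_struct *task,
                                      struct pt_regs *regs)
{
        struct unwind_state state;

        /* ... unchanged: pick the right unwind_init_from_*() variant ... */

        unwind(&state, consume_entry, cookie);
}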

> Without this, if the kernel is built with CONFIG_LOCKDEP=y, simply
> probing arch_stack_walk() via <tracefs>/kprobe_events will crash the
> kernel on arm64.
> 
>  # echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
>  # echo 1 > ${TRACEFS}/events/kprobes/enable
>   kprobes: Failed to recover from reentered kprobes.
>   kprobes: Dump kprobe:
>   .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
>   ------------[ cut here ]------------
>   kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
>   kprobes: Failed to recover from reentered kprobes.
>   kprobes: Dump kprobe:
>   .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
>   ------------[ cut here ]------------
>   kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
>   PREEMPT SMP
>   Modules linked in:
>   CPU: 0 PID: 17 Comm: migration/0 Tainted: G                 N 6.1.0-rc5+ #6
>   Hardware name: linux,dummy-virt (DT)
>   Stopper: 0x0 <- 0x0
>   pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>   pc : kprobe_breakpoint_handler+0x178/0x17c
>   lr : kprobe_breakpoint_handler+0x178/0x17c
>   sp : ffff8000080d3090
>   x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
>   x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
>   x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
>   x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
>   x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
>   x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
>   x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
>   x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
>   x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
>   x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
>   Call trace:
>   kprobes: Failed to recover from reentered kprobes.
>   kprobes: Dump kprobe:
>   .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
>   ------------[ cut here ]------------
>   kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
> 
> Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
> Cc: stable@vger.kernel.org
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> ---
>  arch/arm64/kernel/stacktrace.c |    7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> index 634279b3b03d..b0e913f944b4 100644
> --- a/arch/arm64/kernel/stacktrace.c
> +++ b/arch/arm64/kernel/stacktrace.c
> @@ -23,7 +23,7 @@
>   *
>   * The regs must be on a stack currently owned by the calling task.
>   */
> -static inline void unwind_init_from_regs(struct unwind_state *state,
> +static nokprobe_inline void unwind_init_from_regs(struct unwind_state *state,
>  					 struct pt_regs *regs)
>  {
>  	unwind_init_common(state, current);
> @@ -40,7 +40,7 @@ static inline void unwind_init_from_regs(struct unwind_state *state,
>   *
>   * The function which invokes this must be noinline.
>   */
> -static __always_inline void unwind_init_from_caller(struct unwind_state *state)
> +static nokprobe_inline void unwind_init_from_caller(struct unwind_state *state)

This code must be __always_inline to get the correct caller context, so making
this nokprobe_inline breaks a CONFIG_KPROBES=n build of the kernel.

This also shouldn't be necessary; nokprobe_inline *is* __always_inline when
CONFIG_KPROBES=y.
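
(For reference, the relevant definition from the asm-generic kprobes
header, quoted from memory, so treat the exact form as approximate:)

#ifdef CONFIG_KPROBES
# define nokprobe_inline	__always_inline
#else
# define nokprobe_inline	inline
#endif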

As above, I'd prefer that we mark these helpers as __always_inline and mark the
main entry point as noinstr.

Thanks,
Mark.

>  {
>  	unwind_init_common(state, current);
>  
> @@ -58,7 +58,7 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
>   * duration of the unwind, or the unwind will be bogus. It is never valid to
>   * call this for the current task.
>   */
> -static inline void unwind_init_from_task(struct unwind_state *state,
> +static nokprobe_inline void unwind_init_from_task(struct unwind_state *state,
>  					 struct task_struct *task)
>  {
>  	unwind_init_common(state, task);
> @@ -218,3 +218,4 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
>  
>  	unwind(&state, consume_entry, cookie);
>  }
> +NOKPROBE_SYMBOL(arch_stack_walk);
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/3] arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler
  2022-12-01 14:39 ` [PATCH 2/3] arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler Masami Hiramatsu (Google)
@ 2022-12-01 14:56   ` Mark Rutland
  0 siblings, 0 replies; 11+ messages in thread
From: Mark Rutland @ 2022-12-01 14:56 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

On Thu, Dec 01, 2022 at 11:39:11PM +0900, Masami Hiramatsu (Google) wrote:
> From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> 
> Since arm64's do_page_fault() can handle the page fault more correctly
> than kprobe_fault_handler(), based on the context in which the fault
> occurred, let it handle the page fault instead of simply calling
> fixup_exception() from kprobe_fault_handler().
> 
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/kernel/probes/kprobes.c |    8 --------
>  1 file changed, 8 deletions(-)
> 
> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
> index c9e4d0720285..d2ae37f89774 100644
> --- a/arch/arm64/kernel/probes/kprobes.c
> +++ b/arch/arm64/kernel/probes/kprobes.c
> @@ -294,14 +294,6 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
>  		}
>  
>  		break;
> -	case KPROBE_HIT_ACTIVE:
> -	case KPROBE_HIT_SSDONE:
> -		/*
> -		 * In case the user-specified fault handler returned
> -		 * zero, try to fix up.
> -		 */
> -		if (fixup_exception(regs))
> -			return 1;
>  	}
>  	return 0;
>  }
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK
  2022-12-01 14:39 ` [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK Masami Hiramatsu (Google)
@ 2022-12-01 15:08   ` Mark Rutland
  2022-12-01 16:07     ` Masami Hiramatsu
  0 siblings, 1 reply; 11+ messages in thread
From: Mark Rutland @ 2022-12-01 15:08 UTC (permalink / raw)
  To: Masami Hiramatsu (Google)
  Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

On Thu, Dec 01, 2022 at 11:39:21PM +0900, Masami Hiramatsu (Google) wrote:
> From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> 
> Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it fails
> to find a kprobe corresponding to the address.
>
> Since arm64 kprobes uses stop_machine-based text patching to remove the
> BRK, it is ensured that every running kprobe_breakpoint_handler() has
> finished by that point. Only after the BRK has been removed is the
> kprobe removed from its hash list. Thus, if kprobe_breakpoint_handler()
> fails to find the kprobe in the hash list, there is a bug.

IIUC this relies on BRK handling not being preemptible, which is something
we've repeatedly considered changing along with a bunch of other debug
exception handling.

In case we do try to change that in future, it would be good to have a comment
somewhere to that effect.

I think there are other ways we could synchronise against that (e.g. using RCU
tasks rude) if we ever do that, and this patch looks good to me.

> 
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> ---
>  arch/arm64/kernel/probes/kprobes.c |   79 +++++++++++++++++-------------------
>  1 file changed, 37 insertions(+), 42 deletions(-)
> 
> diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
> index d2ae37f89774..ea56b22d4da8 100644
> --- a/arch/arm64/kernel/probes/kprobes.c
> +++ b/arch/arm64/kernel/probes/kprobes.c
> @@ -298,7 +298,8 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
>  	return 0;
>  }
>  
> -static void __kprobes kprobe_handler(struct pt_regs *regs)
> +static int __kprobes
> +kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
>  {
>  	struct kprobe *p, *cur_kprobe;
>  	struct kprobe_ctlblk *kcb;
> @@ -308,39 +309,45 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
>  	cur_kprobe = kprobe_running();
>  
>  	p = get_kprobe((kprobe_opcode_t *) addr);
> +	if (WARN_ON_ONCE(!p)) {
> +		/*
> +		 * Something went wrong. This must be put by kprobe, but we
> +		 * could not find corresponding kprobes. Let the kernel handle
> +		 * this error case.
> +		 */

Could we make this:

		/*
		 * Something went wrong. This BRK used an immediate reserved
		 * for kprobes, but we couldn't find any corresponding probe.
		 */

> +		return DBG_HOOK_ERROR;
> +	}
>  
> -	if (p) {
> -		if (cur_kprobe) {
> -			if (reenter_kprobe(p, regs, kcb))
> -				return;
> -		} else {
> -			/* Probe hit */
> -			set_current_kprobe(p);
> -			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> -
> -			/*
> -			 * If we have no pre-handler or it returned 0, we
> -			 * continue with normal processing.  If we have a
> -			 * pre-handler and it returned non-zero, it will
> -			 * modify the execution path and no need to single
> -			 * stepping. Let's just reset current kprobe and exit.
> -			 */
> -			if (!p->pre_handler || !p->pre_handler(p, regs)) {
> -				setup_singlestep(p, regs, kcb, 0);
> -			} else
> -				reset_current_kprobe();
> -		}
> +	if (cur_kprobe) {
> +		/* Hit a kprobe inside another kprobe */
> +		if (!reenter_kprobe(p, regs, kcb))
> +			return DBG_HOOK_ERROR;
> +	} else {
> +		/* Probe hit */
> +		set_current_kprobe(p);
> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +
> +		/*
> +		 * If we have no pre-handler or it returned 0, we
> +		 * continue with normal processing.  If we have a
> +		 * pre-handler and it returned non-zero, it will
> +		 * modify the execution path and no need to single
> +		 * stepping. Let's just reset current kprobe and exit.
> +		 */

Minor wording nit: could we replace:

	no need to single stepping.

With:
	
	not need to single-step.

Thanks,
Mark.

> +		if (!p->pre_handler || !p->pre_handler(p, regs))
> +			setup_singlestep(p, regs, kcb, 0);
> +		else
> +			reset_current_kprobe();
>  	}
> -	/*
> -	 * The breakpoint instruction was removed right
> -	 * after we hit it.  Another cpu has removed
> -	 * either a probepoint or a debugger breakpoint
> -	 * at this address.  In either case, no further
> -	 * handling of this interrupt is appropriate.
> -	 * Return back to original instruction, and continue.
> -	 */
> +
> +	return DBG_HOOK_HANDLED;
>  }
>  
> +static struct break_hook kprobes_break_hook = {
> +	.imm = KPROBES_BRK_IMM,
> +	.fn = kprobe_breakpoint_handler,
> +};
> +
>  static int __kprobes
>  kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
>  {
> @@ -365,18 +372,6 @@ static struct break_hook kprobes_break_ss_hook = {
>  	.fn = kprobe_breakpoint_ss_handler,
>  };
>  
> -static int __kprobes
> -kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
> -{
> -	kprobe_handler(regs);
> -	return DBG_HOOK_HANDLED;
> -}
> -
> -static struct break_hook kprobes_break_hook = {
> -	.imm = KPROBES_BRK_IMM,
> -	.fn = kprobe_breakpoint_handler,
> -};
> -
>  /*
>   * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
>   * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> 

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk()
  2022-12-01 14:47   ` Mark Rutland
@ 2022-12-01 15:54     ` Masami Hiramatsu
  0 siblings, 0 replies; 11+ messages in thread
From: Masami Hiramatsu @ 2022-12-01 15:54 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

On Thu, 1 Dec 2022 14:47:32 +0000
Mark Rutland <mark.rutland@arm.com> wrote:

> Hi Masami,
> 
> On Thu, Dec 01, 2022 at 11:39:02PM +0900, Masami Hiramatsu (Google) wrote:
> > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > 
> > Mark arch_stack_walk() as NOKPROBE_SYMBOL and mark the inline functions
> > called from arch_stack_walk() as nokprobe_inline so that users cannot put
> > probes on them, because arch_stack_walk() can be called from
> > return_address(), which is already marked NOKPROBE_SYMBOL.
> 
> I think it would make sense to make this noinstr rather than NOKPROBES, since
> it's also used in the bowels of ftrace, and any instrumentation is liable to
> lead to some mutual recursion.

Yeah, OK. So use noinstr instead of notrace.

> 
> > Without this, if the kernel is built with CONFIG_LOCKDEP=y, simply
> > probing arch_stack_walk() via <tracefs>/kprobe_events will crash the
> > kernel on arm64.
> > 
> >  # echo p arch_stack_walk >> ${TRACEFS}/kprobe_events
> >  # echo 1 > ${TRACEFS}/events/kprobes/enable
> >   kprobes: Failed to recover from reentered kprobes.
> >   kprobes: Dump kprobe:
> >   .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
> >   ------------[ cut here ]------------
> >   kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
> >   kprobes: Failed to recover from reentered kprobes.
> >   kprobes: Dump kprobe:
> >   .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
> >   ------------[ cut here ]------------
> >   kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
> >   PREEMPT SMP
> >   Modules linked in:
> >   CPU: 0 PID: 17 Comm: migration/0 Tainted: G                 N 6.1.0-rc5+ #6
> >   Hardware name: linux,dummy-virt (DT)
> >   Stopper: 0x0 <- 0x0
> >   pstate: 600003c5 (nZCv DAIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> >   pc : kprobe_breakpoint_handler+0x178/0x17c
> >   lr : kprobe_breakpoint_handler+0x178/0x17c
> >   sp : ffff8000080d3090
> >   x29: ffff8000080d3090 x28: ffff0df5845798c0 x27: ffffc4f59057a774
> >   x26: ffff0df5ffbba770 x25: ffff0df58f420f18 x24: ffff49006f641000
> >   x23: ffffc4f590579768 x22: ffff0df58f420f18 x21: ffff8000080d31c0
> >   x20: ffffc4f590579768 x19: ffffc4f590579770 x18: 0000000000000006
> >   x17: 5f6b636174735f68 x16: 637261203d207264 x15: 64612e202c30203d
> >   x14: 2074657366666f2e x13: 30633178302f3078 x12: 302b6b6c61775f6b
> >   x11: 636174735f686372 x10: ffffc4f590dc5bd8 x9 : ffffc4f58eb31958
> >   x8 : 00000000ffffefff x7 : ffffc4f590dc5bd8 x6 : 80000000fffff000
> >   x5 : 000000000000bff4 x4 : 0000000000000000 x3 : 0000000000000000
> >   x2 : 0000000000000000 x1 : ffff0df5845798c0 x0 : 0000000000000064
> >   Call trace:
> >   kprobes: Failed to recover from reentered kprobes.
> >   kprobes: Dump kprobe:
> >   .symbol_name = arch_stack_walk, .offset = 0, .addr = arch_stack_walk+0x0/0x1c0
> >   ------------[ cut here ]------------
> >   kernel BUG at arch/arm64/kernel/probes/kprobes.c:241!
> > 
> > Fixes: 39ef362d2d45 ("arm64: Make return_address() use arch_stack_walk()")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > ---
> >  arch/arm64/kernel/stacktrace.c |    7 ++++---
> >  1 file changed, 4 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/stacktrace.c b/arch/arm64/kernel/stacktrace.c
> > index 634279b3b03d..b0e913f944b4 100644
> > --- a/arch/arm64/kernel/stacktrace.c
> > +++ b/arch/arm64/kernel/stacktrace.c
> > @@ -23,7 +23,7 @@
> >   *
> >   * The regs must be on a stack currently owned by the calling task.
> >   */
> > -static inline void unwind_init_from_regs(struct unwind_state *state,
> > +static nokprobe_inline void unwind_init_from_regs(struct unwind_state *state,
> >  					 struct pt_regs *regs)
> >  {
> >  	unwind_init_common(state, current);
> > @@ -40,7 +40,7 @@ static inline void unwind_init_from_regs(struct unwind_state *state,
> >   *
> >   * The function which invokes this must be noinline.
> >   */
> > -static __always_inline void unwind_init_from_caller(struct unwind_state *state)
> > +static nokprobe_inline void unwind_init_from_caller(struct unwind_state *state)
> 
> This code must be __always_inline to get the correct caller context, so making
> this nokprobe_inline breaks a CONFIG_KPROBES=n build of the kernel.

Oops, indeed.

> 
> This also shouldn't be necessary; nokprobe_inline *is* __always_inline when
> CONFIG_KPROBES=y.
> 
> As above, I'd prefer that we mark these helpers as __always_inline and mark the
> main entry point as noinstr.

OK.

Thank you,

> 
> Thanks,
> Mark.
> 
> >  {
> >  	unwind_init_common(state, current);
> >  
> > @@ -58,7 +58,7 @@ static __always_inline void unwind_init_from_caller(struct unwind_state *state)
> >   * duration of the unwind, or the unwind will be bogus. It is never valid to
> >   * call this for the current task.
> >   */
> > -static inline void unwind_init_from_task(struct unwind_state *state,
> > +static nokprobe_inline void unwind_init_from_task(struct unwind_state *state,
> >  					 struct task_struct *task)
> >  {
> >  	unwind_init_common(state, task);
> > @@ -218,3 +218,4 @@ noinline notrace void arch_stack_walk(stack_trace_consume_fn consume_entry,
> >  
> >  	unwind(&state, consume_entry, cookie);
> >  }
> > +NOKPROBE_SYMBOL(arch_stack_walk);
> > 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK
  2022-12-01 15:08   ` Mark Rutland
@ 2022-12-01 16:07     ` Masami Hiramatsu
  2022-12-01 17:21       ` Mark Rutland
  0 siblings, 1 reply; 11+ messages in thread
From: Masami Hiramatsu @ 2022-12-01 16:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

On Thu, 1 Dec 2022 15:08:52 +0000
Mark Rutland <mark.rutland@arm.com> wrote:

> On Thu, Dec 01, 2022 at 11:39:21PM +0900, Masami Hiramatsu (Google) wrote:
> > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > 
> > Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it fails
> > to find a kprobe corresponding to the address.
> >
> > Since arm64 kprobes uses stop_machine-based text patching to remove the
> > BRK, it is ensured that every running kprobe_breakpoint_handler() has
> > finished by that point. Only after the BRK has been removed is the
> > kprobe removed from its hash list. Thus, if kprobe_breakpoint_handler()
> > fails to find the kprobe in the hash list, there is a bug.
> 
> IIUC this relies on BRK handling not being preemptible, which is something
> we've repeatedly considered changing along with a bunch of other debug
> exception handling.

Interesting idea... and it would also need many changes in kprobes itself.

> 
> In case we do try to change that in future, it would be good to have a comment
> somewhere to that effect.

Hmm, it would fundamentally change the assumptions that kprobes relies on,
and would require a lot of rethinking. (e.g. the currently running kprobe
is stored in a per-cpu variable; it would have to become per-task, etc.)
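
(For context, the per-CPU state in question, roughly as it appears in
include/linux/kprobes.h; quoted from memory, so treat the exact form as
approximate:)

DECLARE_PER_CPU(struct kprobe *, current_kprobe);	/* defined per arch */

static inline struct kprobe *kprobe_running(void)
{
        return __this_cpu_read(current_kprobe);
}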

> 
> I think there are other ways we could synchronise against that (e.g. using RCU
> tasks rude) if we ever do that, and this patch looks good to me.
> 
> > 
> > Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > ---
> >  arch/arm64/kernel/probes/kprobes.c |   79 +++++++++++++++++-------------------
> >  1 file changed, 37 insertions(+), 42 deletions(-)
> > 
> > diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
> > index d2ae37f89774..ea56b22d4da8 100644
> > --- a/arch/arm64/kernel/probes/kprobes.c
> > +++ b/arch/arm64/kernel/probes/kprobes.c
> > @@ -298,7 +298,8 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
> >  	return 0;
> >  }
> >  
> > -static void __kprobes kprobe_handler(struct pt_regs *regs)
> > +static int __kprobes
> > +kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
> >  {
> >  	struct kprobe *p, *cur_kprobe;
> >  	struct kprobe_ctlblk *kcb;
> > @@ -308,39 +309,45 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
> >  	cur_kprobe = kprobe_running();
> >  
> >  	p = get_kprobe((kprobe_opcode_t *) addr);
> > +	if (WARN_ON_ONCE(!p)) {
> > +		/*
> > +		 * Something went wrong. This must be put by kprobe, but we
> > +		 * could not find corresponding kprobes. Let the kernel handle
> > +		 * this error case.
> > +		 */
> 
> Could we make this:
> 
> 		/*
> 		 * Something went wrong. This BRK used an immediate reserved
> 		 * for kprobes, but we couldn't find any corresponding probe.
> 		 */

OK.

> 
> > +		return DBG_HOOK_ERROR;
> > +	}
> >  
> > -	if (p) {
> > -		if (cur_kprobe) {
> > -			if (reenter_kprobe(p, regs, kcb))
> > -				return;
> > -		} else {
> > -			/* Probe hit */
> > -			set_current_kprobe(p);
> > -			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > -
> > -			/*
> > -			 * If we have no pre-handler or it returned 0, we
> > -			 * continue with normal processing.  If we have a
> > -			 * pre-handler and it returned non-zero, it will
> > -			 * modify the execution path and no need to single
> > -			 * stepping. Let's just reset current kprobe and exit.
> > -			 */
> > -			if (!p->pre_handler || !p->pre_handler(p, regs)) {
> > -				setup_singlestep(p, regs, kcb, 0);
> > -			} else
> > -				reset_current_kprobe();
> > -		}
> > +	if (cur_kprobe) {
> > +		/* Hit a kprobe inside another kprobe */
> > +		if (!reenter_kprobe(p, regs, kcb))
> > +			return DBG_HOOK_ERROR;
> > +	} else {
> > +		/* Probe hit */
> > +		set_current_kprobe(p);
> > +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > +
> > +		/*
> > +		 * If we have no pre-handler or it returned 0, we
> > +		 * continue with normal processing.  If we have a
> > +		 * pre-handler and it returned non-zero, it will
> > +		 * modify the execution path and no need to single
> > +		 * stepping. Let's just reset current kprobe and exit.
> > +		 */
> 
> Minor wording nit: could we replace:
> 
> 	no need to single stepping.
> 
> With:
> 	
> 	not need to single-step.

OK, I'll update both in v2.

Thank you!

> 
> Thanks,
> Mark.
> 
> > +		if (!p->pre_handler || !p->pre_handler(p, regs))
> > +			setup_singlestep(p, regs, kcb, 0);
> > +		else
> > +			reset_current_kprobe();
> >  	}
> > -	/*
> > -	 * The breakpoint instruction was removed right
> > -	 * after we hit it.  Another cpu has removed
> > -	 * either a probepoint or a debugger breakpoint
> > -	 * at this address.  In either case, no further
> > -	 * handling of this interrupt is appropriate.
> > -	 * Return back to original instruction, and continue.
> > -	 */
> > +
> > +	return DBG_HOOK_HANDLED;
> >  }
> >  
> > +static struct break_hook kprobes_break_hook = {
> > +	.imm = KPROBES_BRK_IMM,
> > +	.fn = kprobe_breakpoint_handler,
> > +};
> > +
> >  static int __kprobes
> >  kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
> >  {
> > @@ -365,18 +372,6 @@ static struct break_hook kprobes_break_ss_hook = {
> >  	.fn = kprobe_breakpoint_ss_handler,
> >  };
> >  
> > -static int __kprobes
> > -kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
> > -{
> > -	kprobe_handler(regs);
> > -	return DBG_HOOK_HANDLED;
> > -}
> > -
> > -static struct break_hook kprobes_break_hook = {
> > -	.imm = KPROBES_BRK_IMM,
> > -	.fn = kprobe_breakpoint_handler,
> > -};
> > -
> >  /*
> >   * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> >   * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> > 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK
  2022-12-01 16:07     ` Masami Hiramatsu
@ 2022-12-01 17:21       ` Mark Rutland
  2022-12-02  0:42         ` Masami Hiramatsu
  0 siblings, 1 reply; 11+ messages in thread
From: Mark Rutland @ 2022-12-01 17:21 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

On Fri, Dec 02, 2022 at 01:07:13AM +0900, Masami Hiramatsu wrote:
> On Thu, 1 Dec 2022 15:08:52 +0000
> Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > On Thu, Dec 01, 2022 at 11:39:21PM +0900, Masami Hiramatsu (Google) wrote:
> > > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > 
> > > Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it fails
> > > to find a kprobe corresponding to the address.
> > >
> > > Since arm64 kprobes uses stop_machine-based text patching to remove the
> > > BRK, it is ensured that every running kprobe_breakpoint_handler() has
> > > finished by that point. Only after the BRK has been removed is the
> > > kprobe removed from its hash list. Thus, if kprobe_breakpoint_handler()
> > > fails to find the kprobe in the hash list, there is a bug.
> > 
> > IIUC this relies on BRK handling not being preemptible, which is something
> > we've repeatedly considered changing along with a bunch of other debug
> > exception handling.
> 
> Interesting idea... and it would also need many changes in kprobes itself.
> 
> > 
> > In case we do try to change that in future, it would be good to have a comment
> > somewhere to that effect.
> 
> Hmm, it would fundamentally change the assumptions that kprobes relies on,
> and would require a lot of rethinking. (e.g. the currently running kprobe
> is stored in a per-cpu variable; it would have to become per-task, etc.)

Ah; I had not considered that.

Feel free to ignore the above; with the comments as below:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Thanks,
Mark.

> 
> > 
> > I think there are other ways we could synchronise against that (e.g. using RCU
> > tasks rude) if we ever do that, and this patch looks good to me.
> > 
> > > 
> > > Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > ---
> > >  arch/arm64/kernel/probes/kprobes.c |   79 +++++++++++++++++-------------------
> > >  1 file changed, 37 insertions(+), 42 deletions(-)
> > > 
> > > diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
> > > index d2ae37f89774..ea56b22d4da8 100644
> > > --- a/arch/arm64/kernel/probes/kprobes.c
> > > +++ b/arch/arm64/kernel/probes/kprobes.c
> > > @@ -298,7 +298,8 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
> > >  	return 0;
> > >  }
> > >  
> > > -static void __kprobes kprobe_handler(struct pt_regs *regs)
> > > +static int __kprobes
> > > +kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
> > >  {
> > >  	struct kprobe *p, *cur_kprobe;
> > >  	struct kprobe_ctlblk *kcb;
> > > @@ -308,39 +309,45 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
> > >  	cur_kprobe = kprobe_running();
> > >  
> > >  	p = get_kprobe((kprobe_opcode_t *) addr);
> > > +	if (WARN_ON_ONCE(!p)) {
> > > +		/*
> > > +		 * Something went wrong. This must be put by kprobe, but we
> > > +		 * could not find corresponding kprobes. Let the kernel handle
> > > +		 * this error case.
> > > +		 */
> > 
> > Could we make this:
> > 
> > 		/*
> > 		 * Something went wrong. This BRK used an immediate reserved
> > 		 * for kprobes, but we couldn't find any corresponding probe.
> > 		 */
> 
> OK.
> 
> > 
> > > +		return DBG_HOOK_ERROR;
> > > +	}
> > >  
> > > -	if (p) {
> > > -		if (cur_kprobe) {
> > > -			if (reenter_kprobe(p, regs, kcb))
> > > -				return;
> > > -		} else {
> > > -			/* Probe hit */
> > > -			set_current_kprobe(p);
> > > -			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > > -
> > > -			/*
> > > -			 * If we have no pre-handler or it returned 0, we
> > > -			 * continue with normal processing.  If we have a
> > > -			 * pre-handler and it returned non-zero, it will
> > > -			 * modify the execution path and no need to single
> > > -			 * stepping. Let's just reset current kprobe and exit.
> > > -			 */
> > > -			if (!p->pre_handler || !p->pre_handler(p, regs)) {
> > > -				setup_singlestep(p, regs, kcb, 0);
> > > -			} else
> > > -				reset_current_kprobe();
> > > -		}
> > > +	if (cur_kprobe) {
> > > +		/* Hit a kprobe inside another kprobe */
> > > +		if (!reenter_kprobe(p, regs, kcb))
> > > +			return DBG_HOOK_ERROR;
> > > +	} else {
> > > +		/* Probe hit */
> > > +		set_current_kprobe(p);
> > > +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > > +
> > > +		/*
> > > +		 * If we have no pre-handler or it returned 0, we
> > > +		 * continue with normal processing.  If we have a
> > > +		 * pre-handler and it returned non-zero, it will
> > > +		 * modify the execution path and no need to single
> > > +		 * stepping. Let's just reset current kprobe and exit.
> > > +		 */
> > 
> > Minor wording nit: could we replace:
> > 
> > 	no need to single stepping.
> > 
> > With:
> > 	
> > 	not need to single-step.
> 
> OK, I'll update both in v2.
> 
> Thank you!
> 
> > 
> > Thanks,
> > Mark.
> > 
> > > +		if (!p->pre_handler || !p->pre_handler(p, regs))
> > > +			setup_singlestep(p, regs, kcb, 0);
> > > +		else
> > > +			reset_current_kprobe();
> > >  	}
> > > -	/*
> > > -	 * The breakpoint instruction was removed right
> > > -	 * after we hit it.  Another cpu has removed
> > > -	 * either a probepoint or a debugger breakpoint
> > > -	 * at this address.  In either case, no further
> > > -	 * handling of this interrupt is appropriate.
> > > -	 * Return back to original instruction, and continue.
> > > -	 */
> > > +
> > > +	return DBG_HOOK_HANDLED;
> > >  }
> > >  
> > > +static struct break_hook kprobes_break_hook = {
> > > +	.imm = KPROBES_BRK_IMM,
> > > +	.fn = kprobe_breakpoint_handler,
> > > +};
> > > +
> > >  static int __kprobes
> > >  kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
> > >  {
> > > @@ -365,18 +372,6 @@ static struct break_hook kprobes_break_ss_hook = {
> > >  	.fn = kprobe_breakpoint_ss_handler,
> > >  };
> > >  
> > > -static int __kprobes
> > > -kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
> > > -{
> > > -	kprobe_handler(regs);
> > > -	return DBG_HOOK_HANDLED;
> > > -}
> > > -
> > > -static struct break_hook kprobes_break_hook = {
> > > -	.imm = KPROBES_BRK_IMM,
> > > -	.fn = kprobe_breakpoint_handler,
> > > -};
> > > -
> > >  /*
> > >   * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> > >   * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> > > 
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK
  2022-12-01 17:21       ` Mark Rutland
@ 2022-12-02  0:42         ` Masami Hiramatsu
  0 siblings, 0 replies; 11+ messages in thread
From: Masami Hiramatsu @ 2022-12-02  0:42 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Catalin Marinas, Will Deacon, Mark Brown, Kalesh Singh,
	Marc Zyngier, linux-arm-kernel, linux-kernel, Sandeepa Prabhu

On Thu, 1 Dec 2022 17:21:55 +0000
Mark Rutland <mark.rutland@arm.com> wrote:

> On Fri, Dec 02, 2022 at 01:07:13AM +0900, Masami Hiramatsu wrote:
> > On Thu, 1 Dec 2022 15:08:52 +0000
> > Mark Rutland <mark.rutland@arm.com> wrote:
> > 
> > > On Thu, Dec 01, 2022 at 11:39:21PM +0900, Masami Hiramatsu (Google) wrote:
> > > > From: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > > 
> > > > Return DBG_HOOK_ERROR if kprobes cannot handle a BRK because it fails
> > > > to find a kprobe corresponding to the address.
> > > >
> > > > Since arm64 kprobes uses stop_machine-based text patching to remove the
> > > > BRK, it is ensured that every running kprobe_breakpoint_handler() has
> > > > finished by that point. Only after the BRK has been removed is the
> > > > kprobe removed from its hash list. Thus, if kprobe_breakpoint_handler()
> > > > fails to find the kprobe in the hash list, there is a bug.
> > > 
> > > IIUC this relies on BRK handling not being preemptible, which is something
> > > we've repeatedly considered changing along with a bunch of other debug
> > > exception handling.
> > 
> > Interesting idea... and it would also need many changes in kprobes itself.
> > 
> > > 
> > > In case we do try to change that in future, it would be good to have a comment
> > > somewhere to that effect.
> > 
> > Hmm, it would fundamentally change the assumptions that kprobes relies on,
> > and would require a lot of rethinking. (e.g. the currently running kprobe
> > is stored in a per-cpu variable; it would have to become per-task, etc.)
> 
> Ah; I had not considered that.
> 
> Feel free to ignore the above; with the comments as below:
> 
> Acked-by: Mark Rutland <mark.rutland@arm.com>

OK, Thanks!

> 
> Thanks,
> Mark.
> 
> > 
> > > 
> > > I think there are other ways we could synchronise against that (e.g. using RCU
> > > tasks rude) if we ever do that, and this patch looks good to me.
> > > 
> > > > 
> > > > Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > > ---
> > > >  arch/arm64/kernel/probes/kprobes.c |   79 +++++++++++++++++-------------------
> > > >  1 file changed, 37 insertions(+), 42 deletions(-)
> > > > 
> > > > diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c
> > > > index d2ae37f89774..ea56b22d4da8 100644
> > > > --- a/arch/arm64/kernel/probes/kprobes.c
> > > > +++ b/arch/arm64/kernel/probes/kprobes.c
> > > > @@ -298,7 +298,8 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int fsr)
> > > >  	return 0;
> > > >  }
> > > >  
> > > > -static void __kprobes kprobe_handler(struct pt_regs *regs)
> > > > +static int __kprobes
> > > > +kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
> > > >  {
> > > >  	struct kprobe *p, *cur_kprobe;
> > > >  	struct kprobe_ctlblk *kcb;
> > > > @@ -308,39 +309,45 @@ static void __kprobes kprobe_handler(struct pt_regs *regs)
> > > >  	cur_kprobe = kprobe_running();
> > > >  
> > > >  	p = get_kprobe((kprobe_opcode_t *) addr);
> > > > +	if (WARN_ON_ONCE(!p)) {
> > > > +		/*
> > > > +		 * Something went wrong. This must be put by kprobe, but we
> > > > +		 * could not find corresponding kprobes. Let the kernel handle
> > > > +		 * this error case.
> > > > +		 */
> > > 
> > > Could we make this:
> > > 
> > > 		/*
> > > 		 * Something went wrong. This BRK used an immediate reserved
> > > 		 * for kprobes, but we couldn't find any corresponding probe.
> > > 		 */
> > 
> > OK.
> > 
> > > 
> > > > +		return DBG_HOOK_ERROR;
> > > > +	}
> > > >  
> > > > -	if (p) {
> > > > -		if (cur_kprobe) {
> > > > -			if (reenter_kprobe(p, regs, kcb))
> > > > -				return;
> > > > -		} else {
> > > > -			/* Probe hit */
> > > > -			set_current_kprobe(p);
> > > > -			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > > > -
> > > > -			/*
> > > > -			 * If we have no pre-handler or it returned 0, we
> > > > -			 * continue with normal processing.  If we have a
> > > > -			 * pre-handler and it returned non-zero, it will
> > > > -			 * modify the execution path and no need to single
> > > > -			 * stepping. Let's just reset current kprobe and exit.
> > > > -			 */
> > > > -			if (!p->pre_handler || !p->pre_handler(p, regs)) {
> > > > -				setup_singlestep(p, regs, kcb, 0);
> > > > -			} else
> > > > -				reset_current_kprobe();
> > > > -		}
> > > > +	if (cur_kprobe) {
> > > > +		/* Hit a kprobe inside another kprobe */
> > > > +		if (!reenter_kprobe(p, regs, kcb))
> > > > +			return DBG_HOOK_ERROR;
> > > > +	} else {
> > > > +		/* Probe hit */
> > > > +		set_current_kprobe(p);
> > > > +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > > > +
> > > > +		/*
> > > > +		 * If we have no pre-handler or it returned 0, we
> > > > +		 * continue with normal processing.  If we have a
> > > > +		 * pre-handler and it returned non-zero, it will
> > > > +		 * modify the execution path and no need to single
> > > > +		 * stepping. Let's just reset current kprobe and exit.
> > > > +		 */
> > > 
> > > Minor wording nit: could we replace:
> > > 
> > > 	no need to single stepping.
> > > 
> > > With:
> > > 	
> > > 	not need to single-step.
> > 
> > OK, I'll update both in v2.
> > 
> > Thank you!
> > 
> > > 
> > > Thanks,
> > > Mark.
> > > 
> > > > +		if (!p->pre_handler || !p->pre_handler(p, regs))
> > > > +			setup_singlestep(p, regs, kcb, 0);
> > > > +		else
> > > > +			reset_current_kprobe();
> > > >  	}
> > > > -	/*
> > > > -	 * The breakpoint instruction was removed right
> > > > -	 * after we hit it.  Another cpu has removed
> > > > -	 * either a probepoint or a debugger breakpoint
> > > > -	 * at this address.  In either case, no further
> > > > -	 * handling of this interrupt is appropriate.
> > > > -	 * Return back to original instruction, and continue.
> > > > -	 */
> > > > +
> > > > +	return DBG_HOOK_HANDLED;
> > > >  }
> > > >  
> > > > +static struct break_hook kprobes_break_hook = {
> > > > +	.imm = KPROBES_BRK_IMM,
> > > > +	.fn = kprobe_breakpoint_handler,
> > > > +};
> > > > +
> > > >  static int __kprobes
> > > >  kprobe_breakpoint_ss_handler(struct pt_regs *regs, unsigned long esr)
> > > >  {
> > > > @@ -365,18 +372,6 @@ static struct break_hook kprobes_break_ss_hook = {
> > > >  	.fn = kprobe_breakpoint_ss_handler,
> > > >  };
> > > >  
> > > > -static int __kprobes
> > > > -kprobe_breakpoint_handler(struct pt_regs *regs, unsigned long esr)
> > > > -{
> > > > -	kprobe_handler(regs);
> > > > -	return DBG_HOOK_HANDLED;
> > > > -}
> > > > -
> > > > -static struct break_hook kprobes_break_hook = {
> > > > -	.imm = KPROBES_BRK_IMM,
> > > > -	.fn = kprobe_breakpoint_handler,
> > > > -};
> > > > -
> > > >  /*
> > > >   * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> > > >   * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> > > > 
> > 
> > 
> > -- 
> > Masami Hiramatsu (Google) <mhiramat@kernel.org>


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2022-12-02  0:43 UTC | newest]

Thread overview: 11+ messages
2022-12-01 14:38 [PATCH 0/3] arm64: kprobes: Fix bugs in kprobes for arm64 Masami Hiramatsu (Google)
2022-12-01 14:39 ` [PATCH 1/3] arm64: Prohibit probing on arch_stack_walk() Masami Hiramatsu (Google)
2022-12-01 14:47   ` Mark Rutland
2022-12-01 15:54     ` Masami Hiramatsu
2022-12-01 14:39 ` [PATCH 2/3] arm64: kprobes: Let arch do_page_fault() fix up page fault in user handler Masami Hiramatsu (Google)
2022-12-01 14:56   ` Mark Rutland
2022-12-01 14:39 ` [PATCH 3/3] arm64: kprobes: Return DBG_HOOK_ERROR if kprobes can not handle a BRK Masami Hiramatsu (Google)
2022-12-01 15:08   ` Mark Rutland
2022-12-01 16:07     ` Masami Hiramatsu
2022-12-01 17:21       ` Mark Rutland
2022-12-02  0:42         ` Masami Hiramatsu
