bpf.vger.kernel.org archive mirror
* [PATCH v3 0/4] Make fprobe + rethook immune to recursion
@ 2023-05-17  3:45 Ze Gao
  2023-05-17  3:45 ` [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler Ze Gao
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Ze Gao @ 2023-05-17  3:45 UTC (permalink / raw)
  To: Steven Rostedt, Masami Hiramatsu
  Cc: Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

Hi all,

This is the 3rd version of the patch series that fixes the ftrace rethook recursion problem.

Changes since v2:
- refine commit message of [PATCH v2 2/4] fprobe: make fprobe_kprobe_handler recursion free

v1: https://lore.kernel.org/linux-trace-kernel/cover.1684120990.git.zegao@tencent.com/T/
v2: https://lore.kernel.org/linux-trace-kernel/ZGP4ypidaxQGdp7Y@krava/T/#mb9e74fe74d5800ee424234b26c6def9d5135237b

The original discussion of this bug can be found here:

https://lore.kernel.org/bpf/CAD8CoPCfPmqZH6BJCk3Y1-02BLVVsbQ6OeaNOhcfGWmdF0oX8A@mail.gmail.com/T/#t

Ze Gao (4):
  rethook: use preempt_{disable, enable}_notrace in
    rethook_trampoline_handler
  fprobe: make fprobe_kprobe_handler recursion free
  fprobe: add recursion detection in fprobe_exit_handler
  rethook, fprobe: do not trace rethook related functions

 arch/riscv/kernel/probes/Makefile |  2 +
 arch/s390/kernel/Makefile         |  1 +
 arch/x86/kernel/Makefile          |  1 +
 kernel/trace/fprobe.c             | 73 ++++++++++++++++++++++++-------
 kernel/trace/rethook.c            |  4 +-
 5 files changed, 64 insertions(+), 17 deletions(-)

-- 
2.40.1



* [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler
  2023-05-17  3:45 [PATCH v3 0/4] Make fprobe + rethook immune to recursion Ze Gao
@ 2023-05-17  3:45 ` Ze Gao
  2023-05-17 11:59   ` Masami Hiramatsu
  2023-05-17  3:45 ` [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free Ze Gao
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Ze Gao @ 2023-05-17  3:45 UTC (permalink / raw)
  To: Steven Rostedt, Masami Hiramatsu
  Cc: Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao, stable

This patch replaces preempt_{disable, enable} with their corresponding
notrace versions in rethook_trampoline_handler, so there is no risk of
stack recursion or overflow introduced by preempt_count_{add, sub} in the
fprobe + rethook context.
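
For reference, the difference boils down to roughly the following. This is a
paraphrase of the relevant kernel macros, not their exact definitions, and the
out-of-line, traceable preempt_count_add() only exists on some configurations
(e.g. CONFIG_DEBUG_PREEMPT / preempt tracing):

/*
 * Traceable flavour: ends up in a real function call that ftrace can hook,
 * which is what re-enters the fprobe/rethook path in the first place.
 */
#define preempt_disable() \
do { \
	preempt_count_inc();	/* -> preempt_count_add(1), traceable */ \
	barrier(); \
} while (0)

/*
 * notrace flavour: only the inline preempt-count update, so nothing that
 * ftrace can hook runs while the trampoline handler holds it.
 */
#define preempt_disable_notrace() \
do { \
	__preempt_count_inc();	/* inline, never traced */ \
	barrier(); \
} while (0)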

Fixes: 54ecbe6f1ed5 ("rethook: Add a generic return hook")
Signed-off-by: Ze Gao <zegao@tencent.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-2-zegao@tencent.com
---
 kernel/trace/rethook.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
index 32c3dfdb4d6a..60f6cb2b486b 100644
--- a/kernel/trace/rethook.c
+++ b/kernel/trace/rethook.c
@@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
 	 * These loops must be protected from rethook_free_rcu() because those
 	 * are accessing 'rhn->rethook'.
 	 */
-	preempt_disable();
+	preempt_disable_notrace();
 
 	/*
 	 * Run the handler on the shadow stack. Do not unlink the list here because
@@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
 		first = first->next;
 		rethook_recycle(rhn);
 	}
-	preempt_enable();
+	preempt_enable_notrace();
 
 	return correct_ret_addr;
 }
-- 
2.40.1



* [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17  3:45 [PATCH v3 0/4] Make fprobe + rethook immune to recursion Ze Gao
  2023-05-17  3:45 ` [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler Ze Gao
@ 2023-05-17  3:45 ` Ze Gao
  2023-05-17 10:47   ` Jiri Olsa
                     ` (2 more replies)
  2023-05-17  3:45 ` [PATCH v3 3/4] fprobe: add recursion detection in fprobe_exit_handler Ze Gao
  2023-05-17  3:45 ` [PATCH v3 4/4] rethook, fprobe: do not trace rethook related functions Ze Gao
  3 siblings, 3 replies; 15+ messages in thread
From: Ze Gao @ 2023-05-17  3:45 UTC (permalink / raw)
  To: Steven Rostedt, Masami Hiramatsu
  Cc: Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

The current implementation calls kprobe-related functions before doing the
ftrace recursion check in fprobe_kprobe_handler, which opens the door to a
kernel crash due to stack recursion if preempt_count_{add, sub} is traceable
in kprobe_busy_{begin, end}.

Without this patch, things go like this (quoted from Steven):
"
fprobe_kprobe_handler() {
   kprobe_busy_begin() {
      preempt_disable() {
         preempt_count_add() {  <-- trace
            fprobe_kprobe_handler() {
		[ wash, rinse, repeat, CRASH!!! ]
"

By refactoring the common part out of fprobe_kprobe_handler and
fprobe_handler and calling ftrace recursion detection at the very beginning,
the whole fprobe_kprobe_handler is made free from recursion.
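
In condensed form, the resulting shape of fprobe_kprobe_handler is the
following (a sketch only; the full change, including the kprobe_running()
check, is in the diff below):

static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
				  struct ftrace_ops *ops, struct ftrace_regs *fregs)
{
	struct fprobe *fp = container_of(ops, struct fprobe, ops);
	int bit;

	if (fprobe_disabled(fp))
		return;

	/* recursion guard first, before anything traceable can run */
	bit = ftrace_test_recursion_trylock(ip, parent_ip);
	if (bit < 0) {
		fp->nmissed++;		/* a recursive entry stops right here */
		return;
	}

	kprobe_busy_begin();		/* may hit traceable code, but ...   */
	__fprobe_handler(ip, parent_ip, ops, fregs);
	kprobe_busy_end();		/* ... any recursion is caught above */
	ftrace_test_recursion_unlock(bit);
}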

Signed-off-by: Ze Gao <zegao@tencent.com>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
---
 kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
 1 file changed, 44 insertions(+), 15 deletions(-)

diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
index 9abb3905bc8e..097c740799ba 100644
--- a/kernel/trace/fprobe.c
+++ b/kernel/trace/fprobe.c
@@ -20,30 +20,22 @@ struct fprobe_rethook_node {
 	char data[];
 };
 
-static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
-			   struct ftrace_ops *ops, struct ftrace_regs *fregs)
+static inline void __fprobe_handler(unsigned long ip, unsigned long
+		parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
 {
 	struct fprobe_rethook_node *fpr;
 	struct rethook_node *rh = NULL;
 	struct fprobe *fp;
 	void *entry_data = NULL;
-	int bit, ret;
+	int ret;
 
 	fp = container_of(ops, struct fprobe, ops);
-	if (fprobe_disabled(fp))
-		return;
-
-	bit = ftrace_test_recursion_trylock(ip, parent_ip);
-	if (bit < 0) {
-		fp->nmissed++;
-		return;
-	}
 
 	if (fp->exit_handler) {
 		rh = rethook_try_get(fp->rethook);
 		if (!rh) {
 			fp->nmissed++;
-			goto out;
+			return;
 		}
 		fpr = container_of(rh, struct fprobe_rethook_node, node);
 		fpr->entry_ip = ip;
@@ -61,23 +53,60 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
 		else
 			rethook_hook(rh, ftrace_get_regs(fregs), true);
 	}
-out:
+}
+
+static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
+		struct ftrace_ops *ops, struct ftrace_regs *fregs)
+{
+	struct fprobe *fp;
+	int bit;
+
+	fp = container_of(ops, struct fprobe, ops);
+	if (fprobe_disabled(fp))
+		return;
+
+	/* recursion detection has to go before any traceable function and
+	 * all functions before this point should be marked as notrace
+	 */
+	bit = ftrace_test_recursion_trylock(ip, parent_ip);
+	if (bit < 0) {
+		fp->nmissed++;
+		return;
+	}
+	__fprobe_handler(ip, parent_ip, ops, fregs);
 	ftrace_test_recursion_unlock(bit);
+
 }
 NOKPROBE_SYMBOL(fprobe_handler);
 
 static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
 				  struct ftrace_ops *ops, struct ftrace_regs *fregs)
 {
-	struct fprobe *fp = container_of(ops, struct fprobe, ops);
+	struct fprobe *fp;
+	int bit;
+
+	fp = container_of(ops, struct fprobe, ops);
+	if (fprobe_disabled(fp))
+		return;
+
+	/* recursion detection has to go before any traceable function and
+	 * all functions called before this point should be marked as notrace
+	 */
+	bit = ftrace_test_recursion_trylock(ip, parent_ip);
+	if (bit < 0) {
+		fp->nmissed++;
+		return;
+	}
 
 	if (unlikely(kprobe_running())) {
 		fp->nmissed++;
 		return;
 	}
+
 	kprobe_busy_begin();
-	fprobe_handler(ip, parent_ip, ops, fregs);
+	__fprobe_handler(ip, parent_ip, ops, fregs);
 	kprobe_busy_end();
+	ftrace_test_recursion_unlock(bit);
 }
 
 static void fprobe_exit_handler(struct rethook_node *rh, void *data,
-- 
2.40.1



* [PATCH v3 3/4] fprobe: add recursion detection in fprobe_exit_handler
  2023-05-17  3:45 [PATCH v3 0/4] Make fprobe + rethook immune to recursion Ze Gao
  2023-05-17  3:45 ` [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler Ze Gao
  2023-05-17  3:45 ` [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free Ze Gao
@ 2023-05-17  3:45 ` Ze Gao
  2023-05-17  3:45 ` [PATCH v3 4/4] rethook, fprobe: do not trace rethook related functions Ze Gao
  3 siblings, 0 replies; 15+ messages in thread
From: Ze Gao @ 2023-05-17  3:45 UTC (permalink / raw)
  To: Steven Rostedt, Masami Hiramatsu
  Cc: Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao, stable

fprobe_handler and fprobe_kprobe_handler have guarded ftrace recursion
detection, but fprobe_exit_handler has not, which can introduce recursive
calls if the fprobe exit callback calls any traceable functions. Checking
in fprobe_handler or fprobe_kprobe_handler alone is not enough and misses
this case.

So add the same recursion guard as in fprobe_handler. Since the ftrace
recursion check does not actually employ the ip(s), use entry_ip and
entry_parent_ip here, the same as fprobe_handler does.
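
Put together, the exit handler after this patch ends up looking roughly like
below (reconstructed from the diff that follows; the comment about the ip
arguments reflects my understanding that the recursion check keys on the
current context and records the ips only for reporting):

static void fprobe_exit_handler(struct rethook_node *rh, void *data,
				struct pt_regs *regs)
{
	struct fprobe *fp = (struct fprobe *)data;
	struct fprobe_rethook_node *fpr;
	int bit;

	if (!fp || fprobe_disabled(fp))
		return;

	fpr = container_of(rh, struct fprobe_rethook_node, node);

	/* replay the ips saved at entry time into the same recursion guard */
	bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip);
	if (bit < 0) {
		fp->nmissed++;
		return;
	}

	fp->exit_handler(fp, fpr->entry_ip, regs,
			 fp->entry_data_size ? (void *)fpr->data : NULL);
	ftrace_test_recursion_unlock(bit);
}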

Fixes: 5b0ab78998e3 ("fprobe: Add exit_handler support")
Signed-off-by: Ze Gao <zegao@tencent.com>
Cc: stable@vger.kernel.org
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-4-zegao@tencent.com
---
 kernel/trace/fprobe.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
index 097c740799ba..281b58c7dd14 100644
--- a/kernel/trace/fprobe.c
+++ b/kernel/trace/fprobe.c
@@ -17,6 +17,7 @@
 struct fprobe_rethook_node {
 	struct rethook_node node;
 	unsigned long entry_ip;
+	unsigned long entry_parent_ip;
 	char data[];
 };
 
@@ -39,6 +40,7 @@ static inline void __fprobe_handler(unsigned long ip, unsigned long
 		}
 		fpr = container_of(rh, struct fprobe_rethook_node, node);
 		fpr->entry_ip = ip;
+		fpr->entry_parent_ip = parent_ip;
 		if (fp->entry_data_size)
 			entry_data = fpr->data;
 	}
@@ -114,14 +116,26 @@ static void fprobe_exit_handler(struct rethook_node *rh, void *data,
 {
 	struct fprobe *fp = (struct fprobe *)data;
 	struct fprobe_rethook_node *fpr;
+	int bit;
 
 	if (!fp || fprobe_disabled(fp))
 		return;
 
 	fpr = container_of(rh, struct fprobe_rethook_node, node);
 
+	/*
+	 * we need to assure no calls to traceable functions in-between the
+	 * end of fprobe_handler and the beginning of fprobe_exit_handler.
+	 */
+	bit = ftrace_test_recursion_trylock(fpr->entry_ip, fpr->entry_parent_ip);
+	if (bit < 0) {
+		fp->nmissed++;
+		return;
+	}
+
 	fp->exit_handler(fp, fpr->entry_ip, regs,
 			 fp->entry_data_size ? (void *)fpr->data : NULL);
+	ftrace_test_recursion_unlock(bit);
 }
 NOKPROBE_SYMBOL(fprobe_exit_handler);
 
-- 
2.40.1



* [PATCH v3 4/4] rethook, fprobe: do not trace rethook related functions
  2023-05-17  3:45 [PATCH v3 0/4] Make fprobe + rethook immune to recursion Ze Gao
                   ` (2 preceding siblings ...)
  2023-05-17  3:45 ` [PATCH v3 3/4] fprobe: add recursion detection in fprobe_exit_handler Ze Gao
@ 2023-05-17  3:45 ` Ze Gao
  3 siblings, 0 replies; 15+ messages in thread
From: Ze Gao @ 2023-05-17  3:45 UTC (permalink / raw)
  To: Steven Rostedt, Masami Hiramatsu
  Cc: Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

These functions are already marked as NOKPROBE to prevent recursion, and
we have the same reason to blacklist them when rethook is used with fprobe,
since they are beyond the recursion-free region that ftrace can guard.
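
For a single function the same effect is available via the notrace attribute;
removing CC_FLAGS_FTRACE from the object file, as done below, simply applies
that to everything in the file. A sketch with a made-up helper name:

/*
 * Per-function equivalent of stripping CC_FLAGS_FTRACE from the whole file;
 * my_rethook_helper() is only an illustrative name, not an existing symbol.
 */
static void notrace my_rethook_helper(struct pt_regs *regs)
{
	/* never instrumented by ftrace, so it cannot re-enter the handlers */
}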

Signed-off-by: Ze Gao <zegao@tencent.com>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-5-zegao@tencent.com
---
 arch/riscv/kernel/probes/Makefile | 2 ++
 arch/s390/kernel/Makefile         | 1 +
 arch/x86/kernel/Makefile          | 1 +
 3 files changed, 4 insertions(+)

diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
index c40139e9ca47..8265ff497977 100644
--- a/arch/riscv/kernel/probes/Makefile
+++ b/arch/riscv/kernel/probes/Makefile
@@ -4,3 +4,5 @@ obj-$(CONFIG_RETHOOK)		+= rethook.o rethook_trampoline.o
 obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o simulate-insn.o
 CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
+CFLAGS_REMOVE_rethook.o = $(CC_FLAGS_FTRACE)
+CFLAGS_REMOVE_rethook_trampoline.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/s390/kernel/Makefile b/arch/s390/kernel/Makefile
index 8983837b3565..6b2a051e1f8a 100644
--- a/arch/s390/kernel/Makefile
+++ b/arch/s390/kernel/Makefile
@@ -10,6 +10,7 @@ CFLAGS_REMOVE_ftrace.o		= $(CC_FLAGS_FTRACE)
 
 # Do not trace early setup code
 CFLAGS_REMOVE_early.o		= $(CC_FLAGS_FTRACE)
+CFLAGS_REMOVE_rethook.o		= $(CC_FLAGS_FTRACE)
 
 endif
 
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index dd61752f4c96..4070a01c11b7 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -17,6 +17,7 @@ CFLAGS_REMOVE_ftrace.o = -pg
 CFLAGS_REMOVE_early_printk.o = -pg
 CFLAGS_REMOVE_head64.o = -pg
 CFLAGS_REMOVE_sev.o = -pg
+CFLAGS_REMOVE_rethook.o = -pg
 endif
 
 KASAN_SANITIZE_head$(BITS).o				:= n
-- 
2.40.1



* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17  3:45 ` [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free Ze Gao
@ 2023-05-17 10:47   ` Jiri Olsa
  2023-05-17 11:42     ` Masami Hiramatsu
  2023-05-17 14:27   ` Masami Hiramatsu
  2023-06-28  7:16   ` Yafang Shao
  2 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2023-05-17 10:47 UTC (permalink / raw)
  To: Ze Gao, Masami Hiramatsu
  Cc: Steven Rostedt, Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Yonghong Song, Ze Gao

On Wed, May 17, 2023 at 11:45:07AM +0800, Ze Gao wrote:
> Current implementation calls kprobe related functions before doing
> ftrace recursion check in fprobe_kprobe_handler, which opens door
> to kernel crash due to stack recursion if preempt_count_{add, sub}
> is traceable in kprobe_busy_{begin, end}.
> 
> Things goes like this without this patch quoted from Steven:
> "
> fprobe_kprobe_handler() {
>    kprobe_busy_begin() {
>       preempt_disable() {
>          preempt_count_add() {  <-- trace
>             fprobe_kprobe_handler() {
> 		[ wash, rinse, repeat, CRASH!!! ]
> "
> 
> By refactoring the common part out of fprobe_kprobe_handler and
> fprobe_handler and call ftrace recursion detection at the very beginning,
> the whole fprobe_kprobe_handler is free from recursion.
> 
> Signed-off-by: Ze Gao <zegao@tencent.com>
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> ---
>  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
>  1 file changed, 44 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> index 9abb3905bc8e..097c740799ba 100644
> --- a/kernel/trace/fprobe.c
> +++ b/kernel/trace/fprobe.c
> @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
>  	char data[];
>  };
>  
> -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> -			   struct ftrace_ops *ops, struct ftrace_regs *fregs)
> +static inline void __fprobe_handler(unsigned long ip, unsigned long
> +		parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
>  {
>  	struct fprobe_rethook_node *fpr;
>  	struct rethook_node *rh = NULL;
>  	struct fprobe *fp;
>  	void *entry_data = NULL;
> -	int bit, ret;
> +	int ret;
>  

this change uncovered a bug for me, introduced by [1]

bpf's kprobe multi uses either fprobe's entry_handler or exit_handler,
so the 'ret' value is undefined for the return probe path and occasionally
we won't set up the rethook and will miss the return probe

we can either squash this change into your patch or I can make a separate
patch for that.. but given that [1] is quite recent we could just silently
fix that ;-)

jirka


[1] 39d954200bf6 fprobe: Skip exit_handler if entry_handler returns !0
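
for context, the tail of the handler that consumes 'ret' looks roughly like
this after [1] (paraphrased, not an exact copy of fprobe.c):

	if (fp->entry_handler)
		ret = fp->entry_handler(fp, ip, ftrace_get_regs(fregs), entry_data);

	/* with no entry_handler, 'ret' is whatever happened to be on the stack */
	if (rh) {
		if (ret)
			rethook_recycle(rh);	/* return probe silently dropped */
		else
			rethook_hook(rh, ftrace_get_regs(fregs), true);
	}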

---
diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
index 9abb3905bc8e..293184227394 100644
--- a/kernel/trace/fprobe.c
+++ b/kernel/trace/fprobe.c
@@ -27,7 +27,7 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
 	struct rethook_node *rh = NULL;
 	struct fprobe *fp;
 	void *entry_data = NULL;
-	int bit, ret;
+	int bit, ret = 0;
 
 	fp = container_of(ops, struct fprobe, ops);
 	if (fprobe_disabled(fp))




* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17 10:47   ` Jiri Olsa
@ 2023-05-17 11:42     ` Masami Hiramatsu
  2023-05-17 12:30       ` Jiri Olsa
  0 siblings, 1 reply; 15+ messages in thread
From: Masami Hiramatsu @ 2023-05-17 11:42 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Ze Gao, Steven Rostedt, Albert Ou, Alexander Gordeev,
	Alexei Starovoitov, Borislav Petkov, Christian Borntraeger,
	Dave Hansen, Heiko Carstens, H. Peter Anvin, Ingo Molnar,
	Palmer Dabbelt, Paul Walmsley, Sven Schnelle, Thomas Gleixner,
	Vasily Gorbik, x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Yonghong Song, Ze Gao

On Wed, 17 May 2023 12:47:42 +0200
Jiri Olsa <olsajiri@gmail.com> wrote:

> On Wed, May 17, 2023 at 11:45:07AM +0800, Ze Gao wrote:
> > Current implementation calls kprobe related functions before doing
> > ftrace recursion check in fprobe_kprobe_handler, which opens door
> > to kernel crash due to stack recursion if preempt_count_{add, sub}
> > is traceable in kprobe_busy_{begin, end}.
> > 
> > Things goes like this without this patch quoted from Steven:
> > "
> > fprobe_kprobe_handler() {
> >    kprobe_busy_begin() {
> >       preempt_disable() {
> >          preempt_count_add() {  <-- trace
> >             fprobe_kprobe_handler() {
> > 		[ wash, rinse, repeat, CRASH!!! ]
> > "
> > 
> > By refactoring the common part out of fprobe_kprobe_handler and
> > fprobe_handler and call ftrace recursion detection at the very beginning,
> > the whole fprobe_kprobe_handler is free from recursion.
> > 
> > Signed-off-by: Ze Gao <zegao@tencent.com>
> > Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> > ---
> >  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 44 insertions(+), 15 deletions(-)
> > 
> > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > index 9abb3905bc8e..097c740799ba 100644
> > --- a/kernel/trace/fprobe.c
> > +++ b/kernel/trace/fprobe.c
> > @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
> >  	char data[];
> >  };
> >  
> > -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > -			   struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +static inline void __fprobe_handler(unsigned long ip, unsigned long
> > +		parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
> >  {
> >  	struct fprobe_rethook_node *fpr;
> >  	struct rethook_node *rh = NULL;
> >  	struct fprobe *fp;
> >  	void *entry_data = NULL;
> > -	int bit, ret;
> > +	int ret;
> >  
> 
> this change uncovered bug for me introduced by [1]
> 
> the bpf's kprobe multi uses either fprobe's entry_handler or exit_handler,
> so the 'ret' value is undefined for return probe path and occasionally we
> won't setup rethook and miss the return probe

Oops, I forgot to push my fix.

https://lore.kernel.org/all/168100731160.79534.374827110083836722.stgit@devnote2/

> 
> we can either squash this change into your patch or I can make separate
> patch for that.. but given that [1] is quite recent we could just silently
> fix that ;-)

Jiri, I think the above will fix the issue, right?

> 
> jirka
> 
> 
> [1] 39d954200bf6 fprobe: Skip exit_handler if entry_handler returns !0
> 
> ---
> diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> index 9abb3905bc8e..293184227394 100644
> --- a/kernel/trace/fprobe.c
> +++ b/kernel/trace/fprobe.c
> @@ -27,7 +27,7 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
>  	struct rethook_node *rh = NULL;
>  	struct fprobe *fp;
>  	void *entry_data = NULL;
> -	int bit, ret;
> +	int bit, ret = 0;
>  
>  	fp = container_of(ops, struct fprobe, ops);
>  	if (fprobe_disabled(fp))
> 
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler
  2023-05-17  3:45 ` [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler Ze Gao
@ 2023-05-17 11:59   ` Masami Hiramatsu
  2023-05-18  2:40     ` Ze Gao
  0 siblings, 1 reply; 15+ messages in thread
From: Masami Hiramatsu @ 2023-05-17 11:59 UTC (permalink / raw)
  To: Ze Gao
  Cc: Steven Rostedt, Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao, stable

Hi Ze Gao,

On Wed, 17 May 2023 11:45:06 +0800
Ze Gao <zegao2021@gmail.com> wrote:

> This patch replaces preempt_{disable, enable} with its corresponding
> notrace version in rethook_trampoline_handler so no worries about stack
> recursion or overflow introduced by preempt_count_{add, sub} under
> fprobe + rethook context.
> 
> Fixes: 54ecbe6f1ed5 ("rethook: Add a generic return hook")
> Signed-off-by: Ze Gao <zegao@tencent.com>
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Cc: <stable@vger.kernel.org>
> Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-2-zegao@tencent.com

Note that you don't need to add a Link tag pointing to the previous version on each patch.
I'll add it when I pick it up :)

Thank you,

> ---
>  kernel/trace/rethook.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
> index 32c3dfdb4d6a..60f6cb2b486b 100644
> --- a/kernel/trace/rethook.c
> +++ b/kernel/trace/rethook.c
> @@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
>  	 * These loops must be protected from rethook_free_rcu() because those
>  	 * are accessing 'rhn->rethook'.
>  	 */
> -	preempt_disable();
> +	preempt_disable_notrace();
>  
>  	/*
>  	 * Run the handler on the shadow stack. Do not unlink the list here because
> @@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
>  		first = first->next;
>  		rethook_recycle(rhn);
>  	}
> -	preempt_enable();
> +	preempt_enable_notrace();
>  
>  	return correct_ret_addr;
>  }
> -- 
> 2.40.1
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17 11:42     ` Masami Hiramatsu
@ 2023-05-17 12:30       ` Jiri Olsa
  0 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2023-05-17 12:30 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Jiri Olsa, Ze Gao, Steven Rostedt, Albert Ou, Alexander Gordeev,
	Alexei Starovoitov, Borislav Petkov, Christian Borntraeger,
	Dave Hansen, Heiko Carstens, H. Peter Anvin, Ingo Molnar,
	Palmer Dabbelt, Paul Walmsley, Sven Schnelle, Thomas Gleixner,
	Vasily Gorbik, x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Yonghong Song, Ze Gao

On Wed, May 17, 2023 at 08:42:36PM +0900, Masami Hiramatsu wrote:
> On Wed, 17 May 2023 12:47:42 +0200
> Jiri Olsa <olsajiri@gmail.com> wrote:
> 
> > On Wed, May 17, 2023 at 11:45:07AM +0800, Ze Gao wrote:
> > > Current implementation calls kprobe related functions before doing
> > > ftrace recursion check in fprobe_kprobe_handler, which opens door
> > > to kernel crash due to stack recursion if preempt_count_{add, sub}
> > > is traceable in kprobe_busy_{begin, end}.
> > > 
> > > Things goes like this without this patch quoted from Steven:
> > > "
> > > fprobe_kprobe_handler() {
> > >    kprobe_busy_begin() {
> > >       preempt_disable() {
> > >          preempt_count_add() {  <-- trace
> > >             fprobe_kprobe_handler() {
> > > 		[ wash, rinse, repeat, CRASH!!! ]
> > > "
> > > 
> > > By refactoring the common part out of fprobe_kprobe_handler and
> > > fprobe_handler and call ftrace recursion detection at the very beginning,
> > > the whole fprobe_kprobe_handler is free from recursion.
> > > 
> > > Signed-off-by: Ze Gao <zegao@tencent.com>
> > > Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > > Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> > > ---
> > >  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
> > >  1 file changed, 44 insertions(+), 15 deletions(-)
> > > 
> > > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > > index 9abb3905bc8e..097c740799ba 100644
> > > --- a/kernel/trace/fprobe.c
> > > +++ b/kernel/trace/fprobe.c
> > > @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
> > >  	char data[];
> > >  };
> > >  
> > > -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > > -			   struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > > +static inline void __fprobe_handler(unsigned long ip, unsigned long
> > > +		parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > >  {
> > >  	struct fprobe_rethook_node *fpr;
> > >  	struct rethook_node *rh = NULL;
> > >  	struct fprobe *fp;
> > >  	void *entry_data = NULL;
> > > -	int bit, ret;
> > > +	int ret;
> > >  
> > 
> > this change uncovered bug for me introduced by [1]
> > 
> > the bpf's kprobe multi uses either fprobe's entry_handler or exit_handler,
> > so the 'ret' value is undefined for return probe path and occasionally we
> > won't setup rethook and miss the return probe
> 
> Oops, I missed to push my fix.
> 
> https://lore.kernel.org/all/168100731160.79534.374827110083836722.stgit@devnote2/
> 
> > 
> > we can either squash this change into your patch or I can make separate
> > patch for that.. but given that [1] is quite recent we could just silently
> > fix that ;-)
> 
> Jiri, I think the above will fix the issue, right?

yes, it's the same fix, great, thanks

jirka

> 
> > 
> > jirka
> > 
> > 
> > [1] 39d954200bf6 fprobe: Skip exit_handler if entry_handler returns !0
> > 
> > ---
> > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > index 9abb3905bc8e..293184227394 100644
> > --- a/kernel/trace/fprobe.c
> > +++ b/kernel/trace/fprobe.c
> > @@ -27,7 +27,7 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> >  	struct rethook_node *rh = NULL;
> >  	struct fprobe *fp;
> >  	void *entry_data = NULL;
> > -	int bit, ret;
> > +	int bit, ret = 0;
> >  
> >  	fp = container_of(ops, struct fprobe, ops);
> >  	if (fprobe_disabled(fp))
> > 
> > 
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17  3:45 ` [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free Ze Gao
  2023-05-17 10:47   ` Jiri Olsa
@ 2023-05-17 14:27   ` Masami Hiramatsu
  2023-05-18  0:16     ` Andrii Nakryiko
  2023-05-18  2:49     ` Ze Gao
  2023-06-28  7:16   ` Yafang Shao
  2 siblings, 2 replies; 15+ messages in thread
From: Masami Hiramatsu @ 2023-05-17 14:27 UTC (permalink / raw)
  To: Ze Gao
  Cc: Steven Rostedt, Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

On Wed, 17 May 2023 11:45:07 +0800
Ze Gao <zegao2021@gmail.com> wrote:

> Current implementation calls kprobe related functions before doing
> ftrace recursion check in fprobe_kprobe_handler, which opens door
> to kernel crash due to stack recursion if preempt_count_{add, sub}
> is traceable in kprobe_busy_{begin, end}.
> 
> Things goes like this without this patch quoted from Steven:
> "
> fprobe_kprobe_handler() {
>    kprobe_busy_begin() {
>       preempt_disable() {
>          preempt_count_add() {  <-- trace
>             fprobe_kprobe_handler() {
> 		[ wash, rinse, repeat, CRASH!!! ]
> "
> 
> By refactoring the common part out of fprobe_kprobe_handler and
> fprobe_handler and call ftrace recursion detection at the very beginning,
> the whole fprobe_kprobe_handler is free from recursion.
> 
> Signed-off-by: Ze Gao <zegao@tencent.com>
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> ---
>  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
>  1 file changed, 44 insertions(+), 15 deletions(-)
> 
> diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> index 9abb3905bc8e..097c740799ba 100644
> --- a/kernel/trace/fprobe.c
> +++ b/kernel/trace/fprobe.c
> @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
>  	char data[];
>  };
>  
> -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> -			   struct ftrace_ops *ops, struct ftrace_regs *fregs)
> +static inline void __fprobe_handler(unsigned long ip, unsigned long
> +		parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)

OK, I picked this series up into probes/fixes. Note that I fixed this line
because "unsigned long parent_ip" was split across 2 lines.

Thank you,


>  {
>  	struct fprobe_rethook_node *fpr;
>  	struct rethook_node *rh = NULL;
>  	struct fprobe *fp;
>  	void *entry_data = NULL;
> -	int bit, ret;
> +	int ret;
>  
>  	fp = container_of(ops, struct fprobe, ops);
> -	if (fprobe_disabled(fp))
> -		return;
> -
> -	bit = ftrace_test_recursion_trylock(ip, parent_ip);
> -	if (bit < 0) {
> -		fp->nmissed++;
> -		return;
> -	}
>  
>  	if (fp->exit_handler) {
>  		rh = rethook_try_get(fp->rethook);
>  		if (!rh) {
>  			fp->nmissed++;
> -			goto out;
> +			return;
>  		}
>  		fpr = container_of(rh, struct fprobe_rethook_node, node);
>  		fpr->entry_ip = ip;
> @@ -61,23 +53,60 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
>  		else
>  			rethook_hook(rh, ftrace_get_regs(fregs), true);
>  	}
> -out:
> +}
> +
> +static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> +		struct ftrace_ops *ops, struct ftrace_regs *fregs)
> +{
> +	struct fprobe *fp;
> +	int bit;
> +
> +	fp = container_of(ops, struct fprobe, ops);
> +	if (fprobe_disabled(fp))
> +		return;
> +
> +	/* recursion detection has to go before any traceable function and
> +	 * all functions before this point should be marked as notrace
> +	 */
> +	bit = ftrace_test_recursion_trylock(ip, parent_ip);
> +	if (bit < 0) {
> +		fp->nmissed++;
> +		return;
> +	}
> +	__fprobe_handler(ip, parent_ip, ops, fregs);
>  	ftrace_test_recursion_unlock(bit);
> +
>  }
>  NOKPROBE_SYMBOL(fprobe_handler);
>  
>  static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
>  				  struct ftrace_ops *ops, struct ftrace_regs *fregs)
>  {
> -	struct fprobe *fp = container_of(ops, struct fprobe, ops);
> +	struct fprobe *fp;
> +	int bit;
> +
> +	fp = container_of(ops, struct fprobe, ops);
> +	if (fprobe_disabled(fp))
> +		return;
> +
> +	/* recursion detection has to go before any traceable function and
> +	 * all functions called before this point should be marked as notrace
> +	 */
> +	bit = ftrace_test_recursion_trylock(ip, parent_ip);
> +	if (bit < 0) {
> +		fp->nmissed++;
> +		return;
> +	}
>  
>  	if (unlikely(kprobe_running())) {
>  		fp->nmissed++;
>  		return;
>  	}
> +
>  	kprobe_busy_begin();
> -	fprobe_handler(ip, parent_ip, ops, fregs);
> +	__fprobe_handler(ip, parent_ip, ops, fregs);
>  	kprobe_busy_end();
> +	ftrace_test_recursion_unlock(bit);
>  }
>  
>  static void fprobe_exit_handler(struct rethook_node *rh, void *data,
> -- 
> 2.40.1
> 


-- 
Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17 14:27   ` Masami Hiramatsu
@ 2023-05-18  0:16     ` Andrii Nakryiko
  2023-05-18  2:49     ` Ze Gao
  1 sibling, 0 replies; 15+ messages in thread
From: Andrii Nakryiko @ 2023-05-18  0:16 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Ze Gao, Steven Rostedt, Albert Ou, Alexander Gordeev,
	Alexei Starovoitov, Borislav Petkov, Christian Borntraeger,
	Dave Hansen, Heiko Carstens, H. Peter Anvin, Ingo Molnar,
	Palmer Dabbelt, Paul Walmsley, Sven Schnelle, Thomas Gleixner,
	Vasily Gorbik, x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

On Wed, May 17, 2023 at 7:28 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Wed, 17 May 2023 11:45:07 +0800
> Ze Gao <zegao2021@gmail.com> wrote:
>
> > Current implementation calls kprobe related functions before doing
> > ftrace recursion check in fprobe_kprobe_handler, which opens door
> > to kernel crash due to stack recursion if preempt_count_{add, sub}
> > is traceable in kprobe_busy_{begin, end}.
> >
> > Things goes like this without this patch quoted from Steven:
> > "
> > fprobe_kprobe_handler() {
> >    kprobe_busy_begin() {
> >       preempt_disable() {
> >          preempt_count_add() {  <-- trace
> >             fprobe_kprobe_handler() {
> >               [ wash, rinse, repeat, CRASH!!! ]
> > "
> >
> > By refactoring the common part out of fprobe_kprobe_handler and
> > fprobe_handler and call ftrace recursion detection at the very beginning,
> > the whole fprobe_kprobe_handler is free from recursion.
> >
> > Signed-off-by: Ze Gao <zegao@tencent.com>
> > Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> > ---
> >  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 44 insertions(+), 15 deletions(-)
> >
> > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > index 9abb3905bc8e..097c740799ba 100644
> > --- a/kernel/trace/fprobe.c
> > +++ b/kernel/trace/fprobe.c
> > @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
> >       char data[];
> >  };
> >
> > -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > -                        struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +static inline void __fprobe_handler(unsigned long ip, unsigned long
> > +             parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
>
> OK, I picked up this series to probes/fixes. Note that I fixed this line
> because the "unsigned long parent_ip" was split into 2 lines.
>

Hey Masami,

Regarding [0], I was bisecting BPF CI failures related to
multi-kprobes, and it turned out that [0] is the fix we need. It would
be great if you can make sure this fix gets into Linus' tree ASAP, so
that we can get it back into bpf/bpf-next trees and fix BPF selftests
for everyone (we mitigated this for BPF CI as a temporary workaround
for now). Thanks!

  [0] https://lore.kernel.org/all/168100731160.79534.374827110083836722.stgit@devnote2/


> Thank you,
>
>
> >  {
> >       struct fprobe_rethook_node *fpr;
> >       struct rethook_node *rh = NULL;
> >       struct fprobe *fp;
> >       void *entry_data = NULL;
> > -     int bit, ret;
> > +     int ret;
> >
> >       fp = container_of(ops, struct fprobe, ops);
> > -     if (fprobe_disabled(fp))
> > -             return;
> > -
> > -     bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > -     if (bit < 0) {
> > -             fp->nmissed++;
> > -             return;
> > -     }
> >
> >       if (fp->exit_handler) {
> >               rh = rethook_try_get(fp->rethook);
> >               if (!rh) {
> >                       fp->nmissed++;
> > -                     goto out;
> > +                     return;
> >               }
> >               fpr = container_of(rh, struct fprobe_rethook_node, node);
> >               fpr->entry_ip = ip;
> > @@ -61,23 +53,60 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> >               else
> >                       rethook_hook(rh, ftrace_get_regs(fregs), true);
> >       }
> > -out:
> > +}
> > +
> > +static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > +             struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +{
> > +     struct fprobe *fp;
> > +     int bit;
> > +
> > +     fp = container_of(ops, struct fprobe, ops);
> > +     if (fprobe_disabled(fp))
> > +             return;
> > +
> > +     /* recursion detection has to go before any traceable function and
> > +      * all functions before this point should be marked as notrace
> > +      */
> > +     bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > +     if (bit < 0) {
> > +             fp->nmissed++;
> > +             return;
> > +     }
> > +     __fprobe_handler(ip, parent_ip, ops, fregs);
> >       ftrace_test_recursion_unlock(bit);
> > +
> >  }
> >  NOKPROBE_SYMBOL(fprobe_handler);
> >
> >  static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
> >                                 struct ftrace_ops *ops, struct ftrace_regs *fregs)
> >  {
> > -     struct fprobe *fp = container_of(ops, struct fprobe, ops);
> > +     struct fprobe *fp;
> > +     int bit;
> > +
> > +     fp = container_of(ops, struct fprobe, ops);
> > +     if (fprobe_disabled(fp))
> > +             return;
> > +
> > +     /* recursion detection has to go before any traceable function and
> > +      * all functions called before this point should be marked as notrace
> > +      */
> > +     bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > +     if (bit < 0) {
> > +             fp->nmissed++;
> > +             return;
> > +     }
> >
> >       if (unlikely(kprobe_running())) {
> >               fp->nmissed++;
> >               return;
> >       }
> > +
> >       kprobe_busy_begin();
> > -     fprobe_handler(ip, parent_ip, ops, fregs);
> > +     __fprobe_handler(ip, parent_ip, ops, fregs);
> >       kprobe_busy_end();
> > +     ftrace_test_recursion_unlock(bit);
> >  }
> >
> >  static void fprobe_exit_handler(struct rethook_node *rh, void *data,
> > --
> > 2.40.1
> >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>
>


* Re: [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler
  2023-05-17 11:59   ` Masami Hiramatsu
@ 2023-05-18  2:40     ` Ze Gao
  0 siblings, 0 replies; 15+ messages in thread
From: Ze Gao @ 2023-05-18  2:40 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Steven Rostedt, Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao, stable

Great, thanks!

Regards,
Ze

On Wed, May 17, 2023 at 7:59 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> Hi Ze Gao,
>
> On Wed, 17 May 2023 11:45:06 +0800
> Ze Gao <zegao2021@gmail.com> wrote:
>
> > This patch replaces preempt_{disable, enable} with its corresponding
> > notrace version in rethook_trampoline_handler so no worries about stack
> > recursion or overflow introduced by preempt_count_{add, sub} under
> > fprobe + rethook context.
> >
> > Fixes: 54ecbe6f1ed5 ("rethook: Add a generic return hook")
> > Signed-off-by: Ze Gao <zegao@tencent.com>
> > Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > Cc: <stable@vger.kernel.org>
> > Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-2-zegao@tencent.com
>
> Note that you don't need to add Link tag of the previous version for each patch.
> I'll add it when I pick it :)
>
> Thank you,
>
> > ---
> >  kernel/trace/rethook.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/kernel/trace/rethook.c b/kernel/trace/rethook.c
> > index 32c3dfdb4d6a..60f6cb2b486b 100644
> > --- a/kernel/trace/rethook.c
> > +++ b/kernel/trace/rethook.c
> > @@ -288,7 +288,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> >        * These loops must be protected from rethook_free_rcu() because those
> >        * are accessing 'rhn->rethook'.
> >        */
> > -     preempt_disable();
> > +     preempt_disable_notrace();
> >
> >       /*
> >        * Run the handler on the shadow stack. Do not unlink the list here because
> > @@ -321,7 +321,7 @@ unsigned long rethook_trampoline_handler(struct pt_regs *regs,
> >               first = first->next;
> >               rethook_recycle(rhn);
> >       }
> > -     preempt_enable();
> > +     preempt_enable_notrace();
> >
> >       return correct_ret_addr;
> >  }
> > --
> > 2.40.1
> >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17 14:27   ` Masami Hiramatsu
  2023-05-18  0:16     ` Andrii Nakryiko
@ 2023-05-18  2:49     ` Ze Gao
  1 sibling, 0 replies; 15+ messages in thread
From: Ze Gao @ 2023-05-18  2:49 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Steven Rostedt, Albert Ou, Alexander Gordeev, Alexei Starovoitov,
	Borislav Petkov, Christian Borntraeger, Dave Hansen,
	Heiko Carstens, H. Peter Anvin, Ingo Molnar, Palmer Dabbelt,
	Paul Walmsley, Sven Schnelle, Thomas Gleixner, Vasily Gorbik,
	x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

Glad to hear that, hooray!  :)

Thanks
Ze

On Wed, May 17, 2023 at 10:27 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Wed, 17 May 2023 11:45:07 +0800
> Ze Gao <zegao2021@gmail.com> wrote:
>
> > Current implementation calls kprobe related functions before doing
> > ftrace recursion check in fprobe_kprobe_handler, which opens door
> > to kernel crash due to stack recursion if preempt_count_{add, sub}
> > is traceable in kprobe_busy_{begin, end}.
> >
> > Things goes like this without this patch quoted from Steven:
> > "
> > fprobe_kprobe_handler() {
> >    kprobe_busy_begin() {
> >       preempt_disable() {
> >          preempt_count_add() {  <-- trace
> >             fprobe_kprobe_handler() {
> >               [ wash, rinse, repeat, CRASH!!! ]
> > "
> >
> > By refactoring the common part out of fprobe_kprobe_handler and
> > fprobe_handler and call ftrace recursion detection at the very beginning,
> > the whole fprobe_kprobe_handler is free from recursion.
> >
> > Signed-off-by: Ze Gao <zegao@tencent.com>
> > Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> > ---
> >  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 44 insertions(+), 15 deletions(-)
> >
> > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > index 9abb3905bc8e..097c740799ba 100644
> > --- a/kernel/trace/fprobe.c
> > +++ b/kernel/trace/fprobe.c
> > @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
> >       char data[];
> >  };
> >
> > -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > -                        struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +static inline void __fprobe_handler(unsigned long ip, unsigned long
> > +             parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
>
> OK, I picked up this series to probes/fixes. Note that I fixed this line
> because the "unsigned long parent_ip" was split into 2 lines.
>
> Thank you,
>
>
> >  {
> >       struct fprobe_rethook_node *fpr;
> >       struct rethook_node *rh = NULL;
> >       struct fprobe *fp;
> >       void *entry_data = NULL;
> > -     int bit, ret;
> > +     int ret;
> >
> >       fp = container_of(ops, struct fprobe, ops);
> > -     if (fprobe_disabled(fp))
> > -             return;
> > -
> > -     bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > -     if (bit < 0) {
> > -             fp->nmissed++;
> > -             return;
> > -     }
> >
> >       if (fp->exit_handler) {
> >               rh = rethook_try_get(fp->rethook);
> >               if (!rh) {
> >                       fp->nmissed++;
> > -                     goto out;
> > +                     return;
> >               }
> >               fpr = container_of(rh, struct fprobe_rethook_node, node);
> >               fpr->entry_ip = ip;
> > @@ -61,23 +53,60 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> >               else
> >                       rethook_hook(rh, ftrace_get_regs(fregs), true);
> >       }
> > -out:
> > +}
> > +
> > +static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > +             struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +{
> > +     struct fprobe *fp;
> > +     int bit;
> > +
> > +     fp = container_of(ops, struct fprobe, ops);
> > +     if (fprobe_disabled(fp))
> > +             return;
> > +
> > +     /* recursion detection has to go before any traceable function and
> > +      * all functions before this point should be marked as notrace
> > +      */
> > +     bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > +     if (bit < 0) {
> > +             fp->nmissed++;
> > +             return;
> > +     }
> > +     __fprobe_handler(ip, parent_ip, ops, fregs);
> >       ftrace_test_recursion_unlock(bit);
> > +
> >  }
> >  NOKPROBE_SYMBOL(fprobe_handler);
> >
> >  static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
> >                                 struct ftrace_ops *ops, struct ftrace_regs *fregs)
> >  {
> > -     struct fprobe *fp = container_of(ops, struct fprobe, ops);
> > +     struct fprobe *fp;
> > +     int bit;
> > +
> > +     fp = container_of(ops, struct fprobe, ops);
> > +     if (fprobe_disabled(fp))
> > +             return;
> > +
> > +     /* recursion detection has to go before any traceable function and
> > +      * all functions called before this point should be marked as notrace
> > +      */
> > +     bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > +     if (bit < 0) {
> > +             fp->nmissed++;
> > +             return;
> > +     }
> >
> >       if (unlikely(kprobe_running())) {
> >               fp->nmissed++;
> >               return;
> >       }
> > +
> >       kprobe_busy_begin();
> > -     fprobe_handler(ip, parent_ip, ops, fregs);
> > +     __fprobe_handler(ip, parent_ip, ops, fregs);
> >       kprobe_busy_end();
> > +     ftrace_test_recursion_unlock(bit);
> >  }
> >
> >  static void fprobe_exit_handler(struct rethook_node *rh, void *data,
> > --
> > 2.40.1
> >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@kernel.org>


* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-05-17  3:45 ` [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free Ze Gao
  2023-05-17 10:47   ` Jiri Olsa
  2023-05-17 14:27   ` Masami Hiramatsu
@ 2023-06-28  7:16   ` Yafang Shao
  2023-07-03  6:52     ` Ze Gao
  2 siblings, 1 reply; 15+ messages in thread
From: Yafang Shao @ 2023-06-28  7:16 UTC (permalink / raw)
  To: Ze Gao
  Cc: Steven Rostedt, Masami Hiramatsu, Albert Ou, Alexander Gordeev,
	Alexei Starovoitov, Borislav Petkov, Christian Borntraeger,
	Dave Hansen, Heiko Carstens, H. Peter Anvin, Ingo Molnar,
	Palmer Dabbelt, Paul Walmsley, Sven Schnelle, Thomas Gleixner,
	Vasily Gorbik, x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

On Wed, May 17, 2023 at 11:45 AM Ze Gao <zegao2021@gmail.com> wrote:
>
> Current implementation calls kprobe related functions before doing
> ftrace recursion check in fprobe_kprobe_handler, which opens door
> to kernel crash due to stack recursion if preempt_count_{add, sub}
> is traceable in kprobe_busy_{begin, end}.
>
> Things goes like this without this patch quoted from Steven:
> "
> fprobe_kprobe_handler() {
>    kprobe_busy_begin() {
>       preempt_disable() {
>          preempt_count_add() {  <-- trace
>             fprobe_kprobe_handler() {
>                 [ wash, rinse, repeat, CRASH!!! ]
> "
>
> By refactoring the common part out of fprobe_kprobe_handler and
> fprobe_handler and call ftrace recursion detection at the very beginning,
> the whole fprobe_kprobe_handler is free from recursion.
>
> Signed-off-by: Ze Gao <zegao@tencent.com>
> Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> ---
>  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
>  1 file changed, 44 insertions(+), 15 deletions(-)
>
> diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> index 9abb3905bc8e..097c740799ba 100644
> --- a/kernel/trace/fprobe.c
> +++ b/kernel/trace/fprobe.c
> @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
>         char data[];
>  };
>
> -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> -                          struct ftrace_ops *ops, struct ftrace_regs *fregs)
> +static inline void __fprobe_handler(unsigned long ip, unsigned long
> +               parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
>  {
>         struct fprobe_rethook_node *fpr;
>         struct rethook_node *rh = NULL;
>         struct fprobe *fp;
>         void *entry_data = NULL;
> -       int bit, ret;
> +       int ret;
>
>         fp = container_of(ops, struct fprobe, ops);
> -       if (fprobe_disabled(fp))
> -               return;
> -
> -       bit = ftrace_test_recursion_trylock(ip, parent_ip);
> -       if (bit < 0) {
> -               fp->nmissed++;
> -               return;
> -       }
>
>         if (fp->exit_handler) {
>                 rh = rethook_try_get(fp->rethook);
>                 if (!rh) {
>                         fp->nmissed++;
> -                       goto out;
> +                       return;
>                 }
>                 fpr = container_of(rh, struct fprobe_rethook_node, node);
>                 fpr->entry_ip = ip;
> @@ -61,23 +53,60 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
>                 else
>                         rethook_hook(rh, ftrace_get_regs(fregs), true);
>         }
> -out:
> +}
> +
> +static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> +               struct ftrace_ops *ops, struct ftrace_regs *fregs)
> +{
> +       struct fprobe *fp;
> +       int bit;
> +
> +       fp = container_of(ops, struct fprobe, ops);
> +       if (fprobe_disabled(fp))
> +               return;
> +
> +       /* recursion detection has to go before any traceable function and
> +        * all functions before this point should be marked as notrace
> +        */
> +       bit = ftrace_test_recursion_trylock(ip, parent_ip);
> +       if (bit < 0) {
> +               fp->nmissed++;
> +               return;
> +       }
> +       __fprobe_handler(ip, parent_ip, ops, fregs);
>         ftrace_test_recursion_unlock(bit);
> +
>  }
>  NOKPROBE_SYMBOL(fprobe_handler);
>
>  static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
>                                   struct ftrace_ops *ops, struct ftrace_regs *fregs)
>  {
> -       struct fprobe *fp = container_of(ops, struct fprobe, ops);
> +       struct fprobe *fp;
> +       int bit;
> +
> +       fp = container_of(ops, struct fprobe, ops);
> +       if (fprobe_disabled(fp))
> +               return;
> +
> +       /* recursion detection has to go before any traceable function and
> +        * all functions called before this point should be marked as notrace
> +        */
> +       bit = ftrace_test_recursion_trylock(ip, parent_ip);
> +       if (bit < 0) {
> +               fp->nmissed++;
> +               return;
> +       }
>
>         if (unlikely(kprobe_running())) {
>                 fp->nmissed++;

I have just looked through this patchset and, just out of curiosity,
shouldn't we call ftrace_test_recursion_unlock(bit) here?
We have already taken the lock successfully, so why should we not unlock it?

>                 return;
>         }
> +
>         kprobe_busy_begin();
> -       fprobe_handler(ip, parent_ip, ops, fregs);
> +       __fprobe_handler(ip, parent_ip, ops, fregs);
>         kprobe_busy_end();
> +       ftrace_test_recursion_unlock(bit);
>  }
>
>  static void fprobe_exit_handler(struct rethook_node *rh, void *data,
> --
> 2.40.1
>
>


-- 
Regards
Yafang


* Re: [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free
  2023-06-28  7:16   ` Yafang Shao
@ 2023-07-03  6:52     ` Ze Gao
  0 siblings, 0 replies; 15+ messages in thread
From: Ze Gao @ 2023-07-03  6:52 UTC (permalink / raw)
  To: Yafang Shao
  Cc: Steven Rostedt, Masami Hiramatsu, Albert Ou, Alexander Gordeev,
	Alexei Starovoitov, Borislav Petkov, Christian Borntraeger,
	Dave Hansen, Heiko Carstens, H. Peter Anvin, Ingo Molnar,
	Palmer Dabbelt, Paul Walmsley, Sven Schnelle, Thomas Gleixner,
	Vasily Gorbik, x86, bpf, linux-kernel, linux-riscv, linux-s390,
	linux-trace-kernel, Conor Dooley, Jiri Olsa, Yonghong Song,
	Ze Gao

Hi, Yafang.

You're right, it should do the unlock before returning, for the sake of
sanity. (Please ignore the last misleading reply :)

Will send a new patch to fix it.
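
Presumably something along these lines (just a sketch of the intent with an
illustrative label name, not the actual follow-up patch):

	if (unlikely(kprobe_running())) {
		fp->nmissed++;
		goto recursion_unlock;	/* drop the recursion lock we already hold */
	}

	kprobe_busy_begin();
	__fprobe_handler(ip, parent_ip, ops, fregs);
	kprobe_busy_end();

recursion_unlock:
	ftrace_test_recursion_unlock(bit);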

Thanks
Ze

On Wed, Jun 28, 2023 at 3:17 PM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> On Wed, May 17, 2023 at 11:45 AM Ze Gao <zegao2021@gmail.com> wrote:
> >
> > Current implementation calls kprobe related functions before doing
> > ftrace recursion check in fprobe_kprobe_handler, which opens door
> > to kernel crash due to stack recursion if preempt_count_{add, sub}
> > is traceable in kprobe_busy_{begin, end}.
> >
> > Things goes like this without this patch quoted from Steven:
> > "
> > fprobe_kprobe_handler() {
> >    kprobe_busy_begin() {
> >       preempt_disable() {
> >          preempt_count_add() {  <-- trace
> >             fprobe_kprobe_handler() {
> >                 [ wash, rinse, repeat, CRASH!!! ]
> > "
> >
> > By refactoring the common part out of fprobe_kprobe_handler and
> > fprobe_handler and call ftrace recursion detection at the very beginning,
> > the whole fprobe_kprobe_handler is free from recursion.
> >
> > Signed-off-by: Ze Gao <zegao@tencent.com>
> > Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> > Link: https://lore.kernel.org/linux-trace-kernel/20230516071830.8190-3-zegao@tencent.com
> > ---
> >  kernel/trace/fprobe.c | 59 ++++++++++++++++++++++++++++++++-----------
> >  1 file changed, 44 insertions(+), 15 deletions(-)
> >
> > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > index 9abb3905bc8e..097c740799ba 100644
> > --- a/kernel/trace/fprobe.c
> > +++ b/kernel/trace/fprobe.c
> > @@ -20,30 +20,22 @@ struct fprobe_rethook_node {
> >         char data[];
> >  };
> >
> > -static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > -                          struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +static inline void __fprobe_handler(unsigned long ip, unsigned long
> > +               parent_ip, struct ftrace_ops *ops, struct ftrace_regs *fregs)
> >  {
> >         struct fprobe_rethook_node *fpr;
> >         struct rethook_node *rh = NULL;
> >         struct fprobe *fp;
> >         void *entry_data = NULL;
> > -       int bit, ret;
> > +       int ret;
> >
> >         fp = container_of(ops, struct fprobe, ops);
> > -       if (fprobe_disabled(fp))
> > -               return;
> > -
> > -       bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > -       if (bit < 0) {
> > -               fp->nmissed++;
> > -               return;
> > -       }
> >
> >         if (fp->exit_handler) {
> >                 rh = rethook_try_get(fp->rethook);
> >                 if (!rh) {
> >                         fp->nmissed++;
> > -                       goto out;
> > +                       return;
> >                 }
> >                 fpr = container_of(rh, struct fprobe_rethook_node, node);
> >                 fpr->entry_ip = ip;
> > @@ -61,23 +53,60 @@ static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> >                 else
> >                         rethook_hook(rh, ftrace_get_regs(fregs), true);
> >         }
> > -out:
> > +}
> > +
> > +static void fprobe_handler(unsigned long ip, unsigned long parent_ip,
> > +               struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +{
> > +       struct fprobe *fp;
> > +       int bit;
> > +
> > +       fp = container_of(ops, struct fprobe, ops);
> > +       if (fprobe_disabled(fp))
> > +               return;
> > +
> > +       /* recursion detection has to go before any traceable function and
> > +        * all functions before this point should be marked as notrace
> > +        */
> > +       bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > +       if (bit < 0) {
> > +               fp->nmissed++;
> > +               return;
> > +       }
> > +       __fprobe_handler(ip, parent_ip, ops, fregs);
> >         ftrace_test_recursion_unlock(bit);
> > +
> >  }
> >  NOKPROBE_SYMBOL(fprobe_handler);
> >
> >  static void fprobe_kprobe_handler(unsigned long ip, unsigned long parent_ip,
> >                                   struct ftrace_ops *ops, struct ftrace_regs *fregs)
> >  {
> > -       struct fprobe *fp = container_of(ops, struct fprobe, ops);
> > +       struct fprobe *fp;
> > +       int bit;
> > +
> > +       fp = container_of(ops, struct fprobe, ops);
> > +       if (fprobe_disabled(fp))
> > +               return;
> > +
> > +       /* recursion detection has to go before any traceable function and
> > +        * all functions called before this point should be marked as notrace
> > +        */
> > +       bit = ftrace_test_recursion_trylock(ip, parent_ip);
> > +       if (bit < 0) {
> > +               fp->nmissed++;
> > +               return;
> > +       }
> >
> >         if (unlikely(kprobe_running())) {
> >                 fp->nmissed++;
>
> I have just looked through this patchset, just out of curiosity,
> shouldn't we call ftrace_test_recursion_unlock(bit) here ?
> We have already locked it successfully, so why should we not unlock it?
>
> >                 return;
> >         }
> > +
> >         kprobe_busy_begin();
> > -       fprobe_handler(ip, parent_ip, ops, fregs);
> > +       __fprobe_handler(ip, parent_ip, ops, fregs);
> >         kprobe_busy_end();
> > +       ftrace_test_recursion_unlock(bit);
> >  }
> >
> >  static void fprobe_exit_handler(struct rethook_node *rh, void *data,
> > --
> > 2.40.1
> >
> >
>
>
> --
> Regards
> Yafang


end of thread

Thread overview: 15+ messages
2023-05-17  3:45 [PATCH v3 0/4] Make fprobe + rethook immune to recursion Ze Gao
2023-05-17  3:45 ` [PATCH v3 1/4] rethook: use preempt_{disable, enable}_notrace in rethook_trampoline_handler Ze Gao
2023-05-17 11:59   ` Masami Hiramatsu
2023-05-18  2:40     ` Ze Gao
2023-05-17  3:45 ` [PATCH v3 2/4] fprobe: make fprobe_kprobe_handler recursion free Ze Gao
2023-05-17 10:47   ` Jiri Olsa
2023-05-17 11:42     ` Masami Hiramatsu
2023-05-17 12:30       ` Jiri Olsa
2023-05-17 14:27   ` Masami Hiramatsu
2023-05-18  0:16     ` Andrii Nakryiko
2023-05-18  2:49     ` Ze Gao
2023-06-28  7:16   ` Yafang Shao
2023-07-03  6:52     ` Ze Gao
2023-05-17  3:45 ` [PATCH v3 3/4] fprobe: add recursion detection in fprobe_exit_handler Ze Gao
2023-05-17  3:45 ` [PATCH v3 4/4] rethook, fprobe: do not trace rethook related functions Ze Gao
