linux-arm-kernel.lists.infradead.org archive mirror
* [PATCH v7 0/3] arm64: implement KPROBES_ON_FTRACE
@ 2019-12-25  9:40 Jisheng Zhang
  2019-12-25  9:42 ` [PATCH v7 1/3] kprobes/ftrace: Use ftrace_location() when [dis]arming probes Jisheng Zhang
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Jisheng Zhang @ 2019-12-25  9:40 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Steven Rostedt, Ingo Molnar,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Mark Rutland, Jonathan Corbet
  Cc: linux-kernel, linux-arm-kernel, linux-doc

KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes, as it
eliminates the need for a trap, as well as the need to emulate or
single-step instructions.

arm64 makes use of GCC's -fpatchable-function-entry=2 option to insert
two nops at function entry. When the function is traced, the first nop is
modified into the LR saver, then the second nop into "bl <ftrace-entry>".
We need to allow kprobes on either of these two instructions.
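
For illustration, the patched function entry then looks roughly like this
(a sketch for reference only, based on the current arm64 FTRACE_WITH_REGS
code):

  not traced:                     traced:
  foo:                            foo:
          nop                             mov  x9, x30        // LR saver
          nop                             bl   <ftrace-entry>
          <function body>                 <function body>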

patch1 uses ftrace_location() when [dis]arming probes.
patch2 introduces FTRACE_IP_EXTENSION to let ftrace_location()
recognise these two instructions as being part of ftrace.
patch3 implements KPROBES_ON_FTRACE for arm64.

Changes since v6:
  - add patch1 and patch2
  - fix the automatic offset as pointed out by Masami

Changes since v5:
  - rebase v5.5-rc1
  - collect Acked-by and Reviewed-by tags

Changes since v4:
  - correct regs->pc: probed on foo, the pre_handler sees foo+0x4, while
    the post_handler sees foo+0x8

Changes since v3:
  - move kprobe_lookup_name() and arch_kprobe_on_func_entry to ftrace.c since
    we only want to choose the ftrace entry for KPROBES_ON_FTRACE.
  - only choose ftrace entry if (addr && !offset)

Changes since v2:
  - remove patch1, make it a single cleanup patch
  - remove "This patch" in the change log
  - implement arm64's kprobe_lookup_name() and arch_kprobe_on_func_entry instead
    of patching the common kprobes code

Changes since v1:
  - turn the kprobes/x86 change (use instruction_pointer() and
    instruction_pointer_set()) into patch1
  - add Masami's ACK to patch1
  - add some description about KPROBES_ON_FTRACE and why we need it on
    arm64
  - correct the log before the patch
  - remove the consolidation patch, make it as TODO
  - only adjust kprobe's addr when KPROBE_FLAG_FTRACE is set
  - if KPROBES_ON_FTRACE, ftrace_call_adjust() the kprobe's addr before
    calling ftrace_location()
  - update the kprobes-on-ftrace/arch-support.txt in doc

Jisheng Zhang (2):
  ftrace: introduce FTRACE_IP_EXTENSION
  arm64: implement KPROBES_ON_FTRACE

Naveen N. Rao (1):
  kprobes/ftrace: Use ftrace_location() when [dis]arming probes

 .../debug/kprobes-on-ftrace/arch-support.txt  |  2 +-
 arch/arm64/Kconfig                            |  1 +
 arch/arm64/include/asm/ftrace.h               |  1 +
 arch/arm64/kernel/probes/Makefile             |  1 +
 arch/arm64/kernel/probes/ftrace.c             | 78 +++++++++++++++++++
 include/linux/ftrace.h                        |  4 +
 kernel/kprobes.c                              |  8 +-
 kernel/trace/ftrace.c                         |  2 +-
 8 files changed, 92 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm64/kernel/probes/ftrace.c

-- 
2.24.1


* [PATCH v7 1/3] kprobes/ftrace: Use ftrace_location() when [dis]arming probes
  2019-12-25  9:40 [PATCH v7 0/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
@ 2019-12-25  9:42 ` Jisheng Zhang
  2019-12-25  9:46   ` Jisheng Zhang
  2019-12-25  9:42 ` [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION Jisheng Zhang
  2019-12-25  9:44 ` [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
  2 siblings, 1 reply; 15+ messages in thread
From: Jisheng Zhang @ 2019-12-25  9:42 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Steven Rostedt, Ingo Molnar,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Mark Rutland, Jonathan Corbet
  Cc: linux-kernel, linux-arm-kernel, linux-doc

The ftrace location can include more than a single instruction on some
architectures (powerpc64, for now). In that case, a kprobe is permitted
on any of those instructions, and uses the ftrace infrastructure for
functioning.

However, [dis]arm_kprobe_ftrace() uses the kprobe address when setting up
the ftrace filter IP. This won't work if the address points to any
instruction apart from the one that has a branch to _mcount(). To resolve
this, have [dis]arm_kprobe_ftrace() use ftrace_location() to identify the
filter IP.
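
For example, on powerpc64 the traced function entry consists of two
instructions, roughly (a sketch for illustration; see also the powerpc
discussion referenced later in this thread):

  func+0:  mflr r0
  func+4:  bl   _mcount     <- the address ftrace_location() returns

so a kprobe placed at func+0 must be translated via ftrace_location()
before being handed to ftrace_set_filter_ip().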

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 kernel/kprobes.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 53534aa258a6..5c630b424e3a 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -986,9 +986,10 @@ static int prepare_kprobe(struct kprobe *p)
 static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
 			       int *cnt)
 {
+	unsigned long ftrace_ip = ftrace_location((unsigned long)p->addr);
 	int ret = 0;
 
-	ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 0, 0);
+	ret = ftrace_set_filter_ip(ops, ftrace_ip, 0, 0);
 	if (ret) {
 		pr_debug("Failed to arm kprobe-ftrace at %pS (%d)\n",
 			 p->addr, ret);
@@ -1011,7 +1012,7 @@ static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
 	 * At this point, since ops is not registered, we should be safe from
 	 * registering empty filter.
 	 */
-	ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
+	ftrace_set_filter_ip(ops, ftrace_ip, 1, 0);
 	return ret;
 }
 
@@ -1028,6 +1029,7 @@ static int arm_kprobe_ftrace(struct kprobe *p)
 static int __disarm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
 				  int *cnt)
 {
+	unsigned long ftrace_ip = ftrace_location((unsigned long)p->addr);
 	int ret = 0;
 
 	if (*cnt == 1) {
@@ -1038,7 +1040,7 @@ static int __disarm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
 
 	(*cnt)--;
 
-	ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
+	ret = ftrace_set_filter_ip(ops, ftrace_ip, 1, 0);
 	WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (%d)\n",
 		  p->addr, ret);
 	return ret;
-- 
2.24.1


* [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION
  2019-12-25  9:40 [PATCH v7 0/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
  2019-12-25  9:42 ` [PATCH v7 1/3] kprobes/ftrace: Use ftrace_location() when [dis]arming probes Jisheng Zhang
@ 2019-12-25  9:42 ` Jisheng Zhang
  2019-12-26  2:45   ` Masami Hiramatsu
  2020-01-08  0:05   ` Steven Rostedt
  2019-12-25  9:44 ` [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
  2 siblings, 2 replies; 15+ messages in thread
From: Jisheng Zhang @ 2019-12-25  9:42 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Steven Rostedt, Ingo Molnar,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Mark Rutland, Jonathan Corbet
  Cc: linux-kernel, linux-arm-kernel, linux-doc

On some architectures, DYNAMIC_FTRACE_WITH_REGS is implemented with GCC's
-fpatchable-function-entry option. Take arm64 for example: it uses
-fpatchable-function-entry=2 to insert two nops. When the function is
traced, the first nop is modified into the LR saver, then the second nop
into "bl <ftrace-entry>". We need to update ftrace_location() to recognise
these two instructions as being part of ftrace. To do this, we introduce
FTRACE_IP_EXTENSION to let ftrace_location() search the
[IP, IP + FTRACE_IP_EXTENSION] range.
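
For instance, with FTRACE_IP_EXTENSION defined as 4 on arm64 (one AArch64
instruction), a lookup like the following sketch (illustrative only, not
part of the patch) succeeds even though rec->ip is the second nop:

	/* addr is func+0 (the lr saver); rec->ip is func+4 (the bl) */
	unsigned long faddr = ftrace_location(addr); /* scans [addr, addr + 4] */
	/* faddr == func+4, so func+0 is recognised as a ftrace location */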

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
 include/linux/ftrace.h | 4 ++++
 kernel/trace/ftrace.c  | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index 7247d35c3d16..05a03b2a2f39 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -20,6 +20,10 @@
 
 #include <asm/ftrace.h>
 
+#ifndef FTRACE_IP_EXTENSION
+#define  FTRACE_IP_EXTENSION 0
+#endif
+
 /*
  * If the arch supports passing the variable contents of
  * function_trace_op as the third parameter back from the
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 74439ab5c2b6..a8cfea502369 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -1590,7 +1590,7 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
  */
 unsigned long ftrace_location(unsigned long ip)
 {
-	return ftrace_location_range(ip, ip);
+	return ftrace_location_range(ip, ip + FTRACE_IP_EXTENSION);
 }
 
 /**
-- 
2.24.1


* [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2019-12-25  9:40 [PATCH v7 0/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
  2019-12-25  9:42 ` [PATCH v7 1/3] kprobes/ftrace: Use ftrace_location() when [dis]arming probes Jisheng Zhang
  2019-12-25  9:42 ` [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION Jisheng Zhang
@ 2019-12-25  9:44 ` Jisheng Zhang
  2019-12-26  2:57   ` Masami Hiramatsu
  2020-02-28 15:31   ` Mark Rutland
  2 siblings, 2 replies; 15+ messages in thread
From: Jisheng Zhang @ 2019-12-25  9:44 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Steven Rostedt, Ingo Molnar,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Mark Rutland, Jonathan Corbet
  Cc: linux-kernel, linux-arm-kernel, linux-doc

KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes, as it
eliminates the need for a trap, as well as the need to emulate or
single-step instructions.

Tested on berlin arm64 platform.

~ # mount -t debugfs debugfs /sys/kernel/debug/
~ # cd /sys/kernel/debug/
/sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events

before the patch:

/sys/kernel/debug # cat kprobes/list
ffffff801009fe28  k  _do_fork+0x0    [DISABLED]

after the patch:

/sys/kernel/debug # cat kprobes/list
ffffff801009ff54  k  _do_fork+0x0    [DISABLED][FTRACE]

Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
---
 .../debug/kprobes-on-ftrace/arch-support.txt  |  2 +-
 arch/arm64/Kconfig                            |  1 +
 arch/arm64/include/asm/ftrace.h               |  1 +
 arch/arm64/kernel/probes/Makefile             |  1 +
 arch/arm64/kernel/probes/ftrace.c             | 78 +++++++++++++++++++
 5 files changed, 82 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm64/kernel/probes/ftrace.c

diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
index 4fae0464ddff..f9dd9dd91e0c 100644
--- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
+++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
@@ -9,7 +9,7 @@
     |       alpha: | TODO |
     |         arc: | TODO |
     |         arm: | TODO |
-    |       arm64: | TODO |
+    |       arm64: |  ok  |
     |         c6x: | TODO |
     |        csky: | TODO |
     |       h8300: | TODO |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1b4476ddb83..92b9882889ac 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -166,6 +166,7 @@ config ARM64
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
 	select HAVE_KPROBES
+	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_KRETPROBES
 	select HAVE_GENERIC_VDSO
 	select IOMMU_DMA if IOMMU_SUPPORT
diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
index 91fa4baa1a93..875aeb839654 100644
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -20,6 +20,7 @@
 
 /* The BL at the callsite's adjusted rec->ip */
 #define MCOUNT_INSN_SIZE	AARCH64_INSN_SIZE
+#define FTRACE_IP_EXTENSION	MCOUNT_INSN_SIZE
 
 #define FTRACE_PLT_IDX		0
 #define FTRACE_REGS_PLT_IDX	1
diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
index 8e4be92e25b1..4020cfc66564 100644
--- a/arch/arm64/kernel/probes/Makefile
+++ b/arch/arm64/kernel/probes/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
 				   simulate-insn.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o	\
 				   simulate-insn.o
+obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
new file mode 100644
index 000000000000..0643aa2dacdb
--- /dev/null
+++ b/arch/arm64/kernel/probes/ftrace.c
@@ -0,0 +1,78 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Dynamic Ftrace based Kprobes Optimization
+ *
+ * Copyright (C) Hitachi Ltd., 2012
+ * Copyright (C) 2019 Jisheng Zhang <jszhang@kernel.org>
+ *		      Synaptics Incorporated
+ */
+
+#include <linux/kprobes.h>
+
+/*
+ * In arm64 FTRACE_WITH_REGS implementation, we patch two nop instructions:
+ * the lr saver and bl ftrace-entry. Both these instructions are claimed
+ * by ftrace and we should allow probing on either instruction.
+ */
+int arch_check_ftrace_location(struct kprobe *p)
+{
+	if (ftrace_location((unsigned long)p->addr))
+		p->flags |= KPROBE_FLAG_FTRACE;
+	return 0;
+}
+
+/* Ftrace callback handler for kprobes -- called with preemption disabled */
+void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
+			   struct ftrace_ops *ops, struct pt_regs *regs)
+{
+	bool lr_saver = false;
+	struct kprobe *p;
+	struct kprobe_ctlblk *kcb;
+
+	/* Preempt is disabled by ftrace */
+	p = get_kprobe((kprobe_opcode_t *)ip);
+	if (!p) {
+		p = get_kprobe((kprobe_opcode_t *)(ip - MCOUNT_INSN_SIZE));
+		if (unlikely(!p) || kprobe_disabled(p))
+			return;
+		lr_saver = true;
+	}
+
+	kcb = get_kprobe_ctlblk();
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(p);
+	} else {
+		unsigned long orig_ip = instruction_pointer(regs);
+
+		if (lr_saver)
+			ip -= MCOUNT_INSN_SIZE;
+		instruction_pointer_set(regs, ip);
+		__this_cpu_write(current_kprobe, p);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		if (!p->pre_handler || !p->pre_handler(p, regs)) {
+			/*
+			 * Emulate singlestep (and also recover regs->pc)
+			 * as if there is a nop
+			 */
+			instruction_pointer_set(regs,
+				(unsigned long)p->addr + MCOUNT_INSN_SIZE);
+			if (unlikely(p->post_handler)) {
+				kcb->kprobe_status = KPROBE_HIT_SSDONE;
+				p->post_handler(p, regs, 0);
+			}
+			instruction_pointer_set(regs, orig_ip);
+		}
+		/*
+		 * If pre_handler returns !0, it changes regs->pc. We have to
+		 * skip emulating post_handler.
+		 */
+		__this_cpu_write(current_kprobe, NULL);
+	}
+}
+NOKPROBE_SYMBOL(kprobe_ftrace_handler);
+
+int arch_prepare_kprobe_ftrace(struct kprobe *p)
+{
+	p->ainsn.api.insn = NULL;
+	return 0;
+}
-- 
2.24.1


* Re: [PATCH v7 1/3] kprobes/ftrace: Use ftrace_location() when [dis]arming probes
  2019-12-25  9:42 ` [PATCH v7 1/3] kprobes/ftrace: Use ftrace_location() when [dis]arming probes Jisheng Zhang
@ 2019-12-25  9:46   ` Jisheng Zhang
  0 siblings, 0 replies; 15+ messages in thread
From: Jisheng Zhang @ 2019-12-25  9:46 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon, Steven Rostedt, Ingo Molnar,
	Naveen N. Rao, Anil S Keshavamurthy, David S. Miller,
	Masami Hiramatsu, Mark Rutland, Jonathan Corbet
  Cc: linux-kernel, linux-arm-kernel, linux-doc

On Wed, 25 Dec 2019 09:42:07 +0000 Jisheng Zhang wrote:
> 

oops, I missed "From: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>"

I will take care of this point when posting the new version.


> The ftrace location can include more than a single instruction on some
> architectures (powerpc64, for now). In that case, a kprobe is permitted
> on any of those instructions, and uses the ftrace infrastructure for
> functioning.
> 
> However, [dis]arm_kprobe_ftrace() uses the kprobe address when setting up
> the ftrace filter IP. This won't work if the address points to any
> instruction apart from the one that has a branch to _mcount(). To resolve
> this, have [dis]arm_kprobe_ftrace() use ftrace_location() to identify the
> filter IP.
> 
> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
> ---
>  kernel/kprobes.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/kprobes.c b/kernel/kprobes.c
> index 53534aa258a6..5c630b424e3a 100644
> --- a/kernel/kprobes.c
> +++ b/kernel/kprobes.c
> @@ -986,9 +986,10 @@ static int prepare_kprobe(struct kprobe *p)
>  static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
>                                int *cnt)
>  {
> +       unsigned long ftrace_ip = ftrace_location((unsigned long)p->addr);
>         int ret = 0;
> 
> -       ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 0, 0);
> +       ret = ftrace_set_filter_ip(ops, ftrace_ip, 0, 0);
>         if (ret) {
>                 pr_debug("Failed to arm kprobe-ftrace at %pS (%d)\n",
>                          p->addr, ret);
> @@ -1011,7 +1012,7 @@ static int __arm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
>          * At this point, since ops is not registered, we should be safe from
>          * registering empty filter.
>          */
> -       ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
> +       ftrace_set_filter_ip(ops, ftrace_ip, 1, 0);
>         return ret;
>  }
> 
> @@ -1028,6 +1029,7 @@ static int arm_kprobe_ftrace(struct kprobe *p)
>  static int __disarm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
>                                   int *cnt)
>  {
> +       unsigned long ftrace_ip = ftrace_location((unsigned long)p->addr);
>         int ret = 0;
> 
>         if (*cnt == 1) {
> @@ -1038,7 +1040,7 @@ static int __disarm_kprobe_ftrace(struct kprobe *p, struct ftrace_ops *ops,
> 
>         (*cnt)--;
> 
> -       ret = ftrace_set_filter_ip(ops, (unsigned long)p->addr, 1, 0);
> +       ret = ftrace_set_filter_ip(ops, ftrace_ip, 1, 0);
>         WARN_ONCE(ret < 0, "Failed to disarm kprobe-ftrace at %pS (%d)\n",
>                   p->addr, ret);
>         return ret;
> --
> 2.24.1
> 
> 


* Re: [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION
  2019-12-25  9:42 ` [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION Jisheng Zhang
@ 2019-12-26  2:45   ` Masami Hiramatsu
  2020-01-08  0:05   ` Steven Rostedt
  1 sibling, 0 replies; 15+ messages in thread
From: Masami Hiramatsu @ 2019-12-26  2:45 UTC (permalink / raw)
  To: Jisheng Zhang
  Cc: Mark Rutland, Jonathan Corbet, Catalin Marinas, linux-doc,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt,
	Naveen N. Rao, Will Deacon, David S. Miller, linux-arm-kernel

On Wed, 25 Dec 2019 09:42:52 +0000
Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:

> On some architectures, DYNAMIC_FTRACE_WITH_REGS is implemented with GCC's
> -fpatchable-function-entry option. Take arm64 for example: it uses
> -fpatchable-function-entry=2 to insert two nops. When the function is
> traced, the first nop is modified into the LR saver, then the second nop
> into "bl <ftrace-entry>". We need to update ftrace_location() to recognise
> these two instructions as being part of ftrace. To do this, we introduce
> FTRACE_IP_EXTENSION to let ftrace_location() search the
> [IP, IP + FTRACE_IP_EXTENSION] range.
> 

Looks good to me.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thanks!

> Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
> Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> ---
>  include/linux/ftrace.h | 4 ++++
>  kernel/trace/ftrace.c  | 2 +-
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 7247d35c3d16..05a03b2a2f39 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -20,6 +20,10 @@
>  
>  #include <asm/ftrace.h>
>  
> +#ifndef FTRACE_IP_EXTENSION
> +#define  FTRACE_IP_EXTENSION 0
> +#endif
> +
>  /*
>   * If the arch supports passing the variable contents of
>   * function_trace_op as the third parameter back from the
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 74439ab5c2b6..a8cfea502369 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1590,7 +1590,7 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
>   */
>  unsigned long ftrace_location(unsigned long ip)
>  {
> -	return ftrace_location_range(ip, ip);
> +	return ftrace_location_range(ip, ip + FTRACE_IP_EXTENSION);
>  }
>  
>  /**
> -- 
> 2.24.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2019-12-25  9:44 ` [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
@ 2019-12-26  2:57   ` Masami Hiramatsu
  2019-12-26  3:18     ` Jisheng Zhang
  2020-02-28 15:31   ` Mark Rutland
  1 sibling, 1 reply; 15+ messages in thread
From: Masami Hiramatsu @ 2019-12-26  2:57 UTC (permalink / raw)
  To: Jisheng Zhang
  Cc: Mark Rutland, Jonathan Corbet, Catalin Marinas, linux-doc,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt,
	Naveen N. Rao, Will Deacon, David S. Miller, linux-arm-kernel

Hi Jisheng,

On Wed, 25 Dec 2019 09:44:21 +0000
Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:

> KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes, as it
> eliminates the need for a trap, as well as the need to emulate or
> single-step instructions.
> 
> Tested on berlin arm64 platform.
> 
> ~ # mount -t debugfs debugfs /sys/kernel/debug/
> ~ # cd /sys/kernel/debug/
> /sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
> 
> before the patch:
> 
> /sys/kernel/debug # cat kprobes/list
> ffffff801009fe28  k  _do_fork+0x0    [DISABLED]
> 
> after the patch:
> 
> /sys/kernel/debug # cat kprobes/list
> ffffff801009ff54  k  _do_fork+0x0    [DISABLED][FTRACE]

What happens if a user puts a probe on _do_fork+4?
Does that correctly return -EILSEQ?

> 
> Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
> ---
>  .../debug/kprobes-on-ftrace/arch-support.txt  |  2 +-
>  arch/arm64/Kconfig                            |  1 +
>  arch/arm64/include/asm/ftrace.h               |  1 +
>  arch/arm64/kernel/probes/Makefile             |  1 +
>  arch/arm64/kernel/probes/ftrace.c             | 78 +++++++++++++++++++
>  5 files changed, 82 insertions(+), 1 deletion(-)
>  create mode 100644 arch/arm64/kernel/probes/ftrace.c
> 
> diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> index 4fae0464ddff..f9dd9dd91e0c 100644
> --- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> +++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> @@ -9,7 +9,7 @@
>      |       alpha: | TODO |
>      |         arc: | TODO |
>      |         arm: | TODO |
> -    |       arm64: | TODO |
> +    |       arm64: |  ok  |
>      |         c6x: | TODO |
>      |        csky: | TODO |
>      |       h8300: | TODO |
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index b1b4476ddb83..92b9882889ac 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -166,6 +166,7 @@ config ARM64
>  	select HAVE_STACKPROTECTOR
>  	select HAVE_SYSCALL_TRACEPOINTS
>  	select HAVE_KPROBES
> +	select HAVE_KPROBES_ON_FTRACE
>  	select HAVE_KRETPROBES
>  	select HAVE_GENERIC_VDSO
>  	select IOMMU_DMA if IOMMU_SUPPORT
> diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
> index 91fa4baa1a93..875aeb839654 100644
> --- a/arch/arm64/include/asm/ftrace.h
> +++ b/arch/arm64/include/asm/ftrace.h
> @@ -20,6 +20,7 @@
>  
>  /* The BL at the callsite's adjusted rec->ip */
>  #define MCOUNT_INSN_SIZE	AARCH64_INSN_SIZE
> +#define FTRACE_IP_EXTENSION	MCOUNT_INSN_SIZE
>  
>  #define FTRACE_PLT_IDX		0
>  #define FTRACE_REGS_PLT_IDX	1
> diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
> index 8e4be92e25b1..4020cfc66564 100644
> --- a/arch/arm64/kernel/probes/Makefile
> +++ b/arch/arm64/kernel/probes/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
>  				   simulate-insn.o
>  obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o	\
>  				   simulate-insn.o
> +obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
> diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
> new file mode 100644
> index 000000000000..0643aa2dacdb
> --- /dev/null
> +++ b/arch/arm64/kernel/probes/ftrace.c
> @@ -0,0 +1,78 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Dynamic Ftrace based Kprobes Optimization
> + *
> + * Copyright (C) Hitachi Ltd., 2012
> + * Copyright (C) 2019 Jisheng Zhang <jszhang@kernel.org>
> + *		      Synaptics Incorporated
> + */
> +
> +#include <linux/kprobes.h>
> +
> +/*
> + * In arm64 FTRACE_WITH_REGS implementation, we patch two nop instructions:
> + * the lr saver and bl ftrace-entry. Both these instructions are claimed
> + * by ftrace and we should allow probing on either instruction.

No, the 2nd bl ftrace-entry must not be probed.
The pair of lr-saver and bl ftrace-entry is tightly coupled. You can not
decouple it.

> + */
> +int arch_check_ftrace_location(struct kprobe *p)
> +{
> +	if (ftrace_location((unsigned long)p->addr))
> +		p->flags |= KPROBE_FLAG_FTRACE;
> +	return 0;
> +}

Thus, this must return -EILSEQ if user puts a probe on the bl.

> +
> +/* Ftrace callback handler for kprobes -- called with preemption disabled */
> +void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> +			   struct ftrace_ops *ops, struct pt_regs *regs)
> +{
> +	bool lr_saver = false;
> +	struct kprobe *p;
> +	struct kprobe_ctlblk *kcb;
> +
> +	/* Preempt is disabled by ftrace */
> +	p = get_kprobe((kprobe_opcode_t *)ip);
> +	if (!p) {
> +		p = get_kprobe((kprobe_opcode_t *)(ip - MCOUNT_INSN_SIZE));
> +		if (unlikely(!p) || kprobe_disabled(p))
> +			return;
> +		lr_saver = true;

Then, this can be removed.

> +	}
> +
> +	kcb = get_kprobe_ctlblk();
> +	if (kprobe_running()) {
> +		kprobes_inc_nmissed_count(p);
> +	} else {
> +		unsigned long orig_ip = instruction_pointer(regs);
> +
> +		if (lr_saver)
> +			ip -= MCOUNT_INSN_SIZE;

Ditto.

Thank you,

> +		instruction_pointer_set(regs, ip);
> +		__this_cpu_write(current_kprobe, p);
> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +		if (!p->pre_handler || !p->pre_handler(p, regs)) {
> +			/*
> +			 * Emulate singlestep (and also recover regs->pc)
> +			 * as if there is a nop
> +			 */
> +			instruction_pointer_set(regs,
> +				(unsigned long)p->addr + MCOUNT_INSN_SIZE);
> +			if (unlikely(p->post_handler)) {
> +				kcb->kprobe_status = KPROBE_HIT_SSDONE;
> +				p->post_handler(p, regs, 0);
> +			}
> +			instruction_pointer_set(regs, orig_ip);
> +		}
> +		/*
> +		 * If pre_handler returns !0, it changes regs->pc. We have to
> +		 * skip emulating post_handler.
> +		 */
> +		__this_cpu_write(current_kprobe, NULL);
> +	}
> +}
> +NOKPROBE_SYMBOL(kprobe_ftrace_handler);
> +
> +int arch_prepare_kprobe_ftrace(struct kprobe *p)
> +{
> +	p->ainsn.api.insn = NULL;
> +	return 0;
> +}
> -- 
> 2.24.1
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2019-12-26  2:57   ` Masami Hiramatsu
@ 2019-12-26  3:18     ` Jisheng Zhang
  2019-12-26  4:25       ` Jisheng Zhang
  0 siblings, 1 reply; 15+ messages in thread
From: Jisheng Zhang @ 2019-12-26  3:18 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Mark Rutland, Jonathan Corbet, Catalin Marinas, linux-doc,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt,
	Naveen N. Rao, Will Deacon, David S. Miller, linux-arm-kernel

Hi

On Thu, 26 Dec 2019 11:57:07 +0900 Masami Hiramatsu wrote:

> 
> Hi Jisheng,
> 
> On Wed, 25 Dec 2019 09:44:21 +0000
> Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:
> 
> > KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes, as it
> > eliminates the need for a trap, as well as the need to emulate or
> > single-step instructions.
> >
> > Tested on berlin arm64 platform.
> >
> > ~ # mount -t debugfs debugfs /sys/kernel/debug/
> > ~ # cd /sys/kernel/debug/
> > /sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
> >
> > before the patch:
> >
> > /sys/kernel/debug # cat kprobes/list
> > ffffff801009fe28  k  _do_fork+0x0    [DISABLED]
> >
> > after the patch:
> >
> > /sys/kernel/debug # cat kprobes/list
> > ffffff801009ff54  k  _do_fork+0x0    [DISABLED][FTRACE]  
> 
> What happens if a user puts a probe on _do_fork+4?
> Does that correctly return -EILSEQ?

_do_fork+4 can be probed successfully.

> 
> >
> > Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
> > ---
> >  .../debug/kprobes-on-ftrace/arch-support.txt  |  2 +-
> >  arch/arm64/Kconfig                            |  1 +
> >  arch/arm64/include/asm/ftrace.h               |  1 +
> >  arch/arm64/kernel/probes/Makefile             |  1 +
> >  arch/arm64/kernel/probes/ftrace.c             | 78 +++++++++++++++++++
> >  5 files changed, 82 insertions(+), 1 deletion(-)
> >  create mode 100644 arch/arm64/kernel/probes/ftrace.c
> >
> > diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > index 4fae0464ddff..f9dd9dd91e0c 100644
> > --- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > +++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > @@ -9,7 +9,7 @@
> >      |       alpha: | TODO |
> >      |         arc: | TODO |
> >      |         arm: | TODO |
> > -    |       arm64: | TODO |
> > +    |       arm64: |  ok  |
> >      |         c6x: | TODO |
> >      |        csky: | TODO |
> >      |       h8300: | TODO |
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index b1b4476ddb83..92b9882889ac 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -166,6 +166,7 @@ config ARM64
> >       select HAVE_STACKPROTECTOR
> >       select HAVE_SYSCALL_TRACEPOINTS
> >       select HAVE_KPROBES
> > +     select HAVE_KPROBES_ON_FTRACE
> >       select HAVE_KRETPROBES
> >       select HAVE_GENERIC_VDSO
> >       select IOMMU_DMA if IOMMU_SUPPORT
> > diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
> > index 91fa4baa1a93..875aeb839654 100644
> > --- a/arch/arm64/include/asm/ftrace.h
> > +++ b/arch/arm64/include/asm/ftrace.h
> > @@ -20,6 +20,7 @@
> >
> >  /* The BL at the callsite's adjusted rec->ip */
> >  #define MCOUNT_INSN_SIZE     AARCH64_INSN_SIZE
> > +#define FTRACE_IP_EXTENSION  MCOUNT_INSN_SIZE
> >
> >  #define FTRACE_PLT_IDX               0
> >  #define FTRACE_REGS_PLT_IDX  1
> > diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
> > index 8e4be92e25b1..4020cfc66564 100644
> > --- a/arch/arm64/kernel/probes/Makefile
> > +++ b/arch/arm64/kernel/probes/Makefile
> > @@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES)         += kprobes.o decode-insn.o      \
> >                                  simulate-insn.o
> >  obj-$(CONFIG_UPROBES)                += uprobes.o decode-insn.o      \
> >                                  simulate-insn.o
> > +obj-$(CONFIG_KPROBES_ON_FTRACE)      += ftrace.o
> > diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
> > new file mode 100644
> > index 000000000000..0643aa2dacdb
> > --- /dev/null
> > +++ b/arch/arm64/kernel/probes/ftrace.c
> > @@ -0,0 +1,78 @@
> > +// SPDX-License-Identifier: GPL-2.0-or-later
> > +/*
> > + * Dynamic Ftrace based Kprobes Optimization
> > + *
> > + * Copyright (C) Hitachi Ltd., 2012
> > + * Copyright (C) 2019 Jisheng Zhang <jszhang@kernel.org>
> > + *                 Synaptics Incorporated
> > + */
> > +
> > +#include <linux/kprobes.h>
> > +
> > +/*
> > + * In arm64 FTRACE_WITH_REGS implementation, we patch two nop instructions:
> > + * the lr saver and bl ftrace-entry. Both these instructions are claimed
> > + * by ftrace and we should allow probing on either instruction.  
> 
> No, the 2nd bl ftrace-entry must not be probed.
> The pair of lr-saver and bl ftrace-entry is tightly coupled. You can not
> decouple it.

This is the key; different views of this result in different implementations.
I'm just wondering why the two instructions are considered coupled. I think
we've met a situation here similar to powerpc's: https://lkml.org/lkml/2019/6/18/646
The "mflr r0" equals the lr-saver here, and the branch to _mcount equals the
bl ftrace-entry. Could you please kindly comment more?

Thanks in advance



> 
> > + */
> > +int arch_check_ftrace_location(struct kprobe *p)
> > +{
> > +     if (ftrace_location((unsigned long)p->addr))
> > +             p->flags |= KPROBE_FLAG_FTRACE;
> > +     return 0;
> > +}  
> 
> Thus, this must return -EILSEQ if user puts a probe on the bl.
> 
> > +
> > +/* Ftrace callback handler for kprobes -- called with preemption disabled */
> > +void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> > +                        struct ftrace_ops *ops, struct pt_regs *regs)
> > +{
> > +     bool lr_saver = false;
> > +     struct kprobe *p;
> > +     struct kprobe_ctlblk *kcb;
> > +
> > +     /* Preempt is disabled by ftrace */
> > +     p = get_kprobe((kprobe_opcode_t *)ip);
> > +     if (!p) {
> > +             p = get_kprobe((kprobe_opcode_t *)(ip - MCOUNT_INSN_SIZE));
> > +             if (unlikely(!p) || kprobe_disabled(p))
> > +                     return;
> > +             lr_saver = true;  
> 
> Then, this can be removed.
> 
> > +     }
> > +
> > +     kcb = get_kprobe_ctlblk();
> > +     if (kprobe_running()) {
> > +             kprobes_inc_nmissed_count(p);
> > +     } else {
> > +             unsigned long orig_ip = instruction_pointer(regs);
> > +
> > +             if (lr_saver)
> > +                     ip -= MCOUNT_INSN_SIZE;  
> 
> Ditto.
> 
> Thank you,
> 
> > +             instruction_pointer_set(regs, ip);
> > +             __this_cpu_write(current_kprobe, p);
> > +             kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > +             if (!p->pre_handler || !p->pre_handler(p, regs)) {
> > +                     /*
> > +                      * Emulate singlestep (and also recover regs->pc)
> > +                      * as if there is a nop
> > +                      */
> > +                     instruction_pointer_set(regs,
> > +                             (unsigned long)p->addr + MCOUNT_INSN_SIZE);
> > +                     if (unlikely(p->post_handler)) {
> > +                             kcb->kprobe_status = KPROBE_HIT_SSDONE;
> > +                             p->post_handler(p, regs, 0);
> > +                     }
> > +                     instruction_pointer_set(regs, orig_ip);
> > +             }
> > +             /*
> > +              * If pre_handler returns !0, it changes regs->pc. We have to
> > +              * skip emulating post_handler.
> > +              */
> > +             __this_cpu_write(current_kprobe, NULL);
> > +     }
> > +}
> > +NOKPROBE_SYMBOL(kprobe_ftrace_handler);
> > +
> > +int arch_prepare_kprobe_ftrace(struct kprobe *p)
> > +{
> > +     p->ainsn.api.insn = NULL;
> > +     return 0;
> > +}
> > --
> > 2.24.1
> >  
> 
> 
> --
> Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2019-12-26  3:18     ` Jisheng Zhang
@ 2019-12-26  4:25       ` Jisheng Zhang
  2019-12-26  9:26         ` Masami Hiramatsu
  0 siblings, 1 reply; 15+ messages in thread
From: Jisheng Zhang @ 2019-12-26  4:25 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Mark Rutland, linux-doc, Catalin Marinas, Jonathan Corbet,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt,
	Naveen N. Rao, Will Deacon, David S. Miller, linux-arm-kernel

On Thu, 26 Dec 2019 03:18:07 +0000 Jisheng Zhang wrote:


> 
> 
> Hi
> 
> On Thu, 26 Dec 2019 11:57:07 +0900 Masami Hiramatsu wrote:
> 
> >
> > Hi Jisheng,
> >
> > On Wed, 25 Dec 2019 09:44:21 +0000
> > Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:
> >  
> > > KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes, as it
> > > eliminates the need for a trap, as well as the need to emulate or
> > > single-step instructions.
> > >
> > > Tested on berlin arm64 platform.
> > >
> > > ~ # mount -t debugfs debugfs /sys/kernel/debug/
> > > ~ # cd /sys/kernel/debug/
> > > /sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
> > >
> > > before the patch:
> > >
> > > /sys/kernel/debug # cat kprobes/list
> > > ffffff801009fe28  k  _do_fork+0x0    [DISABLED]
> > >
> > > after the patch:
> > >
> > > /sys/kernel/debug # cat kprobes/list
> > > ffffff801009ff54  k  _do_fork+0x0    [DISABLED][FTRACE]  
> >
> > What happens if a user puts a probe on _do_fork+4?
> > Does that correctly return -EILSEQ?
> 
> _do_fork+4 can be probed successfully.
> 
> >  
> > >
> > > Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
> > > ---
> > >  .../debug/kprobes-on-ftrace/arch-support.txt  |  2 +-
> > >  arch/arm64/Kconfig                            |  1 +
> > >  arch/arm64/include/asm/ftrace.h               |  1 +
> > >  arch/arm64/kernel/probes/Makefile             |  1 +
> > >  arch/arm64/kernel/probes/ftrace.c             | 78 +++++++++++++++++++
> > >  5 files changed, 82 insertions(+), 1 deletion(-)
> > >  create mode 100644 arch/arm64/kernel/probes/ftrace.c
> > >
> > > diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > > index 4fae0464ddff..f9dd9dd91e0c 100644
> > > --- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > > +++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> > > @@ -9,7 +9,7 @@
> > >      |       alpha: | TODO |
> > >      |         arc: | TODO |
> > >      |         arm: | TODO |
> > > -    |       arm64: | TODO |
> > > +    |       arm64: |  ok  |
> > >      |         c6x: | TODO |
> > >      |        csky: | TODO |
> > >      |       h8300: | TODO |
> > > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > > index b1b4476ddb83..92b9882889ac 100644
> > > --- a/arch/arm64/Kconfig
> > > +++ b/arch/arm64/Kconfig
> > > @@ -166,6 +166,7 @@ config ARM64
> > >       select HAVE_STACKPROTECTOR
> > >       select HAVE_SYSCALL_TRACEPOINTS
> > >       select HAVE_KPROBES
> > > +     select HAVE_KPROBES_ON_FTRACE
> > >       select HAVE_KRETPROBES
> > >       select HAVE_GENERIC_VDSO
> > >       select IOMMU_DMA if IOMMU_SUPPORT
> > > diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
> > > index 91fa4baa1a93..875aeb839654 100644
> > > --- a/arch/arm64/include/asm/ftrace.h
> > > +++ b/arch/arm64/include/asm/ftrace.h
> > > @@ -20,6 +20,7 @@
> > >
> > >  /* The BL at the callsite's adjusted rec->ip */
> > >  #define MCOUNT_INSN_SIZE     AARCH64_INSN_SIZE
> > > +#define FTRACE_IP_EXTENSION  MCOUNT_INSN_SIZE
> > >
> > >  #define FTRACE_PLT_IDX               0
> > >  #define FTRACE_REGS_PLT_IDX  1
> > > diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
> > > index 8e4be92e25b1..4020cfc66564 100644
> > > --- a/arch/arm64/kernel/probes/Makefile
> > > +++ b/arch/arm64/kernel/probes/Makefile
> > > @@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES)         += kprobes.o decode-insn.o      \
> > >                                  simulate-insn.o
> > >  obj-$(CONFIG_UPROBES)                += uprobes.o decode-insn.o      \
> > >                                  simulate-insn.o
> > > +obj-$(CONFIG_KPROBES_ON_FTRACE)      += ftrace.o
> > > diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
> > > new file mode 100644
> > > index 000000000000..0643aa2dacdb
> > > --- /dev/null
> > > +++ b/arch/arm64/kernel/probes/ftrace.c
> > > @@ -0,0 +1,78 @@
> > > +// SPDX-License-Identifier: GPL-2.0-or-later
> > > +/*
> > > + * Dynamic Ftrace based Kprobes Optimization
> > > + *
> > > + * Copyright (C) Hitachi Ltd., 2012
> > > + * Copyright (C) 2019 Jisheng Zhang <jszhang@kernel.org>
> > > + *                 Synaptics Incorporated
> > > + */
> > > +
> > > +#include <linux/kprobes.h>
> > > +
> > > +/*
> > > + * In arm64 FTRACE_WITH_REGS implementation, we patch two nop instructions:
> > > + * the lr saver and bl ftrace-entry. Both these instructions are claimed
> > > + * by ftrace and we should allow probing on either instruction.  
> >
> > No, the 2nd bl ftrace-entry must not be probed.
> > The pair of lr-saver and bl ftrace-entry is tightly coupled. You can not
> > decouple it.  
> 
> This is the key; different views of this result in different implementations.
> I'm just wondering why the two instructions are considered coupled. I think
> we've met a situation here similar to powerpc's: https://lkml.org/lkml/2019/6/18/646
> The "mflr r0" equals the lr-saver here, and the branch to _mcount equals the
> bl ftrace-entry. Could you please kindly comment more?
> 
> Thanks in advance
> 

hmm, I think I may get some part of your opinion. In the v7 implementation:

if we probe on func+4, that's the bl ftrace-entry, similar to the mcount
call on other architectures, so we allow this probe as normal.

if we probe on func+0, the first param ip in kprobe_ftrace_handler() points
to func+4 (this is adjusted by ftrace) and regs->ip points to func+8, so in
kprobe_ftrace_handler() we modify regs->ip to func+0 to call the kprobe
pre handler, then modify regs->ip to func+8 to call the kprobe post handler.
As can be seen, the first two instructions are treated as a virtual
mcount call. From this point of view, the lr saver and the bl <ftrace-entry>
are coupled.
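
To make the addresses concrete, the layout assumed above is roughly (a
sketch of a patchable-function-entry=2 function, not taken from a real
build):

  func+0:  lr saver ("mov x9, x30"; a nop until traced)
  func+4:  "bl <ftrace-entry>" (a nop until traced)  <- ip seen by the handler
  func+8:  first real instruction                    <- regs->ip on handler entry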

If we split patch3 into two:
one to support kprobes on func+4,
the second to support kprobes on func+0,
it would be much clearer.

Then the key here is whether we could allow kprobes on both func+0 and func+4.

Thanks

* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2019-12-26  4:25       ` Jisheng Zhang
@ 2019-12-26  9:26         ` Masami Hiramatsu
  2020-07-21 13:24           ` Masami Hiramatsu
  0 siblings, 1 reply; 15+ messages in thread
From: Masami Hiramatsu @ 2019-12-26  9:26 UTC (permalink / raw)
  To: Jisheng Zhang
  Cc: Mark Rutland, linux-doc, Catalin Marinas, Jonathan Corbet,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt,
	Naveen N. Rao, Will Deacon, David S. Miller, linux-arm-kernel

On Thu, 26 Dec 2019 04:25:24 +0000
Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:

> > > > +/*
> > > > + * In arm64 FTRACE_WITH_REGS implementation, we patch two nop instructions:
> > > > + * the lr saver and bl ftrace-entry. Both these instructions are claimed
> > > > + * by ftrace and we should allow probing on either instruction.  
> > >
> > > No, the 2nd bl ftrace-entry must not be probed.
> > > The pair of lr-saver and bl ftrace-entry is tightly coupled. You can not
> > > decouple it.  
> > 
> > This is the key; different views of this result in different implementations.
> > I'm just wondering why the two instructions are considered coupled. I think
> > we've met a situation here similar to powerpc's: https://lkml.org/lkml/2019/6/18/646
> > The "mflr r0" equals the lr-saver here, and the branch to _mcount equals the
> > bl ftrace-entry. Could you please kindly comment more?
> > 
> > Thanks in advance
> > 
> 
> hmm, I think I may get some part of your opinion. In the v7 implementation:
> 
> if we probe on func+4, that's the bl ftrace-entry, similar to the mcount
> call on other architectures, so we allow this probe as normal.
> 
> if we probe on func+0, the first param ip in kprobe_ftrace_handler() points
> to func+4 (this is adjusted by ftrace) and regs->ip points to func+8, so in
> kprobe_ftrace_handler() we modify regs->ip to func+0 to call the kprobe
> pre handler, then modify regs->ip to func+8 to call the kprobe post handler.
> As can be seen, the first two instructions are treated as a virtual
> mcount call. From this point of view, the lr saver and the bl <ftrace-entry>
> are coupled.

Yes, this is good. But probing on func+4 is meaningless. Both func+0 and
func+4 call the handler with the same pt_regs. And it should have a stack
pointer which is NOT modified by the lr-saver, and regs->lr must point to
the original call address. (The ftrace regs caller must do this fixup to
support live patching correctly.)

And in this case, func+4 has fake pt_regs, because it skips the lr-saver's
effects.

And even if you fixed up the pt_regs, there is another problem: what the
user expects at the target instructions.

As you know, dynamic ftrace fills the instructions with NOPs (2 NOPs on
arm64); in this case the pt_regs may be the same, except for pc, at func+0
and func+4. But if ftrace is already enabled on the function, the user
will see the lr-saver and the bl there, oops. In that case we have to
change the pt_regs between func+0 and func+4. So it depends on the
current mode.

However, IMHO, it is not worth paying such a simulation cost. No one wants
to probe such a simulated intermediate address. It is easy to expect the
result from the code. Moreover, func+4 will not appear in the debuginfo,
because those 2 special instructions are just appended by the compiler,
not generated by the code.

So I don't think we need to support func+4. We only need func+0 or func+8
(which must be the same as func+0 except for regs->pc anyway).

Thank you,

> 
> If we split patch3 into two:
> one to support kprobes on func+4,
> the second to support kprobes on func+0,
> it would be much clearer.
> 
> Then the key here is whether we could allow kprobes on both func+0 and func+4.
> 
> Thanks


-- 
Masami Hiramatsu <mhiramat@kernel.org>

* Re: [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION
  2019-12-25  9:42 ` [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION Jisheng Zhang
  2019-12-26  2:45   ` Masami Hiramatsu
@ 2020-01-08  0:05   ` Steven Rostedt
  1 sibling, 0 replies; 15+ messages in thread
From: Steven Rostedt @ 2020-01-08  0:05 UTC (permalink / raw)
  To: Jisheng Zhang
  Cc: Mark Rutland, Jonathan Corbet, Catalin Marinas, linux-doc,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar,
	Masami Hiramatsu, Naveen N. Rao, Will Deacon, David S. Miller,
	linux-arm-kernel

On Wed, 25 Dec 2019 09:42:52 +0000
Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:

> On some architectures, DYNAMIC_FTRACE_WITH_REGS is implemented with GCC's
> -fpatchable-function-entry option. Take arm64 for example: it uses
> -fpatchable-function-entry=2 to insert two nops. When the function is
> traced, the first nop is modified into the LR saver, then the second nop
> into "bl <ftrace-entry>". We need to update ftrace_location() to recognise
> these two instructions as being part of ftrace. To do this, we introduce
> FTRACE_IP_EXTENSION to let ftrace_location() search the
> [IP, IP + FTRACE_IP_EXTENSION] range.
> 
> Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
> Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

You can also add:

Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>

and when Masami is happy with your patches, it should go through the
tip tree.

Thanks!

-- Steve

> ---
>  include/linux/ftrace.h | 4 ++++
>  kernel/trace/ftrace.c  | 2 +-
>  2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> index 7247d35c3d16..05a03b2a2f39 100644
> --- a/include/linux/ftrace.h
> +++ b/include/linux/ftrace.h
> @@ -20,6 +20,10 @@
>  
>  #include <asm/ftrace.h>
>  
> +#ifndef FTRACE_IP_EXTENSION
> +#define  FTRACE_IP_EXTENSION 0
> +#endif
> +
>  /*
>   * If the arch supports passing the variable contents of
>   * function_trace_op as the third parameter back from the
> diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
> index 74439ab5c2b6..a8cfea502369 100644
> --- a/kernel/trace/ftrace.c
> +++ b/kernel/trace/ftrace.c
> @@ -1590,7 +1590,7 @@ unsigned long ftrace_location_range(unsigned long start, unsigned long end)
>   */
>  unsigned long ftrace_location(unsigned long ip)
>  {
> -	return ftrace_location_range(ip, ip);
> +	return ftrace_location_range(ip, ip + FTRACE_IP_EXTENSION);
>  }
>  
>  /**


* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2019-12-25  9:44 ` [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
  2019-12-26  2:57   ` Masami Hiramatsu
@ 2020-02-28 15:31   ` Mark Rutland
  1 sibling, 0 replies; 15+ messages in thread
From: Mark Rutland @ 2020-02-28 15:31 UTC (permalink / raw)
  To: Jisheng Zhang
  Cc: Jonathan Corbet, Catalin Marinas, linux-doc, linux-kernel,
	Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt, Naveen N. Rao,
	Will Deacon, David S. Miller, linux-arm-kernel, Masami Hiramatsu

Hi,

This has been on my list to review for a while. Given Masami's comments,
I was waiting for a new version -- is there any plan to respin this?

Otherwise, I have some comments below.

On Wed, Dec 25, 2019 at 09:44:21AM +0000, Jisheng Zhang wrote:
> KPROBES_ON_FTRACE avoids much of the overhead of regular kprobes, as it
> eliminates the need for a trap, as well as the need to emulate or
> single-step instructions.

Where does this overhead matter?

> Tested on berlin arm64 platform.
> 
> ~ # mount -t debugfs debugfs /sys/kernel/debug/
> ~ # cd /sys/kernel/debug/
> /sys/kernel/debug # echo 'p _do_fork' > tracing/kprobe_events
> 
> before the patch:
> 
> /sys/kernel/debug # cat kprobes/list
> ffffff801009fe28  k  _do_fork+0x0    [DISABLED]
> 
> after the patch:
> 
> /sys/kernel/debug # cat kprobes/list
> ffffff801009ff54  k  _do_fork+0x0    [DISABLED][FTRACE]

Just to check, how is the kprobe address expected to relate to the
function address? For any of {mcount, mfentry, patchable-function-entry}
there are some number of instructions prior to the call instruction.
Does the user have to provide that address?

How does this work on other architectures?

> 
> Signed-off-by: Jisheng Zhang <Jisheng.Zhang@synaptics.com>
> ---
>  .../debug/kprobes-on-ftrace/arch-support.txt  |  2 +-
>  arch/arm64/Kconfig                            |  1 +
>  arch/arm64/include/asm/ftrace.h               |  1 +
>  arch/arm64/kernel/probes/Makefile             |  1 +
>  arch/arm64/kernel/probes/ftrace.c             | 78 +++++++++++++++++++
>  5 files changed, 82 insertions(+), 1 deletion(-)
>  create mode 100644 arch/arm64/kernel/probes/ftrace.c
> 
> diff --git a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> index 4fae0464ddff..f9dd9dd91e0c 100644
> --- a/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> +++ b/Documentation/features/debug/kprobes-on-ftrace/arch-support.txt
> @@ -9,7 +9,7 @@
>      |       alpha: | TODO |
>      |         arc: | TODO |
>      |         arm: | TODO |
> -    |       arm64: | TODO |
> +    |       arm64: |  ok  |
>      |         c6x: | TODO |
>      |        csky: | TODO |
>      |       h8300: | TODO |
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index b1b4476ddb83..92b9882889ac 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -166,6 +166,7 @@ config ARM64
>  	select HAVE_STACKPROTECTOR
>  	select HAVE_SYSCALL_TRACEPOINTS
>  	select HAVE_KPROBES
> +	select HAVE_KPROBES_ON_FTRACE

The rest of the code seems to presume FTRACE_WITH_REGS, but you haven't
made that dependency explicit here.

>  	select HAVE_KRETPROBES
>  	select HAVE_GENERIC_VDSO
>  	select IOMMU_DMA if IOMMU_SUPPORT
> diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
> index 91fa4baa1a93..875aeb839654 100644
> --- a/arch/arm64/include/asm/ftrace.h
> +++ b/arch/arm64/include/asm/ftrace.h
> @@ -20,6 +20,7 @@
>  
>  /* The BL at the callsite's adjusted rec->ip */
>  #define MCOUNT_INSN_SIZE	AARCH64_INSN_SIZE
> +#define FTRACE_IP_EXTENSION	MCOUNT_INSN_SIZE

I'm confused by what exactly this is meant to represent. At runtime our
rec->ip is always the BL, so what exactly is this attempting to account
for?

How does this work when using mcount rather than
patchable-function-entry?

>  
>  #define FTRACE_PLT_IDX		0
>  #define FTRACE_REGS_PLT_IDX	1
> diff --git a/arch/arm64/kernel/probes/Makefile b/arch/arm64/kernel/probes/Makefile
> index 8e4be92e25b1..4020cfc66564 100644
> --- a/arch/arm64/kernel/probes/Makefile
> +++ b/arch/arm64/kernel/probes/Makefile
> @@ -4,3 +4,4 @@ obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o	\
>  				   simulate-insn.o
>  obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o	\
>  				   simulate-insn.o
> +obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
> diff --git a/arch/arm64/kernel/probes/ftrace.c b/arch/arm64/kernel/probes/ftrace.c
> new file mode 100644
> index 000000000000..0643aa2dacdb
> --- /dev/null
> +++ b/arch/arm64/kernel/probes/ftrace.c
> @@ -0,0 +1,78 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Dynamic Ftrace based Kprobes Optimization
> + *
> + * Copyright (C) Hitachi Ltd., 2012
> + * Copyright (C) 2019 Jisheng Zhang <jszhang@kernel.org>
> + *		      Synaptics Incorporated
> + */
> +
> +#include <linux/kprobes.h>
> +
> +/*
> + * In arm64 FTRACE_WITH_REGS implementation, we patch two nop instructions:
> + * the lr saver and bl ftrace-entry. Both these instructions are claimed
> + * by ftrace and we should allow probing on either instruction.
> + */
> +int arch_check_ftrace_location(struct kprobe *p)
> +{
> +	if (ftrace_location((unsigned long)p->addr))
> +		p->flags |= KPROBE_FLAG_FTRACE;
> +	return 0;
> +}

What about when not using patchable-function-entry?

Why do we need to allow probing both?

> +
> +/* Ftrace callback handler for kprobes -- called with preemption disabled */
> +void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> +			   struct ftrace_ops *ops, struct pt_regs *regs)
> +{
> +	bool lr_saver = false;
> +	struct kprobe *p;
> +	struct kprobe_ctlblk *kcb;
> +
> +	/* Preempt is disabled by ftrace */
> +	p = get_kprobe((kprobe_opcode_t *)ip);
> +	if (!p) {
> +		p = get_kprobe((kprobe_opcode_t *)(ip - MCOUNT_INSN_SIZE));
> +		if (unlikely(!p) || kprobe_disabled(p))
> +			return;
> +		lr_saver = true;
> +	}

This complexity worries me. Is it really necessary to allow kprobing on
either instruction?

> +
> +	kcb = get_kprobe_ctlblk();
> +	if (kprobe_running()) {
> +		kprobes_inc_nmissed_count(p);
> +	} else {
> +		unsigned long orig_ip = instruction_pointer(regs);
> +
> +		if (lr_saver)
> +			ip -= MCOUNT_INSN_SIZE;
> +		instruction_pointer_set(regs, ip);
> +		__this_cpu_write(current_kprobe, p);
> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +		if (!p->pre_handler || !p->pre_handler(p, regs)) {
> +			/*
> +			 * Emulate singlestep (and also recover regs->pc)
> +			 * as if there is a nop
> +			 */
> +			instruction_pointer_set(regs,
> +				(unsigned long)p->addr + MCOUNT_INSN_SIZE);
> +			if (unlikely(p->post_handler)) {
> +				kcb->kprobe_status = KPROBE_HIT_SSDONE;
> +				p->post_handler(p, regs, 0);
> +			}
> +			instruction_pointer_set(regs, orig_ip);

If you're going to mess with the PC then you also need to adjust the
hardware single-step state machine.
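
Roughly, the below (just a sketch using the existing debug-monitors
helpers; whether they fit here is untested) shows the kind of re-arming
I mean for the rewritten PC:

	if (kernel_active_single_step())
		kernel_enable_single_step(regs);	/* re-arm step for new PC */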

Thanks,
Mark.

> +		}
> +		/*
> +		 * If pre_handler returns !0, it changes regs->pc. We have to
> +		 * skip emulating post_handler.
> +		 */
> +		__this_cpu_write(current_kprobe, NULL);
> +	}
> +}
> +NOKPROBE_SYMBOL(kprobe_ftrace_handler);
> +
> +int arch_prepare_kprobe_ftrace(struct kprobe *p)
> +{
> +	p->ainsn.api.insn = NULL;
> +	return 0;
> +}
> -- 
> 2.24.1
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2019-12-26  9:26         ` Masami Hiramatsu
@ 2020-07-21 13:24           ` Masami Hiramatsu
  2020-07-24  7:06             ` Jisheng Zhang
  0 siblings, 1 reply; 15+ messages in thread
From: Masami Hiramatsu @ 2020-07-21 13:24 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Mark Rutland, linux-doc, Catalin Marinas, Jonathan Corbet,
	linux-kernel, Anil S Keshavamurthy, Jisheng Zhang, Ingo Molnar,
	Steven Rostedt, Naveen N. Rao, Will Deacon, David S. Miller,
	linux-arm-kernel

Hi Jisheng,

Are you still working on this series?

If you still want to put a probe on func+4, it is OK if you can
completely emulate the 1st instruction (save lr on the stack and
change regs->sp).
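
For example, a rough sketch (note: the lr saver emitted for arm64
DYNAMIC_FTRACE_WITH_REGS is "mov x9, x30", so on arm64 the emulation
could be this simple; the helper name is made up):

	/* make pt_regs look as if the 1st patched instruction had run */
	static void emulate_lr_saver(struct pt_regs *regs)
	{
		regs->regs[9] = regs->regs[30];	/* mov x9, x30 */
	}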

Thank you,

On Thu, 26 Dec 2019 18:26:07 +0900
Masami Hiramatsu <mhiramat@kernel.org> wrote:

> On Thu, 26 Dec 2019 04:25:24 +0000
> Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:
> 
> > > > > +/*
> > > > > + * In the arm64 FTRACE_WITH_REGS implementation, we patch two nop
> > > > > + * instructions: the lr saver and the bl ftrace-entry. Both of these
> > > > > + * instructions are claimed by ftrace, and we should allow probing on
> > > > > + * either instruction.
> > > >
> > > > No, the 2nd bl ftrace-entry must not be probed.
> > > > The pair of lr-saver and bl ftrace-entry is tightly coupled. You cannot
> > > > decouple it.
> > > 
> > > This is the key. Different views of this lead to different implementations.
> > > I'm just wondering why the two instructions are considered coupled. I think
> > > we face a similar situation here as powerpc did: https://lkml.org/lkml/2019/6/18/646
> > > The "mflr r0" corresponds to the lr saver here, and the branch to _mcount
> > > to the bl ftrace-entry. Could you please kindly comment more?
> > > 
> > > Thanks in advance
> > > 
> > 
> > hmm, I think I now see part of your point. In the v7 implementation:
> > 
> > if we probe func+4, that's the bl ftrace-entry, similar to the mcount
> > call on other architectures, so we allow this probe as normal.
> > 
> > if we probe func+0, the first param ip in kprobe_ftrace_handler() points
> > to func+4 (this is adjusted by ftrace) and regs->ip points to func+8, so in
> > kprobe_ftrace_handler() we modify regs->ip to func+0 to call the kprobe
> > pre handler, then modify regs->ip to func+8 to call the kprobe post handler.
> > As can be seen, the first two instructions are treated as a virtual
> > mcount call. From this point of view, the lr saver and the bl <ftrace-entry>
> > are coupled.
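> > 
> > Rough sketch of that address flow, for illustration:
> > 
> >   probe at func+0: ftrace passes ip = func+4, regs->pc = func+8;
> >                    pre_handler() sees pc = func+0,
> >                    post_handler() sees pc = func+8
> >   probe at func+4: allowed as a normal probe on the bl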
> 
> Yes, this is good. But probing on func+4 is meaningless. Both func+0 and
> func+4 call a handler with the same pt_regs, which should have a stack
> pointer NOT modified by the lr saver, and regs->lr must point to the
> original call address. (The ftrace regs caller must do this fixup to
> support live patching correctly.)
> 
> And in this case, func+4 has fake pt_regs because it skips the lr saver's
> effects.
> 
> And even if you fixed up the pt_regs, there is another problem: what the
> user expects of the target instructions.
> 
> As you know, dynamic ftrace fills the site with NOPs (2 NOPs on arm64);
> in that case the pt_regs at func+0 and func+4 may be the same except for
> pc. But if ftrace is already enabled on the function, the user will see
> the lr saver and the bl there, oops. In that case we have to change
> pt_regs between func+0 and func+4. So it depends on the current mode.
> 
> However, IMHO, it is not worth paying such a simulation cost. No one wants
> to probe such a simulated intermediate address, and the result is easy to
> predict from the code. Moreover, func+4 will not appear in the debuginfo,
> because those 2 special instructions are appended by the compiler, not
> generated from the source code.
> 
> So I don't think we need to support func+4. We only need func+0 or func+8
> (which must be the same as func+0 except for regs->pc anyway).
> 
> Thank you,
> 
> > 
> > If we split patch3 into two:
> > one to support kprobes on func+4,
> > the second to support kprobes on func+0,
> > it would be much clearer.
> > 
> > Then the key question is whether we can allow kprobes on both func+0 and func+4.
> > 
> > Thanks
> 
> 
> -- 
> Masami Hiramatsu <mhiramat@kernel.org>


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2020-07-21 13:24           ` Masami Hiramatsu
@ 2020-07-24  7:06             ` Jisheng Zhang
  2020-07-24 16:54               ` Masami Hiramatsu
  0 siblings, 1 reply; 15+ messages in thread
From: Jisheng Zhang @ 2020-07-24  7:06 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Mark Rutland, linux-doc, Catalin Marinas, Jonathan Corbet,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt,
	Naveen N. Rao, Will Deacon, David S. Miller, linux-arm-kernel


On Tue, 21 Jul 2020 22:24:55 +0900 Masami Hiramatsu wrote:

> 
> 
> Hi Jisheng,

Hi,

> 
> Are you still working on this series?

I will rebase the implementation on the latest code, then try to address
your comments and Mark's comments. I will send out patches this weekend.

> 
> If you still want to put a probe on func+4, it is OK if you can
> completely emulate the 1st instruction (save lr on the stack and
> change regs->sp).

Will check which is the better solution.

Thank you very much


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE
  2020-07-24  7:06             ` Jisheng Zhang
@ 2020-07-24 16:54               ` Masami Hiramatsu
  0 siblings, 0 replies; 15+ messages in thread
From: Masami Hiramatsu @ 2020-07-24 16:54 UTC (permalink / raw)
  To: Jisheng Zhang
  Cc: Mark Rutland, linux-doc, Catalin Marinas, Jonathan Corbet,
	linux-kernel, Anil S Keshavamurthy, Ingo Molnar, Steven Rostedt,
	Naveen N. Rao, Will Deacon, David S. Miller, linux-arm-kernel

On Fri, 24 Jul 2020 15:06:11 +0800
Jisheng Zhang <Jisheng.Zhang@synaptics.com> wrote:

> 
> On Tue, 21 Jul 2020 22:24:55 +0900 Masami Hiramatsu wrote:
> 
> > 
> > 
> > Hi Jisheng,
> 
> Hi,
> 
> > 
> > Are you still working on this series?
> 
> I will rebase the implementation on the latest code, then try to address
> your comments and Mark's comments. I will send out patches this weekend.
> 
> > 
> > If you still want to put a probe on func+4, it is OK if you can
> > completely emulate the 1st instruction (save lr on the stack and
> > change regs->sp).
> 
> Will check which is the better solution.

Thanks Jisheng!

What I'm considering is the consistency of pre_handler()@addr and
post_handler()@addr+4. Also, whether the value of regs (and stacks) is
the same as the user expects.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread

Thread overview: 15+ messages
2019-12-25  9:40 [PATCH v7 0/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
2019-12-25  9:42 ` [PATCH v7 1/3] kprobes/ftrace: Use ftrace_location() when [dis]arming probes Jisheng Zhang
2019-12-25  9:46   ` Jisheng Zhang
2019-12-25  9:42 ` [PATCH v7 2/3] ftrace: introduce FTRACE_IP_EXTENSION Jisheng Zhang
2019-12-26  2:45   ` Masami Hiramatsu
2020-01-08  0:05   ` Steven Rostedt
2019-12-25  9:44 ` [PATCH v7 3/3] arm64: implement KPROBES_ON_FTRACE Jisheng Zhang
2019-12-26  2:57   ` Masami Hiramatsu
2019-12-26  3:18     ` Jisheng Zhang
2019-12-26  4:25       ` Jisheng Zhang
2019-12-26  9:26         ` Masami Hiramatsu
2020-07-21 13:24           ` Masami Hiramatsu
2020-07-24  7:06             ` Jisheng Zhang
2020-07-24 16:54               ` Masami Hiramatsu
2020-02-28 15:31   ` Mark Rutland
