linux-csky.vger.kernel.org archive mirror
* [PATCH v3 0/7] riscv: Add k/uprobe supported
@ 2020-07-13 23:39 guoren
  2020-07-13 23:39 ` [PATCH v3 1/7] RISC-V: Implement ptrace regs and stack API guoren
                   ` (7 more replies)
  0 siblings, 8 replies; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

The patchset adds kprobe/uprobe support and some related fixups.
Patrick provided the HAVE_REGS_AND_STACK_ACCESS_API support and part of
the kprobe code. The k/uprobe framework comes from csky but also draws
on other arches' implementations. Kprobes on ftrace is supported in the
patchset as well.

There is no single-step exception in the riscv ISA, only a single-step
facility for JTAG. See the riscv privileged spec:

Interrupt | Exception Code | Description
----------+----------------+-------------------------------
    1     |       0        | Reserved
    1     |       1        | Supervisor software interrupt
    1     |      2–4       | Reserved
    1     |       5        | Supervisor timer interrupt
    1     |      6–8       | Reserved
    1     |       9        | Supervisor external interrupt
    1     |     10–15      | Reserved
    1     |      ≥16       | Available for platform use
    0     |       0        | Instruction address misaligned
    0     |       1        | Instruction access fault
    0     |       2        | Illegal instruction
    0     |       3        | Breakpoint
    0     |       4        | Load address misaligned
    0     |       5        | Load access fault
    0     |       6        | Store/AMO address misaligned
    0     |       7        | Store/AMO access fault
    0     |       8        | Environment call from U-mode
    0     |       9        | Environment call from S-mode
    0     |     10–11      | Reserved
    0     |      12        | Instruction page fault
    0     |      13        | Load page fault
    0     |      14        | Reserved
    0     |      15        | Store/AMO page fault
    0     |     16–23      | Reserved
    0     |     24–31      | Available for custom use
    0     |     32–47      | Reserved
    0     |     48–63      | Available for custom use
    0     |      ≥64       | Reserved

No single step!

Other arches use a hardware single-step exception for k/uprobes, e.g.:
 - powerpc: regs->msr |= MSR_SINGLESTEP
 - arm/arm64: PSTATE.D for enabling software step exceptions
 - s390: set the PER control regs to turn on single-step for the given address
 - x86: regs->flags |= X86_EFLAGS_TF
 - csky: of course use hw single step :)

All of the above arches use a hardware single-step exception
mechanism to execute the instruction that was replaced with a probe
breakpoint. So on riscv we utilize ebreak to simulate it.

Some pc-relative instructions can't be executed out of line, and some
system/fence instructions can't be probe sites at all. So we provide a
reject list and a simulate list in decode-insn.c.

You can use a uprobe to test the simulation code like this:

 echo 'p:enter_current_state_one /hello:0x6e4 a0=%a0 a1=%a1' >> /sys/kernel/debug/tracing/uprobe_events
 echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable
 /hello
 ^C
 cat /sys/kernel/debug/tracing/trace
 tracer: nop

 entries-in-buffer/entries-written: 1/1   #P:1

                              _-----=> irqs-off
                             / _----=> need-resched
                            | / _---=> hardirq/softirq
                            || / _--=> preempt-depth
                            ||| /     delay
           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
              | |       |   ||||       |         |
          hello-94    [000] d...    55.404242: enter_current_state_one: (0x106e4) a0=0x1 a1=0x3fffa8ada8

Note that /hello:0x6e4 is the file offset in the ELF, which corresponds
to 0x106e4 in memory, and hello is your target ELF program.

Try kprobe like this:

 echo 'p:myprobe _do_fork dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
 echo 'r:myretprobe _do_fork $retval' >> /sys/kernel/debug/tracing/kprobe_events

 echo 1 >/sys/kernel/debug/tracing/events/kprobes/enable
 cat /sys/kernel/debug/tracing/trace
 tracer: nop

 entries-in-buffer/entries-written: 2/2   #P:1

                              _-----=> irqs-off
                             / _----=> need-resched
                            | / _---=> hardirq/softirq
                            || / _--=> preempt-depth
                            ||| /     delay
           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
              | |       |   ||||       |         |
             sh-92    [000] .n..   131.804230: myprobe: (_do_fork+0x0/0x2e6) dfd=0xffffffe03929fdf8 filename=0x0 flags=0x101000 mode=0x1200000ffffffe0
             sh-92    [000] d...   131.806607: myretprobe: (__do_sys_clone+0x70/0x82 <- _do_fork) arg1=0x5f
 cat /sys/kernel/debug/tracing/trace

Changelog v3:
 - Add support for function error injection
 - Fixup kprobes handler couldn't change pc

Changelog v2:
 - Add Reviewed-by, Tested-by, Acked-by tags; thanks to all of you
 - Add the kprobes-on-ftrace feature
 - Use __always_inline, the same as fix_to_virt, to fix the
   BUILD_BUG_ON
 - Use "const unsigned int" for the 2nd param to fix the
   BUILD_BUG_ON

Guo Ren (6):
  riscv: Fixup compile error BUILD_BUG_ON failed
  riscv: Fixup kprobes handler couldn't change pc
  riscv: Add kprobes supported
  riscv: Add uprobes supported
  riscv: Add KPROBES_ON_FTRACE supported
  riscv: Add support for function error injection

Patrick Stählin (1):
  RISC-V: Implement ptrace regs and stack API

 arch/riscv/Kconfig                            |   8 +
 arch/riscv/include/asm/kprobes.h              |  40 +++
 arch/riscv/include/asm/probes.h               |  24 ++
 arch/riscv/include/asm/processor.h            |   1 +
 arch/riscv/include/asm/ptrace.h               |  35 ++
 arch/riscv/include/asm/thread_info.h          |   4 +-
 arch/riscv/include/asm/uprobes.h              |  40 +++
 arch/riscv/kernel/Makefile                    |   1 +
 arch/riscv/kernel/mcount-dyn.S                |   3 +-
 arch/riscv/kernel/patch.c                     |   8 +-
 arch/riscv/kernel/probes/Makefile             |   6 +
 arch/riscv/kernel/probes/decode-insn.c        |  48 +++
 arch/riscv/kernel/probes/decode-insn.h        |  18 +
 arch/riscv/kernel/probes/ftrace.c             |  52 +++
 arch/riscv/kernel/probes/kprobes.c            | 471 ++++++++++++++++++++++++++
 arch/riscv/kernel/probes/kprobes_trampoline.S |  93 +++++
 arch/riscv/kernel/probes/simulate-insn.c      |  85 +++++
 arch/riscv/kernel/probes/simulate-insn.h      |  47 +++
 arch/riscv/kernel/probes/uprobes.c            | 186 ++++++++++
 arch/riscv/kernel/ptrace.c                    |  99 ++++++
 arch/riscv/kernel/signal.c                    |   3 +
 arch/riscv/kernel/traps.c                     |  19 ++
 arch/riscv/lib/Makefile                       |   2 +
 arch/riscv/lib/error-inject.c                 |  10 +
 arch/riscv/mm/fault.c                         |  11 +
 25 files changed, 1310 insertions(+), 4 deletions(-)
 create mode 100644 arch/riscv/include/asm/probes.h
 create mode 100644 arch/riscv/include/asm/uprobes.h
 create mode 100644 arch/riscv/kernel/probes/Makefile
 create mode 100644 arch/riscv/kernel/probes/decode-insn.c
 create mode 100644 arch/riscv/kernel/probes/decode-insn.h
 create mode 100644 arch/riscv/kernel/probes/ftrace.c
 create mode 100644 arch/riscv/kernel/probes/kprobes.c
 create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
 create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
 create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
 create mode 100644 arch/riscv/kernel/probes/uprobes.c
 create mode 100644 arch/riscv/lib/error-inject.c

-- 
2.7.4


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v3 1/7] RISC-V: Implement ptrace regs and stack API
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
@ 2020-07-13 23:39 ` guoren
  2020-07-14 11:25   ` Masami Hiramatsu
  2020-07-13 23:39 ` [PATCH v3 2/7] riscv: Fixup compile error BUILD_BUG_ON failed guoren
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren

From: Patrick Stählin <me@packi.ch>

Needed for kprobes support. Copied and adapted from arm64 code.

Guo Ren fixed up the pt_regs type for linux-5.8-rc1.

Signed-off-by: Patrick Stählin <me@packi.ch>
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Reviewed-by: Zong Li <zong.li@sifive.com>
---
 arch/riscv/Kconfig              |  1 +
 arch/riscv/include/asm/ptrace.h | 29 ++++++++++++
 arch/riscv/kernel/ptrace.c      | 99 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 129 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 3230c1d..e70449a 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -78,6 +78,7 @@ config RISCV
 	select SPARSE_IRQ
 	select SYSCTL_EXCEPTION_TRACE
 	select THREAD_INFO_IN_TASK
+	select HAVE_REGS_AND_STACK_ACCESS_API
 
 config ARCH_MMAP_RND_BITS_MIN
 	default 18 if 64BIT
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
index ee49f80..23372bb 100644
--- a/arch/riscv/include/asm/ptrace.h
+++ b/arch/riscv/include/asm/ptrace.h
@@ -8,6 +8,7 @@
 
 #include <uapi/asm/ptrace.h>
 #include <asm/csr.h>
+#include <linux/compiler.h>
 
 #ifndef __ASSEMBLY__
 
@@ -60,6 +61,7 @@ struct pt_regs {
 
 #define user_mode(regs) (((regs)->status & SR_PP) == 0)
 
+#define MAX_REG_OFFSET offsetof(struct pt_regs, orig_a0)
 
 /* Helpers for working with the instruction pointer */
 static inline unsigned long instruction_pointer(struct pt_regs *regs)
@@ -85,6 +87,12 @@ static inline void user_stack_pointer_set(struct pt_regs *regs,
 	regs->sp =  val;
 }
 
+/* Valid only for Kernel mode traps. */
+static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
+{
+	return regs->sp;
+}
+
 /* Helpers for working with the frame pointer */
 static inline unsigned long frame_pointer(struct pt_regs *regs)
 {
@@ -101,6 +109,27 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
 	return regs->a0;
 }
 
+extern int regs_query_register_offset(const char *name);
+extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
+					       unsigned int n);
+
+/**
+ * regs_get_register() - get register value from its offset
+ * @regs:	pt_regs from which register value is gotten
+ * @offset:	offset of the register.
+ *
+ * regs_get_register returns the value of the register whose offset from
+ * The @offset is the offset of the register in struct pt_regs.
+ * If @offset is bigger than MAX_REG_OFFSET, this returns 0.
+ */
+static inline unsigned long regs_get_register(struct pt_regs *regs,
+					      unsigned int offset)
+{
+	if (unlikely(offset > MAX_REG_OFFSET))
+		return 0;
+
+	return *(unsigned long *)((unsigned long)regs + offset);
+}
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_RISCV_PTRACE_H */
diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
index 444dc7b..a11c692 100644
--- a/arch/riscv/kernel/ptrace.c
+++ b/arch/riscv/kernel/ptrace.c
@@ -125,6 +125,105 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
 	return &riscv_user_native_view;
 }
 
+struct pt_regs_offset {
+	const char *name;
+	int offset;
+};
+
+#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)}
+#define REG_OFFSET_END {.name = NULL, .offset = 0}
+
+static const struct pt_regs_offset regoffset_table[] = {
+	REG_OFFSET_NAME(epc),
+	REG_OFFSET_NAME(ra),
+	REG_OFFSET_NAME(sp),
+	REG_OFFSET_NAME(gp),
+	REG_OFFSET_NAME(tp),
+	REG_OFFSET_NAME(t0),
+	REG_OFFSET_NAME(t1),
+	REG_OFFSET_NAME(t2),
+	REG_OFFSET_NAME(s0),
+	REG_OFFSET_NAME(s1),
+	REG_OFFSET_NAME(a0),
+	REG_OFFSET_NAME(a1),
+	REG_OFFSET_NAME(a2),
+	REG_OFFSET_NAME(a3),
+	REG_OFFSET_NAME(a4),
+	REG_OFFSET_NAME(a5),
+	REG_OFFSET_NAME(a6),
+	REG_OFFSET_NAME(a7),
+	REG_OFFSET_NAME(s2),
+	REG_OFFSET_NAME(s3),
+	REG_OFFSET_NAME(s4),
+	REG_OFFSET_NAME(s5),
+	REG_OFFSET_NAME(s6),
+	REG_OFFSET_NAME(s7),
+	REG_OFFSET_NAME(s8),
+	REG_OFFSET_NAME(s9),
+	REG_OFFSET_NAME(s10),
+	REG_OFFSET_NAME(s11),
+	REG_OFFSET_NAME(t3),
+	REG_OFFSET_NAME(t4),
+	REG_OFFSET_NAME(t5),
+	REG_OFFSET_NAME(t6),
+	REG_OFFSET_NAME(status),
+	REG_OFFSET_NAME(badaddr),
+	REG_OFFSET_NAME(cause),
+	REG_OFFSET_NAME(orig_a0),
+	REG_OFFSET_END,
+};
+
+/**
+ * regs_query_register_offset() - query register offset from its name
+ * @name:	the name of a register
+ *
+ * regs_query_register_offset() returns the offset of a register in struct
+ * pt_regs from its name. If the name is invalid, this returns -EINVAL;
+ */
+int regs_query_register_offset(const char *name)
+{
+	const struct pt_regs_offset *roff;
+
+	for (roff = regoffset_table; roff->name != NULL; roff++)
+		if (!strcmp(roff->name, name))
+			return roff->offset;
+	return -EINVAL;
+}
+
+/**
+ * regs_within_kernel_stack() - check the address in the stack
+ * @regs:      pt_regs which contains kernel stack pointer.
+ * @addr:      address which is checked.
+ *
+ * regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
+ * If @addr is within the kernel stack, it returns true. If not, returns false.
+ */
+static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
+{
+	return (addr & ~(THREAD_SIZE - 1))  ==
+		(kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1));
+}
+
+/**
+ * regs_get_kernel_stack_nth() - get Nth entry of the stack
+ * @regs:	pt_regs which contains kernel stack pointer.
+ * @n:		stack entry number.
+ *
+ * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
+ * is specified by @regs. If the @n th entry is NOT in the kernel stack,
+ * this returns 0.
+ */
+unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
+{
+	unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
+
+	addr += n;
+	if (regs_within_kernel_stack(regs, (unsigned long)addr))
+		return *addr;
+	else
+		return 0;
+}
+
 void ptrace_disable(struct task_struct *child)
 {
 	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
-- 
2.7.4



* [PATCH v3 2/7] riscv: Fixup compile error BUILD_BUG_ON failed
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
  2020-07-13 23:39 ` [PATCH v3 1/7] RISC-V: Implement ptrace regs and stack API guoren
@ 2020-07-13 23:39 ` guoren
  2020-07-13 23:39 ` [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc guoren
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren, Palmer Dabbelt

From: Guo Ren <guoren@linux.alibaba.com>

Unfortunately, the current code doesn't compile:

  CC      arch/riscv/kernel/patch.o
In file included from ./include/linux/kernel.h:11,
                 from ./include/linux/list.h:9,
                 from ./include/linux/preempt.h:11,
                 from ./include/linux/spinlock.h:51,
                 from arch/riscv/kernel/patch.c:6:
In function ‘fix_to_virt’,
    inlined from ‘patch_map’ at arch/riscv/kernel/patch.c:37:17:
./include/linux/compiler.h:392:38: error: call to ‘__compiletime_assert_205’ declared with attribute error: BUILD_BUG_ON failed: idx >= __end_of_fixed_addresses
  _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
                                      ^
./include/linux/compiler.h:373:4: note: in definition of macro ‘__compiletime_assert’
    prefix ## suffix();    \
    ^~~~~~
./include/linux/compiler.h:392:2: note: in expansion of macro ‘_compiletime_assert’
  _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
  ^~~~~~~~~~~~~~~~~~~
./include/linux/build_bug.h:39:37: note: in expansion of macro ‘compiletime_assert’
 #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
                                     ^~~~~~~~~~~~~~~~~~
./include/linux/build_bug.h:50:2: note: in expansion of macro ‘BUILD_BUG_ON_MSG’
  BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
  ^~~~~~~~~~~~~~~~
./include/asm-generic/fixmap.h:32:2: note: in expansion of macro ‘BUILD_BUG_ON’
  BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
  ^~~~~~~~~~~~

Because fix_to_virt(, idx) needs a const value, not a dynamic variable
in reg-a0; otherwise BUILD_BUG_ON fails with "idx >=
__end_of_fixed_addresses".

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
---
 arch/riscv/kernel/patch.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kernel/patch.c b/arch/riscv/kernel/patch.c
index 3fe7a52..0b55287 100644
--- a/arch/riscv/kernel/patch.c
+++ b/arch/riscv/kernel/patch.c
@@ -20,7 +20,12 @@ struct patch_insn {
 };
 
 #ifdef CONFIG_MMU
-static void *patch_map(void *addr, int fixmap)
+/*
+ * The fix_to_virt(, idx) needs a const value (not a dynamic variable of
+ * reg-a0) or BUILD_BUG_ON failed with "idx >= __end_of_fixed_addresses".
+ * So use '__always_inline' and 'const unsigned int fixmap' here.
+ */
+static __always_inline void *patch_map(void *addr, const unsigned int fixmap)
 {
 	uintptr_t uintaddr = (uintptr_t) addr;
 	struct page *page;
@@ -37,7 +42,6 @@ static void *patch_map(void *addr, int fixmap)
 	return (void *)set_fixmap_offset(fixmap, page_to_phys(page) +
 					 (uintaddr & ~PAGE_MASK));
 }
-NOKPROBE_SYMBOL(patch_map);
 
 static void patch_unmap(int fixmap)
 {
-- 
2.7.4



* [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
  2020-07-13 23:39 ` [PATCH v3 1/7] RISC-V: Implement ptrace regs and stack API guoren
  2020-07-13 23:39 ` [PATCH v3 2/7] riscv: Fixup compile error BUILD_BUG_ON failed guoren
@ 2020-07-13 23:39 ` guoren
  2020-07-14 11:32   ` Masami Hiramatsu
  2020-08-14 22:36   ` Palmer Dabbelt
  2020-07-13 23:39 ` [PATCH v3 4/7] riscv: Add kprobes supported guoren
                   ` (4 subsequent siblings)
  7 siblings, 2 replies; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

The "Changing Execution Path" section in the Documentation/kprobes.txt
said:

Since kprobes can probe into a running kernel code, it can change the
register set, including instruction pointer.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
---
 arch/riscv/kernel/mcount-dyn.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 35a6ed7..4b58b54 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -123,6 +123,7 @@ ENDPROC(ftrace_caller)
 	sd	ra, (PT_SIZE_ON_STACK+8)(sp)
 	addi	s0, sp, (PT_SIZE_ON_STACK+16)
 
+	sd ra,  PT_EPC(sp)
 	sd x1,  PT_RA(sp)
 	sd x2,  PT_SP(sp)
 	sd x3,  PT_GP(sp)
@@ -157,6 +158,7 @@ ENDPROC(ftrace_caller)
 	.endm
 
 	.macro RESTORE_ALL
+	ld ra,  PT_EPC(sp)
 	ld x1,  PT_RA(sp)
 	ld x2,  PT_SP(sp)
 	ld x3,  PT_GP(sp)
@@ -190,7 +192,6 @@ ENDPROC(ftrace_caller)
 	ld x31, PT_T6(sp)
 
 	ld	s0, (PT_SIZE_ON_STACK)(sp)
-	ld	ra, (PT_SIZE_ON_STACK+8)(sp)
 	addi	sp, sp, (PT_SIZE_ON_STACK+16)
 	.endm
 
-- 
2.7.4



* [PATCH v3 4/7] riscv: Add kprobes supported
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
                   ` (2 preceding siblings ...)
  2020-07-13 23:39 ` [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc guoren
@ 2020-07-13 23:39 ` guoren
  2020-08-14 22:36   ` Palmer Dabbelt
  2020-07-13 23:39 ` [PATCH v3 5/7] riscv: Add uprobes supported guoren
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

This patch enables "kprobe & kretprobe" to work with the ftrace
interface. It utilizes a software breakpoint as the single-step
mechanism.

Some instructions which can't be single-step executed must be
simulated in a kernel execution slot, such as: branch, jal, auipc,
la ...

Some instructions should be rejected for probing, and we use a
blacklist to filter them, such as: ecall, ebreak, ...

We use ebreak & c.ebreak to replace the original instruction, and the
kprobe handler prepares an executable memory slot for out-of-line
execution with a copy of the original instruction being probed.
In the execution slot we add an ebreak behind the original instruction
to simulate a single-step mechanism.

The patch is based on packi's work [1] and csky's work [2].
 - The kprobes_trampoline.S is all from packi's patch
 - The single-step mechanism is newly designed for riscv, which has no
   hw single-step trap
 - The simulation code is from csky
 - Frankly, all the code refers to other arches' implementations

 [1] https://lore.kernel.org/linux-riscv/20181113195804.22825-1-me@packi.ch/
 [2] https://lore.kernel.org/linux-csky/20200403044150.20562-9-guoren@kernel.org/

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Co-Developed-by: Patrick Stählin <me@packi.ch>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Tested-by: Zong Li <zong.li@sifive.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Cc: Patrick Stählin <me@packi.ch>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Björn Töpel <bjorn.topel@gmail.com>
---
 arch/riscv/Kconfig                            |   2 +
 arch/riscv/include/asm/kprobes.h              |  40 +++
 arch/riscv/include/asm/probes.h               |  24 ++
 arch/riscv/kernel/Makefile                    |   1 +
 arch/riscv/kernel/probes/Makefile             |   4 +
 arch/riscv/kernel/probes/decode-insn.c        |  48 +++
 arch/riscv/kernel/probes/decode-insn.h        |  18 +
 arch/riscv/kernel/probes/kprobes.c            | 471 ++++++++++++++++++++++++++
 arch/riscv/kernel/probes/kprobes_trampoline.S |  93 +++++
 arch/riscv/kernel/probes/simulate-insn.c      |  85 +++++
 arch/riscv/kernel/probes/simulate-insn.h      |  47 +++
 arch/riscv/kernel/traps.c                     |   9 +
 arch/riscv/mm/fault.c                         |   4 +
 13 files changed, 846 insertions(+)
 create mode 100644 arch/riscv/include/asm/probes.h
 create mode 100644 arch/riscv/kernel/probes/Makefile
 create mode 100644 arch/riscv/kernel/probes/decode-insn.c
 create mode 100644 arch/riscv/kernel/probes/decode-insn.h
 create mode 100644 arch/riscv/kernel/probes/kprobes.c
 create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
 create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
 create mode 100644 arch/riscv/kernel/probes/simulate-insn.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index e70449a..b86b2a2 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -59,6 +59,8 @@ config RISCV
 	select HAVE_EBPF_JIT if MMU
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select HAVE_GENERIC_VDSO if MMU && 64BIT
+	select HAVE_KPROBES
+	select HAVE_KRETPROBES
 	select HAVE_PCI
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_REGS
diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
index 56a98ea3..4647d38 100644
--- a/arch/riscv/include/asm/kprobes.h
+++ b/arch/riscv/include/asm/kprobes.h
@@ -11,4 +11,44 @@
 
 #include <asm-generic/kprobes.h>
 
+#ifdef CONFIG_KPROBES
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/percpu.h>
+
+#define __ARCH_WANT_KPROBES_INSN_SLOT
+#define MAX_INSN_SIZE			2
+
+#define flush_insn_slot(p)		do { } while (0)
+#define kretprobe_blacklist_size	0
+
+#include <asm/probes.h>
+
+struct prev_kprobe {
+	struct kprobe *kp;
+	unsigned int status;
+};
+
+/* Single step context for kprobe */
+struct kprobe_step_ctx {
+	unsigned long ss_pending;
+	unsigned long match_addr;
+};
+
+/* per-cpu kprobe control block */
+struct kprobe_ctlblk {
+	unsigned int kprobe_status;
+	unsigned long saved_status;
+	struct prev_kprobe prev_kprobe;
+	struct kprobe_step_ctx ss_ctx;
+};
+
+void arch_remove_kprobe(struct kprobe *p);
+int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
+bool kprobe_breakpoint_handler(struct pt_regs *regs);
+bool kprobe_single_step_handler(struct pt_regs *regs);
+void kretprobe_trampoline(void);
+void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
+
+#endif /* CONFIG_KPROBES */
 #endif /* _ASM_RISCV_KPROBES_H */
diff --git a/arch/riscv/include/asm/probes.h b/arch/riscv/include/asm/probes.h
new file mode 100644
index 00000000..a787e6d
--- /dev/null
+++ b/arch/riscv/include/asm/probes.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_PROBES_H
+#define _ASM_RISCV_PROBES_H
+
+typedef u32 probe_opcode_t;
+typedef bool (probes_handler_t) (u32 opcode, unsigned long addr, struct pt_regs *);
+
+/* architecture specific copy of original instruction */
+struct arch_probe_insn {
+	probe_opcode_t *insn;
+	probes_handler_t *handler;
+	/* restore address after simulation */
+	unsigned long restore;
+};
+
+#ifdef CONFIG_KPROBES
+typedef u32 kprobe_opcode_t;
+struct arch_specific_insn {
+	struct arch_probe_insn api;
+};
+#endif
+
+#endif /* _ASM_RISCV_PROBES_H */
diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
index b355cf4..c3fff3e 100644
--- a/arch/riscv/kernel/Makefile
+++ b/arch/riscv/kernel/Makefile
@@ -29,6 +29,7 @@ obj-y	+= riscv_ksyms.o
 obj-y	+= stacktrace.o
 obj-y	+= cacheinfo.o
 obj-y	+= patch.o
+obj-y	+= probes/
 obj-$(CONFIG_MMU) += vdso.o vdso/
 
 obj-$(CONFIG_RISCV_M_MODE)	+= clint.o traps_misaligned.o
diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
new file mode 100644
index 00000000..8a39507
--- /dev/null
+++ b/arch/riscv/kernel/probes/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o simulate-insn.o
+obj-$(CONFIG_KPROBES)		+= kprobes_trampoline.o
+CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/riscv/kernel/probes/decode-insn.c b/arch/riscv/kernel/probes/decode-insn.c
new file mode 100644
index 00000000..0876c30
--- /dev/null
+++ b/arch/riscv/kernel/probes/decode-insn.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include <linux/kernel.h>
+#include <linux/kprobes.h>
+#include <linux/module.h>
+#include <linux/kallsyms.h>
+#include <asm/sections.h>
+
+#include "decode-insn.h"
+#include "simulate-insn.h"
+
+/* Return:
+ *   INSN_REJECTED     If instruction is one not allowed to kprobe,
+ *   INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
+ */
+enum probe_insn __kprobes
+riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api)
+{
+	probe_opcode_t insn = le32_to_cpu(*addr);
+
+	/*
+	 * Reject instructions list:
+	 */
+	RISCV_INSN_REJECTED(system,		insn);
+	RISCV_INSN_REJECTED(fence,		insn);
+
+	/*
+	 * Simulate instructions list:
+	 * TODO: the REJECTED ones below need to be implemented
+	 */
+#ifdef CONFIG_RISCV_ISA_C
+	RISCV_INSN_REJECTED(c_j,		insn);
+	RISCV_INSN_REJECTED(c_jr,		insn);
+	RISCV_INSN_REJECTED(c_jal,		insn);
+	RISCV_INSN_REJECTED(c_jalr,		insn);
+	RISCV_INSN_REJECTED(c_beqz,		insn);
+	RISCV_INSN_REJECTED(c_bnez,		insn);
+	RISCV_INSN_REJECTED(c_ebreak,		insn);
+#endif
+
+	RISCV_INSN_REJECTED(auipc,		insn);
+	RISCV_INSN_REJECTED(branch,		insn);
+
+	RISCV_INSN_SET_SIMULATE(jal,		insn);
+	RISCV_INSN_SET_SIMULATE(jalr,		insn);
+
+	return INSN_GOOD;
+}
diff --git a/arch/riscv/kernel/probes/decode-insn.h b/arch/riscv/kernel/probes/decode-insn.h
new file mode 100644
index 00000000..42269a7
--- /dev/null
+++ b/arch/riscv/kernel/probes/decode-insn.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+
+#ifndef _RISCV_KERNEL_KPROBES_DECODE_INSN_H
+#define _RISCV_KERNEL_KPROBES_DECODE_INSN_H
+
+#include <asm/sections.h>
+#include <asm/kprobes.h>
+
+enum probe_insn {
+	INSN_REJECTED,
+	INSN_GOOD_NO_SLOT,
+	INSN_GOOD,
+};
+
+enum probe_insn __kprobes
+riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *asi);
+
+#endif /* _RISCV_KERNEL_KPROBES_DECODE_INSN_H */
diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
new file mode 100644
index 00000000..31b6196
--- /dev/null
+++ b/arch/riscv/kernel/probes/kprobes.c
@@ -0,0 +1,471 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include <linux/kprobes.h>
+#include <linux/extable.h>
+#include <linux/slab.h>
+#include <linux/stop_machine.h>
+#include <asm/ptrace.h>
+#include <linux/uaccess.h>
+#include <asm/sections.h>
+#include <asm/cacheflush.h>
+#include <asm/bug.h>
+#include <asm/patch.h>
+
+#include "decode-insn.h"
+
+DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
+DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
+
+static void __kprobes
+post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
+
+static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
+{
+	unsigned long offset = GET_INSN_LENGTH(p->opcode);
+
+	p->ainsn.api.restore = (unsigned long)p->addr + offset;
+
+	patch_text(p->ainsn.api.insn, p->opcode);
+	patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
+		   __BUG_INSN_32);
+}
+
+static void __kprobes arch_prepare_simulate(struct kprobe *p)
+{
+	p->ainsn.api.restore = 0;
+}
+
+static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	if (p->ainsn.api.handler)
+		p->ainsn.api.handler((u32)p->opcode,
+					(unsigned long)p->addr, regs);
+
+	post_kprobe_handler(kcb, regs);
+}
+
+int __kprobes arch_prepare_kprobe(struct kprobe *p)
+{
+	unsigned long probe_addr = (unsigned long)p->addr;
+
+	if (probe_addr & 0x1) {
+		pr_warn("Address not aligned.\n");
+
+		return -EINVAL;
+	}
+
+	/* copy instruction */
+	p->opcode = le32_to_cpu(*p->addr);
+
+	/* decode instruction */
+	switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
+	case INSN_REJECTED:	/* insn not supported */
+		return -EINVAL;
+
+	case INSN_GOOD_NO_SLOT:	/* insn need simulation */
+		p->ainsn.api.insn = NULL;
+		break;
+
+	case INSN_GOOD:	/* instruction uses slot */
+		p->ainsn.api.insn = get_insn_slot();
+		if (!p->ainsn.api.insn)
+			return -ENOMEM;
+		break;
+	}
+
+	/* prepare the instruction */
+	if (p->ainsn.api.insn)
+		arch_prepare_ss_slot(p);
+	else
+		arch_prepare_simulate(p);
+
+	return 0;
+}
+
+/* install breakpoint in text */
+void __kprobes arch_arm_kprobe(struct kprobe *p)
+{
+	if ((p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
+		patch_text(p->addr, __BUG_INSN_32);
+	else
+		patch_text(p->addr, __BUG_INSN_16);
+}
+
+/* remove breakpoint from text */
+void __kprobes arch_disarm_kprobe(struct kprobe *p)
+{
+	patch_text(p->addr, p->opcode);
+}
+
+void __kprobes arch_remove_kprobe(struct kprobe *p)
+{
+}
+
+static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+	kcb->prev_kprobe.kp = kprobe_running();
+	kcb->prev_kprobe.status = kcb->kprobe_status;
+}
+
+static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
+{
+	__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
+	kcb->kprobe_status = kcb->prev_kprobe.status;
+}
+
+static void __kprobes set_current_kprobe(struct kprobe *p)
+{
+	__this_cpu_write(current_kprobe, p);
+}
+
+/*
+ * Interrupts need to be disabled before single-step mode is set, and not
+ * reenabled until after single-step mode ends.
+ * Without disabling interrupt on local CPU, there is a chance of
+ * interrupt occurrence in the period of exception return and  start of
+ * out-of-line single-step, that result in wrongly single stepping
+ * into the interrupt handler.
+ */
+static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
+						struct pt_regs *regs)
+{
+	kcb->saved_status = regs->status;
+	regs->status &= ~SR_SPIE;
+}
+
+static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
+						struct pt_regs *regs)
+{
+	regs->status = kcb->saved_status;
+}
+
+static void __kprobes
+set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr, struct kprobe *p)
+{
+	unsigned long offset = GET_INSN_LENGTH(p->opcode);
+
+	kcb->ss_ctx.ss_pending = true;
+	kcb->ss_ctx.match_addr = addr + offset;
+}
+
+static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
+{
+	kcb->ss_ctx.ss_pending = false;
+	kcb->ss_ctx.match_addr = 0;
+}
+
+static void __kprobes setup_singlestep(struct kprobe *p,
+				       struct pt_regs *regs,
+				       struct kprobe_ctlblk *kcb, int reenter)
+{
+	unsigned long slot;
+
+	if (reenter) {
+		save_previous_kprobe(kcb);
+		set_current_kprobe(p);
+		kcb->kprobe_status = KPROBE_REENTER;
+	} else {
+		kcb->kprobe_status = KPROBE_HIT_SS;
+	}
+
+	if (p->ainsn.api.insn) {
+		/* prepare for single stepping */
+		slot = (unsigned long)p->ainsn.api.insn;
+
+		set_ss_context(kcb, slot, p);	/* mark pending ss */
+
+		/* IRQs and single stepping do not mix well. */
+		kprobes_save_local_irqflag(kcb, regs);
+
+		instruction_pointer_set(regs, slot);
+	} else {
+		/* insn simulation */
+		arch_simulate_insn(p, regs);
+	}
+}
+
+static int __kprobes reenter_kprobe(struct kprobe *p,
+				    struct pt_regs *regs,
+				    struct kprobe_ctlblk *kcb)
+{
+	switch (kcb->kprobe_status) {
+	case KPROBE_HIT_SSDONE:
+	case KPROBE_HIT_ACTIVE:
+		kprobes_inc_nmissed_count(p);
+		setup_singlestep(p, regs, kcb, 1);
+		break;
+	case KPROBE_HIT_SS:
+	case KPROBE_REENTER:
+		pr_warn("Unrecoverable kprobe detected.\n");
+		dump_kprobe(p);
+		BUG();
+		break;
+	default:
+		WARN_ON(1);
+		return 0;
+	}
+
+	return 1;
+}
+
+static void __kprobes
+post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
+{
+	struct kprobe *cur = kprobe_running();
+
+	if (!cur)
+		return;
+
+	/* return addr restore if non-branching insn */
+	if (cur->ainsn.api.restore != 0)
+		regs->epc = cur->ainsn.api.restore;
+
+	/* restore back original saved kprobe variables and continue */
+	if (kcb->kprobe_status == KPROBE_REENTER) {
+		restore_previous_kprobe(kcb);
+		return;
+	}
+
+	/* call post handler */
+	kcb->kprobe_status = KPROBE_HIT_SSDONE;
+	if (cur->post_handler)	{
+		/*
+		 * post_handler can hit a breakpoint and single step
+		 * again, triggering a recursive exception.
+		 */
+		cur->post_handler(cur, regs, 0);
+	}
+
+	reset_current_kprobe();
+}
+
+int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
+{
+	struct kprobe *cur = kprobe_running();
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	switch (kcb->kprobe_status) {
+	case KPROBE_HIT_SS:
+	case KPROBE_REENTER:
+		/*
+		 * We are here because the instruction being single
+		 * stepped caused a page fault. We reset the current
+		 * kprobe and the ip points back to the probe address
+		 * and allow the page fault handler to continue as a
+		 * normal page fault.
+		 */
+		regs->epc = (unsigned long) cur->addr;
+		if (!instruction_pointer(regs))
+			BUG();
+
+		if (kcb->kprobe_status == KPROBE_REENTER)
+			restore_previous_kprobe(kcb);
+		else
+			reset_current_kprobe();
+
+		break;
+	case KPROBE_HIT_ACTIVE:
+	case KPROBE_HIT_SSDONE:
+		/*
+		 * We increment the nmissed count for accounting,
+		 * we can also use npre/npostfault count for accounting
+		 * these specific fault cases.
+		 */
+		kprobes_inc_nmissed_count(cur);
+
+		/*
+		 * We come here because instructions in the pre/post
+		 * handler caused the page_fault, this could happen
+		 * if handler tries to access user space by
+		 * copy_from_user(), get_user() etc. Let the
+		 * user-specified handler try to fix it first.
+		 */
+		if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
+			return 1;
+
+		/*
+		 * In case the user-specified fault handler returned
+		 * zero, try to fix up.
+		 */
+		if (fixup_exception(regs))
+			return 1;
+	}
+	return 0;
+}
+
+bool __kprobes
+kprobe_breakpoint_handler(struct pt_regs *regs)
+{
+	struct kprobe *p, *cur_kprobe;
+	struct kprobe_ctlblk *kcb;
+	unsigned long addr = instruction_pointer(regs);
+
+	kcb = get_kprobe_ctlblk();
+	cur_kprobe = kprobe_running();
+
+	p = get_kprobe((kprobe_opcode_t *) addr);
+
+	if (p) {
+		if (cur_kprobe) {
+			if (reenter_kprobe(p, regs, kcb))
+				return true;
+		} else {
+			/* Probe hit */
+			set_current_kprobe(p);
+			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+
+			/*
+			 * If we have no pre-handler or it returned 0, we
+			 * continue with normal processing.  If we have a
+			 * pre-handler and it returned non-zero, it will
+			 * modify the execution path and there is no need
+			 * for single stepping. Let's just reset the current
+			 * kprobe and exit.
+			 *
+			 * pre_handler can hit a breakpoint and can step thru
+			 * before return.
+			 */
+			if (!p->pre_handler || !p->pre_handler(p, regs))
+				setup_singlestep(p, regs, kcb, 0);
+			else
+				reset_current_kprobe();
+		}
+		return true;
+	}
+
+	/*
+	 * The breakpoint instruction was removed right
+	 * after we hit it.  Another cpu has removed
+	 * either a probepoint or a debugger breakpoint
+	 * at this address.  In either case, no further
+	 * handling of this interrupt is appropriate.
+	 * Return back to original instruction, and continue.
+	 */
+	return false;
+}
+
+bool __kprobes
+kprobe_single_step_handler(struct pt_regs *regs)
+{
+	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
+
+	if ((kcb->ss_ctx.ss_pending)
+	    && (kcb->ss_ctx.match_addr == instruction_pointer(regs))) {
+		clear_ss_context(kcb);	/* clear pending ss */
+
+		kprobes_restore_local_irqflag(kcb, regs);
+
+		post_kprobe_handler(kcb, regs);
+		return true;
+	}
+	return false;
+}
+
+/*
+ * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
+ * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
+ */
+int __init arch_populate_kprobe_blacklist(void)
+{
+	int ret;
+
+	ret = kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
+					(unsigned long)__irqentry_text_end);
+	return ret;
+}
+
+void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
+{
+	struct kretprobe_instance *ri = NULL;
+	struct hlist_head *head, empty_rp;
+	struct hlist_node *tmp;
+	unsigned long flags, orig_ret_address = 0;
+	unsigned long trampoline_address =
+		(unsigned long)&kretprobe_trampoline;
+	kprobe_opcode_t *correct_ret_addr = NULL;
+
+	INIT_HLIST_HEAD(&empty_rp);
+	kretprobe_hash_lock(current, &head, &flags);
+
+	/*
+	 * It is possible to have multiple instances associated with a given
+	 * task either because multiple functions in the call path have
+	 * return probes installed on them, and/or more than one
+	 * return probe was registered for a target function.
+	 *
+	 * We can handle this because:
+	 *     - instances are always pushed into the head of the list
+	 *     - when multiple return probes are registered for the same
+	 *	 function, the (chronologically) first instance's ret_addr
+	 *	 will be the real return address, and all the rest will
+	 *	 point to kretprobe_trampoline.
+	 */
+	hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+		if (ri->task != current)
+			/* another task is sharing our hash bucket */
+			continue;
+
+		orig_ret_address = (unsigned long)ri->ret_addr;
+
+		if (orig_ret_address != trampoline_address)
+			/*
+			 * This is the real return address. Any other
+			 * instances associated with this task are for
+			 * other calls deeper on the call stack
+			 */
+			break;
+	}
+
+	kretprobe_assert(ri, orig_ret_address, trampoline_address);
+
+	correct_ret_addr = ri->ret_addr;
+	hlist_for_each_entry_safe(ri, tmp, head, hlist) {
+		if (ri->task != current)
+			/* another task is sharing our hash bucket */
+			continue;
+
+		orig_ret_address = (unsigned long)ri->ret_addr;
+		if (ri->rp && ri->rp->handler) {
+			__this_cpu_write(current_kprobe, &ri->rp->kp);
+			get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
+			ri->ret_addr = correct_ret_addr;
+			ri->rp->handler(ri, regs);
+			__this_cpu_write(current_kprobe, NULL);
+		}
+
+		recycle_rp_inst(ri, &empty_rp);
+
+		if (orig_ret_address != trampoline_address)
+			/*
+			 * This is the real return address. Any other
+			 * instances associated with this task are for
+			 * other calls deeper on the call stack
+			 */
+			break;
+	}
+
+	kretprobe_hash_unlock(current, &flags);
+
+	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
+		hlist_del(&ri->hlist);
+		kfree(ri);
+	}
+	return (void *)orig_ret_address;
+}
+
+void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
+				      struct pt_regs *regs)
+{
+	ri->ret_addr = (kprobe_opcode_t *)regs->ra;
+	regs->ra = (unsigned long) &kretprobe_trampoline;
+}
+
+int __kprobes arch_trampoline_kprobe(struct kprobe *p)
+{
+	return 0;
+}
+
+int __init arch_init_kprobes(void)
+{
+	return 0;
+}
diff --git a/arch/riscv/kernel/probes/kprobes_trampoline.S b/arch/riscv/kernel/probes/kprobes_trampoline.S
new file mode 100644
index 00000000..6e85d02
--- /dev/null
+++ b/arch/riscv/kernel/probes/kprobes_trampoline.S
@@ -0,0 +1,93 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Author: Patrick Stählin <me@packi.ch>
+ */
+#include <linux/linkage.h>
+
+#include <asm/asm.h>
+#include <asm/asm-offsets.h>
+
+	.text
+	.altmacro
+
+	.macro save_all_base_regs
+	REG_S x1,  PT_RA(sp)
+	REG_S x3,  PT_GP(sp)
+	REG_S x4,  PT_TP(sp)
+	REG_S x5,  PT_T0(sp)
+	REG_S x6,  PT_T1(sp)
+	REG_S x7,  PT_T2(sp)
+	REG_S x8,  PT_S0(sp)
+	REG_S x9,  PT_S1(sp)
+	REG_S x10, PT_A0(sp)
+	REG_S x11, PT_A1(sp)
+	REG_S x12, PT_A2(sp)
+	REG_S x13, PT_A3(sp)
+	REG_S x14, PT_A4(sp)
+	REG_S x15, PT_A5(sp)
+	REG_S x16, PT_A6(sp)
+	REG_S x17, PT_A7(sp)
+	REG_S x18, PT_S2(sp)
+	REG_S x19, PT_S3(sp)
+	REG_S x20, PT_S4(sp)
+	REG_S x21, PT_S5(sp)
+	REG_S x22, PT_S6(sp)
+	REG_S x23, PT_S7(sp)
+	REG_S x24, PT_S8(sp)
+	REG_S x25, PT_S9(sp)
+	REG_S x26, PT_S10(sp)
+	REG_S x27, PT_S11(sp)
+	REG_S x28, PT_T3(sp)
+	REG_S x29, PT_T4(sp)
+	REG_S x30, PT_T5(sp)
+	REG_S x31, PT_T6(sp)
+	.endm
+
+	.macro restore_all_base_regs
+	REG_L x3,  PT_GP(sp)
+	REG_L x4,  PT_TP(sp)
+	REG_L x5,  PT_T0(sp)
+	REG_L x6,  PT_T1(sp)
+	REG_L x7,  PT_T2(sp)
+	REG_L x8,  PT_S0(sp)
+	REG_L x9,  PT_S1(sp)
+	REG_L x10, PT_A0(sp)
+	REG_L x11, PT_A1(sp)
+	REG_L x12, PT_A2(sp)
+	REG_L x13, PT_A3(sp)
+	REG_L x14, PT_A4(sp)
+	REG_L x15, PT_A5(sp)
+	REG_L x16, PT_A6(sp)
+	REG_L x17, PT_A7(sp)
+	REG_L x18, PT_S2(sp)
+	REG_L x19, PT_S3(sp)
+	REG_L x20, PT_S4(sp)
+	REG_L x21, PT_S5(sp)
+	REG_L x22, PT_S6(sp)
+	REG_L x23, PT_S7(sp)
+	REG_L x24, PT_S8(sp)
+	REG_L x25, PT_S9(sp)
+	REG_L x26, PT_S10(sp)
+	REG_L x27, PT_S11(sp)
+	REG_L x28, PT_T3(sp)
+	REG_L x29, PT_T4(sp)
+	REG_L x30, PT_T5(sp)
+	REG_L x31, PT_T6(sp)
+	.endm
+
+ENTRY(kretprobe_trampoline)
+	addi sp, sp, -(PT_SIZE_ON_STACK)
+	save_all_base_regs
+
+	move a0, sp /* pt_regs */
+
+	call trampoline_probe_handler
+
+	/* use the result as the return-address */
+	move ra, a0
+
+	restore_all_base_regs
+	addi sp, sp, PT_SIZE_ON_STACK
+
+	ret
+ENDPROC(kretprobe_trampoline)
diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
new file mode 100644
index 00000000..2519ce2
--- /dev/null
+++ b/arch/riscv/kernel/probes/simulate-insn.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0+
+
+#include <linux/bitops.h>
+#include <linux/kernel.h>
+#include <linux/kprobes.h>
+
+#include "decode-insn.h"
+#include "simulate-insn.h"
+
+static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index,
+				       unsigned long *ptr)
+{
+	if (index == 0)
+		*ptr = 0;
+	else if (index <= 31)
+		*ptr = *((unsigned long *)regs + index);
+	else
+		return false;
+
+	return true;
+}
+
+static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index,
+				       unsigned long val)
+{
+	if (index == 0)
+		return false;
+	else if (index <= 31)
+		*((unsigned long *)regs + index) = val;
+	else
+		return false;
+
+	return true;
+}
+
+bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs)
+{
+	/*
+	 *     31    30       21    20     19        12 11 7 6      0
+	 * imm [20] | imm[10:1] | imm[11] | imm[19:12] | rd | opcode
+	 *     1         10          1           8       5    JAL/J
+	 */
+	bool ret;
+	u32 imm;
+	u32 index = (opcode >> 7) & 0x1f;
+
+	ret = rv_insn_reg_set_val(regs, index, addr + 4);
+	if (!ret)
+		return ret;
+
+	imm  = ((opcode >> 21) & 0x3ff) << 1;
+	imm |= ((opcode >> 20) & 0x1)   << 11;
+	imm |= ((opcode >> 12) & 0xff)  << 12;
+	imm |= ((opcode >> 31) & 0x1)   << 20;
+
+	instruction_pointer_set(regs, addr + sign_extend32((imm), 20));
+
+	return ret;
+}
+
+bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs)
+{
+	/*
+	 * 31          20 19 15 14 12 11 7 6      0
+	 *  offset[11:0] | rs1 | 010 | rd | opcode
+	 *      12         5      3    5    JALR/JR
+	 */
+	bool ret;
+	unsigned long base_addr;
+	u32 imm = (opcode >> 20) & 0xfff;
+	u32 rd_index = (opcode >> 7) & 0x1f;
+	u32 rs1_index = (opcode >> 15) & 0x1f;
+
+	ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
+	if (!ret)
+		return ret;
+
+	ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
+	if (!ret)
+		return ret;
+
+	instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11)) & ~1);
+
+	return ret;
+}
diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
new file mode 100644
index 00000000..a62d784
--- /dev/null
+++ b/arch/riscv/kernel/probes/simulate-insn.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+
+#ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
+#define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
+
+#define __RISCV_INSN_FUNCS(name, mask, val)				\
+static __always_inline bool riscv_insn_is_##name(probe_opcode_t code)	\
+{									\
+	BUILD_BUG_ON(~(mask) & (val));					\
+	return (code & (mask)) == (val);				\
+}									\
+bool simulate_##name(u32 opcode, unsigned long addr,			\
+		     struct pt_regs *regs);
+
+#define RISCV_INSN_REJECTED(name, code)					\
+	do {								\
+		if (riscv_insn_is_##name(code)) {			\
+			return INSN_REJECTED;				\
+		}							\
+	} while (0)
+
+__RISCV_INSN_FUNCS(system,	0x7f, 0x73)
+__RISCV_INSN_FUNCS(fence,	0x7f, 0x0f)
+
+#define RISCV_INSN_SET_SIMULATE(name, code)				\
+	do {								\
+		if (riscv_insn_is_##name(code)) {			\
+			api->handler = simulate_##name;			\
+			return INSN_GOOD_NO_SLOT;			\
+		}							\
+	} while (0)
+
+__RISCV_INSN_FUNCS(c_j,		0xe003, 0xa001)
+__RISCV_INSN_FUNCS(c_jr,	0xf007, 0x8002)
+__RISCV_INSN_FUNCS(c_jal,	0xe003, 0x2001)
+__RISCV_INSN_FUNCS(c_jalr,	0xf007, 0x9002)
+__RISCV_INSN_FUNCS(c_beqz,	0xe003, 0xc001)
+__RISCV_INSN_FUNCS(c_bnez,	0xe003, 0xe001)
+__RISCV_INSN_FUNCS(c_ebreak,	0xffff, 0x9002)
+
+__RISCV_INSN_FUNCS(auipc,	0x7f, 0x17)
+__RISCV_INSN_FUNCS(branch,	0x7f, 0x63)
+
+__RISCV_INSN_FUNCS(jal,		0x7f, 0x6f)
+__RISCV_INSN_FUNCS(jalr,	0x707f, 0x67)
+
+#endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index 7d95cce..c6846dd 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -12,6 +12,7 @@
 #include <linux/signal.h>
 #include <linux/kdebug.h>
 #include <linux/uaccess.h>
+#include <linux/kprobes.h>
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/irq.h>
@@ -145,6 +146,14 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
 
 asmlinkage __visible void do_trap_break(struct pt_regs *regs)
 {
+#ifdef CONFIG_KPROBES
+	if (kprobe_single_step_handler(regs))
+		return;
+
+	if (kprobe_breakpoint_handler(regs))
+		return;
+#endif
+
 	if (user_mode(regs))
 		force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
 #ifdef CONFIG_KGDB
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index ae7b7fe..da0c08c 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -13,6 +13,7 @@
 #include <linux/perf_event.h>
 #include <linux/signal.h>
 #include <linux/uaccess.h>
+#include <linux/kprobes.h>
 
 #include <asm/pgalloc.h>
 #include <asm/ptrace.h>
@@ -40,6 +41,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	tsk = current;
 	mm = tsk->mm;
 
+	if (kprobe_page_fault(regs, cause))
+		return;
+
 	/*
 	 * Fault-in kernel-space virtual memory on-demand.
 	 * The 'reference' page table is init_mm.pgd.
-- 
2.7.4



* [PATCH v3 5/7] riscv: Add uprobes supported
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
                   ` (3 preceding siblings ...)
  2020-07-13 23:39 ` [PATCH v3 4/7] riscv: Add kprobes supported guoren
@ 2020-07-13 23:39 ` guoren
  2020-07-13 23:39 ` [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported guoren
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

This patch adds support for uprobes on riscv architecture.

Just like kprobes, it supports single-stepping and instruction simulation.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
---
 arch/riscv/Kconfig                   |   3 +
 arch/riscv/include/asm/processor.h   |   1 +
 arch/riscv/include/asm/thread_info.h |   4 +-
 arch/riscv/include/asm/uprobes.h     |  40 ++++++++
 arch/riscv/kernel/probes/Makefile    |   1 +
 arch/riscv/kernel/probes/uprobes.c   | 186 +++++++++++++++++++++++++++++++++++
 arch/riscv/kernel/signal.c           |   3 +
 arch/riscv/kernel/traps.c            |  10 ++
 arch/riscv/mm/fault.c                |   7 ++
 9 files changed, 254 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/uprobes.h
 create mode 100644 arch/riscv/kernel/probes/uprobes.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index b86b2a2..a41b785 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -148,6 +148,9 @@ config ARCH_WANT_GENERAL_HUGETLB
 config ARCH_SUPPORTS_DEBUG_PAGEALLOC
 	def_bool y
 
+config ARCH_SUPPORTS_UPROBES
+	def_bool y
+
 config SYS_SUPPORTS_HUGETLBFS
 	depends on MMU
 	def_bool y
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index bdddcd5..3a24003 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -34,6 +34,7 @@ struct thread_struct {
 	unsigned long sp;	/* Kernel mode stack */
 	unsigned long s[12];	/* s[0]: frame pointer */
 	struct __riscv_d_ext_state fstate;
+	unsigned long bad_cause;
 };
 
 #define INIT_THREAD {					\
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index 1dd12a0..b3a7eb6 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -76,6 +76,7 @@ struct thread_info {
 #define TIF_SYSCALL_TRACEPOINT  6       /* syscall tracepoint instrumentation */
 #define TIF_SYSCALL_AUDIT	7	/* syscall auditing */
 #define TIF_SECCOMP		8	/* syscall secure computing */
+#define TIF_UPROBE		9	/* uprobe breakpoint or singlestep */
 
 #define _TIF_SYSCALL_TRACE	(1 << TIF_SYSCALL_TRACE)
 #define _TIF_NOTIFY_RESUME	(1 << TIF_NOTIFY_RESUME)
@@ -84,9 +85,10 @@ struct thread_info {
 #define _TIF_SYSCALL_TRACEPOINT	(1 << TIF_SYSCALL_TRACEPOINT)
 #define _TIF_SYSCALL_AUDIT	(1 << TIF_SYSCALL_AUDIT)
 #define _TIF_SECCOMP		(1 << TIF_SECCOMP)
+#define _TIF_UPROBE		(1 << TIF_UPROBE)
 
 #define _TIF_WORK_MASK \
-	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED)
+	(_TIF_NOTIFY_RESUME | _TIF_SIGPENDING | _TIF_NEED_RESCHED | _TIF_UPROBE)
 
 #define _TIF_SYSCALL_WORK \
 	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_TRACEPOINT | _TIF_SYSCALL_AUDIT | \
diff --git a/arch/riscv/include/asm/uprobes.h b/arch/riscv/include/asm/uprobes.h
new file mode 100644
index 00000000..f2183e0
--- /dev/null
+++ b/arch/riscv/include/asm/uprobes.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_RISCV_UPROBES_H
+#define _ASM_RISCV_UPROBES_H
+
+#include <asm/probes.h>
+#include <asm/patch.h>
+#include <asm/bug.h>
+
+#define MAX_UINSN_BYTES		8
+
+#ifdef CONFIG_RISCV_ISA_C
+#define UPROBE_SWBP_INSN	__BUG_INSN_16
+#define UPROBE_SWBP_INSN_SIZE	2
+#else
+#define UPROBE_SWBP_INSN	__BUG_INSN_32
+#define UPROBE_SWBP_INSN_SIZE	4
+#endif
+#define UPROBE_XOL_SLOT_BYTES	MAX_UINSN_BYTES
+
+typedef u32 uprobe_opcode_t;
+
+struct arch_uprobe_task {
+	unsigned long   saved_cause;
+};
+
+struct arch_uprobe {
+	union {
+		u8 insn[MAX_UINSN_BYTES];
+		u8 ixol[MAX_UINSN_BYTES];
+	};
+	struct arch_probe_insn api;
+	unsigned long insn_size;
+	bool simulate;
+};
+
+bool uprobe_breakpoint_handler(struct pt_regs *regs);
+bool uprobe_single_step_handler(struct pt_regs *regs);
+
+#endif /* _ASM_RISCV_UPROBES_H */
diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
index 8a39507..cb62991 100644
--- a/arch/riscv/kernel/probes/Makefile
+++ b/arch/riscv/kernel/probes/Makefile
@@ -1,4 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o simulate-insn.o
 obj-$(CONFIG_KPROBES)		+= kprobes_trampoline.o
+obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o simulate-insn.o
 CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/riscv/kernel/probes/uprobes.c b/arch/riscv/kernel/probes/uprobes.c
new file mode 100644
index 00000000..7a057b5
--- /dev/null
+++ b/arch/riscv/kernel/probes/uprobes.c
@@ -0,0 +1,186 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/highmem.h>
+#include <linux/ptrace.h>
+#include <linux/uprobes.h>
+
+#include "decode-insn.h"
+
+#define UPROBE_TRAP_NR	UINT_MAX
+
+bool is_swbp_insn(uprobe_opcode_t *insn)
+{
+#ifdef CONFIG_RISCV_ISA_C
+	return (*insn & 0xffff) == UPROBE_SWBP_INSN;
+#else
+	return *insn == UPROBE_SWBP_INSN;
+#endif
+}
+
+unsigned long uprobe_get_swbp_addr(struct pt_regs *regs)
+{
+	return instruction_pointer(regs);
+}
+
+int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
+			     unsigned long addr)
+{
+	probe_opcode_t opcode;
+
+	opcode = *(probe_opcode_t *)(&auprobe->insn[0]);
+
+	auprobe->insn_size = GET_INSN_LENGTH(opcode);
+
+	switch (riscv_probe_decode_insn(&opcode, &auprobe->api)) {
+	case INSN_REJECTED:
+		return -EINVAL;
+
+	case INSN_GOOD_NO_SLOT:
+		auprobe->simulate = true;
+		break;
+
+	case INSN_GOOD:
+		auprobe->simulate = false;
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	utask->autask.saved_cause = current->thread.bad_cause;
+	current->thread.bad_cause = UPROBE_TRAP_NR;
+
+	instruction_pointer_set(regs, utask->xol_vaddr);
+
+	regs->status &= ~SR_SPIE;
+
+	return 0;
+}
+
+int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	WARN_ON_ONCE(current->thread.bad_cause != UPROBE_TRAP_NR);
+
+	instruction_pointer_set(regs, utask->vaddr + auprobe->insn_size);
+
+	regs->status |= SR_SPIE;
+
+	return 0;
+}
+
+bool arch_uprobe_xol_was_trapped(struct task_struct *t)
+{
+	if (t->thread.bad_cause != UPROBE_TRAP_NR)
+		return true;
+
+	return false;
+}
+
+bool arch_uprobe_skip_sstep(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	probe_opcode_t insn;
+	unsigned long addr;
+
+	if (!auprobe->simulate)
+		return false;
+
+	insn = *(probe_opcode_t *)(&auprobe->insn[0]);
+	addr = instruction_pointer(regs);
+
+	if (auprobe->api.handler)
+		auprobe->api.handler(insn, addr, regs);
+
+	return true;
+}
+
+void arch_uprobe_abort_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
+{
+	struct uprobe_task *utask = current->utask;
+
+	/*
+	 * Task has received a fatal signal, so reset back to the probed
+	 * address.
+	 */
+	instruction_pointer_set(regs, utask->vaddr);
+
+	regs->status &= ~SR_SPIE;
+}
+
+bool arch_uretprobe_is_alive(struct return_instance *ret, enum rp_check ctx,
+		struct pt_regs *regs)
+{
+	if (ctx == RP_CHECK_CHAIN_CALL)
+		return regs->sp <= ret->stack;
+	else
+		return regs->sp < ret->stack;
+}
+
+unsigned long
+arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
+				  struct pt_regs *regs)
+{
+	unsigned long ra;
+
+	ra = regs->ra;
+
+	regs->ra = trampoline_vaddr;
+
+	return ra;
+}
+
+int arch_uprobe_exception_notify(struct notifier_block *self,
+				 unsigned long val, void *data)
+{
+	return NOTIFY_DONE;
+}
+
+bool uprobe_breakpoint_handler(struct pt_regs *regs)
+{
+	if (uprobe_pre_sstep_notifier(regs))
+		return true;
+
+	return false;
+}
+
+bool uprobe_single_step_handler(struct pt_regs *regs)
+{
+	if (uprobe_post_sstep_notifier(regs))
+		return true;
+
+	return false;
+}
+
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+			   void *src, unsigned long len)
+{
+	/* Initialize the slot */
+	void *kaddr = kmap_atomic(page);
+	void *dst = kaddr + (vaddr & ~PAGE_MASK);
+
+	memcpy(dst, src, len);
+
+	/* Append an ebreak after the opcode to simulate a single step */
+	if (vaddr) {
+		dst += GET_INSN_LENGTH(*(probe_opcode_t *)src);
+		*(uprobe_opcode_t *)dst = __BUG_INSN_32;
+	}
+
+	kunmap_atomic(kaddr);
+
+	/*
+	 * We probably need flush_icache_user_page() but it needs vma.
+	 * This should work on most of architectures by default. If
+	 * architecture needs to do something different it can define
+	 * its own version of the function.
+	 */
+	flush_dcache_page(page);
+}
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 17ba190..a96db83b 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -309,6 +309,9 @@ static void do_signal(struct pt_regs *regs)
 asmlinkage __visible void do_notify_resume(struct pt_regs *regs,
 					   unsigned long thread_info_flags)
 {
+	if (thread_info_flags & _TIF_UPROBE)
+		uprobe_notify_resume(regs);
+
 	/* Handle pending signal delivery */
 	if (thread_info_flags & _TIF_SIGPENDING)
 		do_signal(regs);
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
index c6846dd..c36ecac 100644
--- a/arch/riscv/kernel/traps.c
+++ b/arch/riscv/kernel/traps.c
@@ -76,6 +76,8 @@ void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr)
 static void do_trap_error(struct pt_regs *regs, int signo, int code,
 	unsigned long addr, const char *str)
 {
+	current->thread.bad_cause = regs->cause;
+
 	if (user_mode(regs)) {
 		do_trap(regs, signo, code, addr);
 	} else {
@@ -153,6 +155,14 @@ asmlinkage __visible void do_trap_break(struct pt_regs *regs)
 	if (kprobe_breakpoint_handler(regs))
 		return;
 #endif
+#ifdef CONFIG_UPROBES
+	if (uprobe_single_step_handler(regs))
+		return;
+
+	if (uprobe_breakpoint_handler(regs))
+		return;
+#endif
+	current->thread.bad_cause = regs->cause;
 
 	if (user_mode(regs))
 		force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index da0c08c..ac96d93 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -170,11 +170,14 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	mmap_read_unlock(mm);
 	/* User mode accesses just cause a SIGSEGV */
 	if (user_mode(regs)) {
+		tsk->thread.bad_cause = cause;
 		do_trap(regs, SIGSEGV, code, addr);
 		return;
 	}
 
 no_context:
+	tsk->thread.bad_cause = cause;
+
 	/* Are we prepared to handle this kernel fault? */
 	if (fixup_exception(regs))
 		return;
@@ -195,6 +198,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	 * (which will retry the fault, or kill us if we got oom-killed).
 	 */
 out_of_memory:
+	tsk->thread.bad_cause = cause;
+
 	mmap_read_unlock(mm);
 	if (!user_mode(regs))
 		goto no_context;
@@ -202,6 +207,8 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
 	return;
 
 do_sigbus:
+	tsk->thread.bad_cause = cause;
+
 	mmap_read_unlock(mm);
 	/* Kernel mode? Handle exceptions or die */
 	if (!user_mode(regs))
-- 
2.7.4



* [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
                   ` (4 preceding siblings ...)
  2020-07-13 23:39 ` [PATCH v3 5/7] riscv: Add uprobes supported guoren
@ 2020-07-13 23:39 ` guoren
  2020-07-14 11:37   ` Masami Hiramatsu
  2020-07-13 23:39 ` [PATCH v3 7/7] riscv: Add support for function error injection guoren
  2020-07-14 11:23 ` [PATCH v3 0/7] riscv: Add k/uprobe supported Masami Hiramatsu
  7 siblings, 1 reply; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren, Pekka Enberg

From: Guo Ren <guoren@linux.alibaba.com>

This patch adds support for kprobes on ftrace call sites, which avoids
much of the overhead of regular kprobes. Try it with these simple
steps:

1. Get _do_fork ftrace call site.
Dump of assembler code for function _do_fork:
   0xffffffe00020af64 <+0>:     addi    sp,sp,-128
   0xffffffe00020af66 <+2>:     sd      s0,112(sp)
   0xffffffe00020af68 <+4>:     sd      ra,120(sp)
   0xffffffe00020af6a <+6>:     addi    s0,sp,128
   0xffffffe00020af6c <+8>:     sd      s1,104(sp)
   0xffffffe00020af6e <+10>:    sd      s2,96(sp)
   0xffffffe00020af70 <+12>:    sd      s3,88(sp)
   0xffffffe00020af72 <+14>:    sd      s4,80(sp)
   0xffffffe00020af74 <+16>:    sd      s5,72(sp)
   0xffffffe00020af76 <+18>:    sd      s6,64(sp)
   0xffffffe00020af78 <+20>:    sd      s7,56(sp)
   0xffffffe00020af7a <+22>:    mv      s4,a0
   0xffffffe00020af7c <+24>:    mv      a0,ra
   0xffffffe00020af7e <+26>:    nop	<<<<<<<< here!
   0xffffffe00020af82 <+30>:    nop
   0xffffffe00020af86 <+34>:    ld      s3,0(s4)

2. Set _do_fork+26 as the kprobe.
  echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
  echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
  cat /sys/kernel/debug/tracing/trace
  tracer: nop

  entries-in-buffer/entries-written: 3/3   #P:1

                               _-----=> irqs-off
                              / _----=> need-resched
                             | / _---=> hardirq/softirq
                             || / _--=> preempt-depth
                             ||| /     delay
            TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
               | |       |   ||||       |         |
              sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0

  cat /sys/kernel/debug/kprobes/list
ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
                                       ^^^^^^

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Björn Töpel <bjorn.topel@gmail.com>
Cc: Zong Li <zong.li@sifive.com>
Cc: Pekka Enberg <penberg@kernel.org>
---
 arch/riscv/Kconfig                |  1 +
 arch/riscv/kernel/probes/Makefile |  1 +
 arch/riscv/kernel/probes/ftrace.c | 52 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 54 insertions(+)
 create mode 100644 arch/riscv/kernel/probes/ftrace.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index a41b785..0e9f5eb 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -60,6 +60,7 @@ config RISCV
 	select HAVE_FUTEX_CMPXCHG if FUTEX
 	select HAVE_GENERIC_VDSO if MMU && 64BIT
 	select HAVE_KPROBES
+	select HAVE_KPROBES_ON_FTRACE
 	select HAVE_KRETPROBES
 	select HAVE_PCI
 	select HAVE_PERF_EVENTS
diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
index cb62991..7f0840d 100644
--- a/arch/riscv/kernel/probes/Makefile
+++ b/arch/riscv/kernel/probes/Makefile
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o simulate-insn.o
 obj-$(CONFIG_KPROBES)		+= kprobes_trampoline.o
+obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
 obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o simulate-insn.o
 CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
diff --git a/arch/riscv/kernel/probes/ftrace.c b/arch/riscv/kernel/probes/ftrace.c
new file mode 100644
index 00000000..e0fe58a
--- /dev/null
+++ b/arch/riscv/kernel/probes/ftrace.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/kprobes.h>
+
+/* Ftrace callback handler for kprobes -- called with preemption disabled */
+void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
+			   struct ftrace_ops *ops, struct pt_regs *regs)
+{
+	struct kprobe *p;
+	struct kprobe_ctlblk *kcb;
+
+	p = get_kprobe((kprobe_opcode_t *)ip);
+	if (unlikely(!p) || kprobe_disabled(p))
+		return;
+
+	kcb = get_kprobe_ctlblk();
+	if (kprobe_running()) {
+		kprobes_inc_nmissed_count(p);
+	} else {
+		unsigned long orig_ip = instruction_pointer(regs);
+		instruction_pointer_set(regs, ip);
+
+		__this_cpu_write(current_kprobe, p);
+		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
+		if (!p->pre_handler || !p->pre_handler(p, regs)) {
+			/*
+			 * Emulate singlestep (and also recover regs->pc)
+			 * as if there is a nop
+			 */
+			instruction_pointer_set(regs,
+				(unsigned long)p->addr + MCOUNT_INSN_SIZE);
+			if (unlikely(p->post_handler)) {
+				kcb->kprobe_status = KPROBE_HIT_SSDONE;
+				p->post_handler(p, regs, 0);
+			}
+			instruction_pointer_set(regs, orig_ip);
+		}
+
+		/*
+		 * If pre_handler returns !0, it changes regs->pc. We have to
+		 * skip emulating post_handler.
+		 */
+		__this_cpu_write(current_kprobe, NULL);
+	}
+}
+NOKPROBE_SYMBOL(kprobe_ftrace_handler);
+
+int arch_prepare_kprobe_ftrace(struct kprobe *p)
+{
+	p->ainsn.api.insn = NULL;
+	return 0;
+}
-- 
2.7.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v3 7/7] riscv: Add support for function error injection
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
                   ` (5 preceding siblings ...)
  2020-07-13 23:39 ` [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported guoren
@ 2020-07-13 23:39 ` guoren
  2020-07-14 11:43   ` Masami Hiramatsu
  2020-07-14 11:23 ` [PATCH v3 0/7] riscv: Add k/uprobe supported Masami Hiramatsu
  7 siblings, 1 reply; 24+ messages in thread
From: guoren @ 2020-07-13 23:39 UTC (permalink / raw)
  To: palmerdabbelt, paul.walmsley, mhiramat, oleg
  Cc: linux-riscv, linux-kernel, anup, linux-csky, greentime.hu,
	zong.li, guoren, me, bjorn.topel, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

Inspired by the commit 42d038c4fb00 ("arm64: Add support for function
error injection"), this patch supports function error injection for
riscv.

This patch mainly provides two functions: regs_set_return_value(),
which overwrites the return value, and override_function_with_return(),
which makes the probed function return immediately and jump back to
its caller.

Test log:
 cd /sys/kernel/debug/fail_function
 echo sys_clone > inject
 echo 100 > probability
 echo 1 > interval
 ls /
[  313.176875] FAULT_INJECTION: forcing a failure.
[  313.176875] name fail_function, interval 1, probability 100, space 0, times 1
[  313.184357] CPU: 0 PID: 87 Comm: sh Not tainted 5.8.0-rc5-00007-g6a758cc #117
[  313.187616] Call Trace:
[  313.189100] [<ffffffe0002036b6>] walk_stackframe+0x0/0xc2
[  313.191626] [<ffffffe00020395c>] show_stack+0x40/0x4c
[  313.193927] [<ffffffe000556c60>] dump_stack+0x7c/0x96
[  313.194795] [<ffffffe0005522e8>] should_fail+0x140/0x142
[  313.195923] [<ffffffe000299ffc>] fei_kprobe_handler+0x2c/0x5a
[  313.197687] [<ffffffe0009e2ec4>] kprobe_breakpoint_handler+0xb4/0x18a
[  313.200054] [<ffffffe00020357e>] do_trap_break+0x36/0xca
[  313.202147] [<ffffffe000201bca>] ret_from_exception+0x0/0xc
[  313.204556] [<ffffffe000201bbc>] ret_from_syscall+0x0/0x2
-sh: can't fork: Invalid argument

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
---
 arch/riscv/Kconfig              |  1 +
 arch/riscv/include/asm/ptrace.h |  6 ++++++
 arch/riscv/lib/Makefile         |  2 ++
 arch/riscv/lib/error-inject.c   | 10 ++++++++++
 4 files changed, 19 insertions(+)
 create mode 100644 arch/riscv/lib/error-inject.c

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0e9f5eb..ad73174 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -58,6 +58,7 @@ config RISCV
 	select HAVE_DMA_CONTIGUOUS if MMU
 	select HAVE_EBPF_JIT if MMU
 	select HAVE_FUTEX_CMPXCHG if FUTEX
+	select HAVE_FUNCTION_ERROR_INJECTION
 	select HAVE_GENERIC_VDSO if MMU && 64BIT
 	select HAVE_KPROBES
 	select HAVE_KPROBES_ON_FTRACE
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
index 23372bb..cb4abb6 100644
--- a/arch/riscv/include/asm/ptrace.h
+++ b/arch/riscv/include/asm/ptrace.h
@@ -109,6 +109,12 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
 	return regs->a0;
 }
 
+static inline void regs_set_return_value(struct pt_regs *regs,
+					 unsigned long val)
+{
+	regs->a0 = val;
+}
+
 extern int regs_query_register_offset(const char *name);
 extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
 					       unsigned int n);
diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
index 0d0db80..04baa93 100644
--- a/arch/riscv/lib/Makefile
+++ b/arch/riscv/lib/Makefile
@@ -4,3 +4,5 @@ lib-y			+= memcpy.o
 lib-y			+= memset.o
 lib-y			+= uaccess.o
 lib-$(CONFIG_64BIT)	+= tishift.o
+
+obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/riscv/lib/error-inject.c b/arch/riscv/lib/error-inject.c
new file mode 100644
index 00000000..d667ade
--- /dev/null
+++ b/arch/riscv/lib/error-inject.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/error-injection.h>
+#include <linux/kprobes.h>
+
+void override_function_with_return(struct pt_regs *regs)
+{
+	instruction_pointer_set(regs, regs->ra);
+}
+NOKPROBE_SYMBOL(override_function_with_return);
-- 
2.7.4



* Re: [PATCH v3 0/7] riscv: Add k/uprobe supported
  2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
                   ` (6 preceding siblings ...)
  2020-07-13 23:39 ` [PATCH v3 7/7] riscv: Add support for function error injection guoren
@ 2020-07-14 11:23 ` Masami Hiramatsu
  2020-07-15  6:45   ` Guo Ren
  7 siblings, 1 reply; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-14 11:23 UTC (permalink / raw)
  To: guoren
  Cc: palmerdabbelt, paul.walmsley, oleg, linux-riscv, linux-kernel,
	anup, linux-csky, greentime.hu, zong.li, me, bjorn.topel,
	Guo Ren

Hi Guo,

On Mon, 13 Jul 2020 23:39:15 +0000
guoren@kernel.org wrote:

> From: Guo Ren <guoren@linux.alibaba.com>
> 
> The patchset includes kprobe/uprobe support and some related fixups.
> Patrick provides HAVE_REGS_AND_STACK_ACCESS_API support and some
> kprobe's code. The framework of k/uprobe is from csky but also refers
> to other arches'. kprobes on ftrace is also supported in the patchset.
> 
> There is no single step exception in riscv ISA, only single-step
> facility for jtag. See riscv-Privileged spec:
> 
> Interrupt Exception Code-Description
> 1 0 Reserved
> 1 1 Supervisor software interrupt
> 1 2–4 Reserved
> 1 5 Supervisor timer interrupt
> 1 6–8 Reserved
> 1 9 Supervisor external interrupt
> 1 10–15 Reserved
> 1 ≥16 Available for platform use
> 0 0 Instruction address misaligned
> 0 1 Instruction access fault
> 0 2 Illegal instruction
> 0 3 Breakpoint
> 0 4 Load address misaligned
> 0 5 Load access fault
> 0 6 Store/AMO address misaligned
> 0 7 Store/AMO access fault
> 0 8 Environment call from U-mode
> 0 9 Environment call from S-mode
> 0 10–11 Reserved
> 0 12 Instruction page fault
> 0 13 Load page fault
> 0 14 Reserved
> 0 15 Store/AMO page fault
> 0 16–23 Reserved
> 0 24–31 Available for custom use
> 0 32–47 Reserved
> 0 48–63 Available for custom use
> 0 ≥64 Reserved
> 
> No single step!
> 
> Other arches use hardware single-step exception for k/uprobe,  eg:
>  - powerpc: regs->msr |= MSR_SINGLESTEP
>  - arm/arm64: PSTATE.D for enabling software step exceptions
>  - s390: Set PER control regs, turns on single step for the given address
>  - x86: regs->flags |= X86_EFLAGS_TF
>  - csky: of course use hw single step :)
> 
> All the above arches use a hardware single-step exception
> mechanism to execute the instruction that was replaced with a probe
> breakpoint. So utilize ebreak to simulate.
> 
> Some pc related instructions couldn't be executed out of line and some
> system/fence instructions couldn't be a trace site at all. So we give
> out a reject list and simulate list in decode-insn.c.
> 
> You could use uprobe to test simulate code like this:
> 
>  echo 'p:enter_current_state_one /hello:0x6e4 a0=%a0 a1=%a1' >> /sys/kernel/debug/tracing/uprobe_events
>  echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable
>  /hello
>  ^C
>  cat /sys/kernel/debug/tracing/trace
>  tracer: nop
> 
>  entries-in-buffer/entries-written: 1/1   #P:1
> 
>                               _-----=> irqs-off
>                              / _----=> need-resched
>                             | / _---=> hardirq/softirq
>                             || / _--=> preempt-depth
>                             ||| /     delay
>            TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
>               | |       |   ||||       |         |
>           hello-94    [000] d...    55.404242: enter_current_state_one: (0x106e4) a0=0x1 a1=0x3fffa8ada8
> 
> Note that /hello:0x6e4 is the file offset in the ELF, which maps to
> 0x106e4 in memory; hello is your target ELF program.
> 
> Try kprobe like this:
> 
>  echo 'p:myprobe _do_fork dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
>  echo 'r:myretprobe _do_fork $retval' >> /sys/kernel/debug/tracing/kprobe_events
> 
>  echo 1 >/sys/kernel/debug/tracing/events/kprobes/enable
>  cat /sys/kernel/debug/tracing/trace
>  tracer: nop
> 
>  entries-in-buffer/entries-written: 2/2   #P:1
> 
>                               _-----=> irqs-off
>                              / _----=> need-resched
>                             | / _---=> hardirq/softirq
>                             || / _--=> preempt-depth
>                             ||| /     delay
>            TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
>               | |       |   ||||       |         |
>              sh-92    [000] .n..   131.804230: myprobe: (_do_fork+0x0/0x2e6) dfd=0xffffffe03929fdf8 filename=0x0 flags=0x101000 mode=0x1200000ffffffe0
>              sh-92    [000] d...   131.806607: myretprobe: (__do_sys_clone+0x70/0x82 <- _do_fork) arg1=0x5f
>  cat /sys/kernel/debug/tracing/trace

Thank you for your great work!

BTW, could you also run the ftracetest and boot-time smoke test on it?
You can find it under tools/testing/selftests/ftrace, and
CONFIG_KPROBES_SANITY_TEST.
It will ensure that your patch is correctly ported.
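The suggested runs can be sketched as follows; the paths assume the selftests are run from a kernel source checkout on the target machine, and the kprobe-only filter is just an illustrative starting point:

```shell
# From the root of the kernel source tree on the target:
cd tools/testing/selftests/ftrace

# Run only the kprobe-related test cases first (requires root and tracefs)
./ftracetest test.d/kprobe/

# Then the full suite
./ftracetest
```

With CONFIG_KPROBES_SANITY_TEST=y the smoke test runs at boot; check dmesg for the kprobe smoke test result.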

Thank you,

> 
> Changelog v3:
>  - Add support for function error injection
>  - Fixup kprobes handler couldn't change pc
> 
> Changelog v2:
>  - Add Reviewed-by, Tested-by, Acked-by, thx for all of you
>  - Add kprobes on ftrace feature
>  - Use __always_inline, the same as fix_to_virt, to fix the
>    BUILD_BUG_ON
>  - Use "const unsigned int" for the 2nd param to fix the
>    BUILD_BUG_ON
> 
> Guo Ren (6):
>   riscv: Fixup compile error BUILD_BUG_ON failed
>   riscv: Fixup kprobes handler couldn't change pc
>   riscv: Add kprobes supported
>   riscv: Add uprobes supported
>   riscv: Add KPROBES_ON_FTRACE supported
>   riscv: Add support for function error injection
> 
> Patrick Stählin (1):
>   RISC-V: Implement ptrace regs and stack API
> 
>  arch/riscv/Kconfig                            |   8 +
>  arch/riscv/include/asm/kprobes.h              |  40 +++
>  arch/riscv/include/asm/probes.h               |  24 ++
>  arch/riscv/include/asm/processor.h            |   1 +
>  arch/riscv/include/asm/ptrace.h               |  35 ++
>  arch/riscv/include/asm/thread_info.h          |   4 +-
>  arch/riscv/include/asm/uprobes.h              |  40 +++
>  arch/riscv/kernel/Makefile                    |   1 +
>  arch/riscv/kernel/mcount-dyn.S                |   3 +-
>  arch/riscv/kernel/patch.c                     |   8 +-
>  arch/riscv/kernel/probes/Makefile             |   6 +
>  arch/riscv/kernel/probes/decode-insn.c        |  48 +++
>  arch/riscv/kernel/probes/decode-insn.h        |  18 +
>  arch/riscv/kernel/probes/ftrace.c             |  52 +++
>  arch/riscv/kernel/probes/kprobes.c            | 471 ++++++++++++++++++++++++++
>  arch/riscv/kernel/probes/kprobes_trampoline.S |  93 +++++
>  arch/riscv/kernel/probes/simulate-insn.c      |  85 +++++
>  arch/riscv/kernel/probes/simulate-insn.h      |  47 +++
>  arch/riscv/kernel/probes/uprobes.c            | 186 ++++++++++
>  arch/riscv/kernel/ptrace.c                    |  99 ++++++
>  arch/riscv/kernel/signal.c                    |   3 +
>  arch/riscv/kernel/traps.c                     |  19 ++
>  arch/riscv/lib/Makefile                       |   2 +
>  arch/riscv/lib/error-inject.c                 |  10 +
>  arch/riscv/mm/fault.c                         |  11 +
>  25 files changed, 1310 insertions(+), 4 deletions(-)
>  create mode 100644 arch/riscv/include/asm/probes.h
>  create mode 100644 arch/riscv/include/asm/uprobes.h
>  create mode 100644 arch/riscv/kernel/probes/Makefile
>  create mode 100644 arch/riscv/kernel/probes/decode-insn.c
>  create mode 100644 arch/riscv/kernel/probes/decode-insn.h
>  create mode 100644 arch/riscv/kernel/probes/ftrace.c
>  create mode 100644 arch/riscv/kernel/probes/kprobes.c
>  create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
>  create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
>  create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
>  create mode 100644 arch/riscv/kernel/probes/uprobes.c
>  create mode 100644 arch/riscv/lib/error-inject.c
> 
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v3 1/7] RISC-V: Implement ptrace regs and stack API
  2020-07-13 23:39 ` [PATCH v3 1/7] RISC-V: Implement ptrace regs and stack API guoren
@ 2020-07-14 11:25   ` Masami Hiramatsu
  0 siblings, 0 replies; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-14 11:25 UTC (permalink / raw)
  To: guoren
  Cc: palmerdabbelt, paul.walmsley, oleg, linux-riscv, linux-kernel,
	anup, linux-csky, greentime.hu, zong.li, me, bjorn.topel,
	Guo Ren

On Mon, 13 Jul 2020 23:39:16 +0000
guoren@kernel.org wrote:

> From: Patrick Stählin <me@packi.ch>
> 
> Needed for kprobes support. Copied and adapted from arm64 code.
> 
> Guo Ren fixed up the pt_regs type for linux-5.8-rc1.
> 

Looks good to me.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you!

> Signed-off-by: Patrick Stählin <me@packi.ch>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Reviewed-by: Pekka Enberg <penberg@kernel.org>
> Reviewed-by: Zong Li <zong.li@sifive.com>
> ---
>  arch/riscv/Kconfig              |  1 +
>  arch/riscv/include/asm/ptrace.h | 29 ++++++++++++
>  arch/riscv/kernel/ptrace.c      | 99 +++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 129 insertions(+)
> 
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 3230c1d..e70449a 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -78,6 +78,7 @@ config RISCV
>  	select SPARSE_IRQ
>  	select SYSCTL_EXCEPTION_TRACE
>  	select THREAD_INFO_IN_TASK
> +	select HAVE_REGS_AND_STACK_ACCESS_API
>  
>  config ARCH_MMAP_RND_BITS_MIN
>  	default 18 if 64BIT
> diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
> index ee49f80..23372bb 100644
> --- a/arch/riscv/include/asm/ptrace.h
> +++ b/arch/riscv/include/asm/ptrace.h
> @@ -8,6 +8,7 @@
>  
>  #include <uapi/asm/ptrace.h>
>  #include <asm/csr.h>
> +#include <linux/compiler.h>
>  
>  #ifndef __ASSEMBLY__
>  
> @@ -60,6 +61,7 @@ struct pt_regs {
>  
>  #define user_mode(regs) (((regs)->status & SR_PP) == 0)
>  
> +#define MAX_REG_OFFSET offsetof(struct pt_regs, orig_a0)
>  
>  /* Helpers for working with the instruction pointer */
>  static inline unsigned long instruction_pointer(struct pt_regs *regs)
> @@ -85,6 +87,12 @@ static inline void user_stack_pointer_set(struct pt_regs *regs,
>  	regs->sp =  val;
>  }
>  
> +/* Valid only for Kernel mode traps. */
> +static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
> +{
> +	return regs->sp;
> +}
> +
>  /* Helpers for working with the frame pointer */
>  static inline unsigned long frame_pointer(struct pt_regs *regs)
>  {
> @@ -101,6 +109,27 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
>  	return regs->a0;
>  }
>  
> +extern int regs_query_register_offset(const char *name);
> +extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
> +					       unsigned int n);
> +
> +/**
> + * regs_get_register() - get register value from its offset
> + * @regs:	pt_regs from which register value is gotten
> + * @offset:	offset of the register.
> + *
> + * regs_get_register returns the value of the register at @offset from @regs.
> + * The @offset is the offset of the register in struct pt_regs.
> + * If @offset is bigger than MAX_REG_OFFSET, this returns 0.
> + */
> +static inline unsigned long regs_get_register(struct pt_regs *regs,
> +					      unsigned int offset)
> +{
> +	if (unlikely(offset > MAX_REG_OFFSET))
> +		return 0;
> +
> +	return *(unsigned long *)((unsigned long)regs + offset);
> +}
>  #endif /* __ASSEMBLY__ */
>  
>  #endif /* _ASM_RISCV_PTRACE_H */
> diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c
> index 444dc7b..a11c692 100644
> --- a/arch/riscv/kernel/ptrace.c
> +++ b/arch/riscv/kernel/ptrace.c
> @@ -125,6 +125,105 @@ const struct user_regset_view *task_user_regset_view(struct task_struct *task)
>  	return &riscv_user_native_view;
>  }
>  
> +struct pt_regs_offset {
> +	const char *name;
> +	int offset;
> +};
> +
> +#define REG_OFFSET_NAME(r) {.name = #r, .offset = offsetof(struct pt_regs, r)}
> +#define REG_OFFSET_END {.name = NULL, .offset = 0}
> +
> +static const struct pt_regs_offset regoffset_table[] = {
> +	REG_OFFSET_NAME(epc),
> +	REG_OFFSET_NAME(ra),
> +	REG_OFFSET_NAME(sp),
> +	REG_OFFSET_NAME(gp),
> +	REG_OFFSET_NAME(tp),
> +	REG_OFFSET_NAME(t0),
> +	REG_OFFSET_NAME(t1),
> +	REG_OFFSET_NAME(t2),
> +	REG_OFFSET_NAME(s0),
> +	REG_OFFSET_NAME(s1),
> +	REG_OFFSET_NAME(a0),
> +	REG_OFFSET_NAME(a1),
> +	REG_OFFSET_NAME(a2),
> +	REG_OFFSET_NAME(a3),
> +	REG_OFFSET_NAME(a4),
> +	REG_OFFSET_NAME(a5),
> +	REG_OFFSET_NAME(a6),
> +	REG_OFFSET_NAME(a7),
> +	REG_OFFSET_NAME(s2),
> +	REG_OFFSET_NAME(s3),
> +	REG_OFFSET_NAME(s4),
> +	REG_OFFSET_NAME(s5),
> +	REG_OFFSET_NAME(s6),
> +	REG_OFFSET_NAME(s7),
> +	REG_OFFSET_NAME(s8),
> +	REG_OFFSET_NAME(s9),
> +	REG_OFFSET_NAME(s10),
> +	REG_OFFSET_NAME(s11),
> +	REG_OFFSET_NAME(t3),
> +	REG_OFFSET_NAME(t4),
> +	REG_OFFSET_NAME(t5),
> +	REG_OFFSET_NAME(t6),
> +	REG_OFFSET_NAME(status),
> +	REG_OFFSET_NAME(badaddr),
> +	REG_OFFSET_NAME(cause),
> +	REG_OFFSET_NAME(orig_a0),
> +	REG_OFFSET_END,
> +};
> +
> +/**
> + * regs_query_register_offset() - query register offset from its name
> + * @name:	the name of a register
> + *
> + * regs_query_register_offset() returns the offset of a register in struct
> + * pt_regs from its name. If the name is invalid, this returns -EINVAL;
> + */
> +int regs_query_register_offset(const char *name)
> +{
> +	const struct pt_regs_offset *roff;
> +
> +	for (roff = regoffset_table; roff->name != NULL; roff++)
> +		if (!strcmp(roff->name, name))
> +			return roff->offset;
> +	return -EINVAL;
> +}
> +
> +/**
> + * regs_within_kernel_stack() - check the address in the stack
> + * @regs:      pt_regs which contains kernel stack pointer.
> + * @addr:      address which is checked.
> + *
> + * regs_within_kernel_stack() checks @addr is within the kernel stack page(s).
> + * If @addr is within the kernel stack, it returns true. If not, returns false.
> + */
> +static bool regs_within_kernel_stack(struct pt_regs *regs, unsigned long addr)
> +{
> +	return (addr & ~(THREAD_SIZE - 1))  ==
> +		(kernel_stack_pointer(regs) & ~(THREAD_SIZE - 1));
> +}
> +
> +/**
> + * regs_get_kernel_stack_nth() - get Nth entry of the stack
> + * @regs:	pt_regs which contains kernel stack pointer.
> + * @n:		stack entry number.
> + *
> + * regs_get_kernel_stack_nth() returns @n th entry of the kernel stack which
> + * is specified by @regs. If the @n th entry is NOT in the kernel stack,
> + * this returns 0.
> + */
> +unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs, unsigned int n)
> +{
> +	unsigned long *addr = (unsigned long *)kernel_stack_pointer(regs);
> +
> +	addr += n;
> +	if (regs_within_kernel_stack(regs, (unsigned long)addr))
> +		return *addr;
> +	else
> +		return 0;
> +}
> +
>  void ptrace_disable(struct task_struct *child)
>  {
>  	clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc
  2020-07-13 23:39 ` [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc guoren
@ 2020-07-14 11:32   ` Masami Hiramatsu
  2020-08-14 22:36   ` Palmer Dabbelt
  1 sibling, 0 replies; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-14 11:32 UTC (permalink / raw)
  To: guoren
  Cc: palmerdabbelt, paul.walmsley, oleg, linux-riscv, linux-kernel,
	anup, linux-csky, greentime.hu, zong.li, me, bjorn.topel,
	Guo Ren

On Mon, 13 Jul 2020 23:39:18 +0000
guoren@kernel.org wrote:

> From: Guo Ren <guoren@linux.alibaba.com>
> 
> The "Changing Execution Path" section in the Documentation/kprobes.txt
> said:
> 
> Since kprobes can probe into a running kernel code, it can change the
> register set, including instruction pointer.
> 

Looks Good to me:)

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you!

> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> ---
>  arch/riscv/kernel/mcount-dyn.S | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
> index 35a6ed7..4b58b54 100644
> --- a/arch/riscv/kernel/mcount-dyn.S
> +++ b/arch/riscv/kernel/mcount-dyn.S
> @@ -123,6 +123,7 @@ ENDPROC(ftrace_caller)
>  	sd	ra, (PT_SIZE_ON_STACK+8)(sp)
>  	addi	s0, sp, (PT_SIZE_ON_STACK+16)
>  
> +	sd ra,  PT_EPC(sp)
>  	sd x1,  PT_RA(sp)
>  	sd x2,  PT_SP(sp)
>  	sd x3,  PT_GP(sp)
> @@ -157,6 +158,7 @@ ENDPROC(ftrace_caller)
>  	.endm
>  
>  	.macro RESTORE_ALL
> +	ld ra,  PT_EPC(sp)
>  	ld x1,  PT_RA(sp)
>  	ld x2,  PT_SP(sp)
>  	ld x3,  PT_GP(sp)
> @@ -190,7 +192,6 @@ ENDPROC(ftrace_caller)
>  	ld x31, PT_T6(sp)
>  
>  	ld	s0, (PT_SIZE_ON_STACK)(sp)
> -	ld	ra, (PT_SIZE_ON_STACK+8)(sp)
>  	addi	sp, sp, (PT_SIZE_ON_STACK+16)
>  	.endm
>  
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-13 23:39 ` [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported guoren
@ 2020-07-14 11:37   ` Masami Hiramatsu
  2020-07-14 16:24     ` Guo Ren
  0 siblings, 1 reply; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-14 11:37 UTC (permalink / raw)
  To: guoren
  Cc: palmerdabbelt, paul.walmsley, oleg, linux-riscv, linux-kernel,
	anup, linux-csky, greentime.hu, zong.li, me, bjorn.topel,
	Guo Ren, Pekka Enberg

On Mon, 13 Jul 2020 23:39:21 +0000
guoren@kernel.org wrote:

> From: Guo Ren <guoren@linux.alibaba.com>
> 
> This patch adds support for kprobes on ftrace call sites, avoiding
> much of the overhead of regular kprobes. Try it with these simple
> steps:
> 
> 1. Get _do_fork ftrace call site.
> Dump of assembler code for function _do_fork:
>    0xffffffe00020af64 <+0>:     addi    sp,sp,-128
>    0xffffffe00020af66 <+2>:     sd      s0,112(sp)
>    0xffffffe00020af68 <+4>:     sd      ra,120(sp)
>    0xffffffe00020af6a <+6>:     addi    s0,sp,128
>    0xffffffe00020af6c <+8>:     sd      s1,104(sp)
>    0xffffffe00020af6e <+10>:    sd      s2,96(sp)
>    0xffffffe00020af70 <+12>:    sd      s3,88(sp)
>    0xffffffe00020af72 <+14>:    sd      s4,80(sp)
>    0xffffffe00020af74 <+16>:    sd      s5,72(sp)
>    0xffffffe00020af76 <+18>:    sd      s6,64(sp)
>    0xffffffe00020af78 <+20>:    sd      s7,56(sp)
>    0xffffffe00020af7a <+22>:    mv      s4,a0
>    0xffffffe00020af7c <+24>:    mv      a0,ra
>    0xffffffe00020af7e <+26>:    nop	<<<<<<<< here!
>    0xffffffe00020af82 <+30>:    nop
>    0xffffffe00020af86 <+34>:    ld      s3,0(s4)
> 
> 2. Set _do_fork+26 as the kprobe.
>   echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
>   echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
>   cat /sys/kernel/debug/tracing/trace
>   tracer: nop
> 
>   entries-in-buffer/entries-written: 3/3   #P:1
> 
>                                _-----=> irqs-off
>                               / _----=> need-resched
>                              | / _---=> hardirq/softirq
>                              || / _--=> preempt-depth
>                              ||| /     delay
>             TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
>                | |       |   ||||       |         |
>               sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0
> 
>   cat /sys/kernel/debug/kprobes/list
> ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
>                                        ^^^^^^

Hmm, it seems fentry is not supported on RISC-V yet. But anyway,
this will be useful for users (if they can find the offset).

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you,

> 
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Björn Töpel <bjorn.topel@gmail.com>
> Cc: Zong Li <zong.li@sifive.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> ---
>  arch/riscv/Kconfig                |  1 +
>  arch/riscv/kernel/probes/Makefile |  1 +
>  arch/riscv/kernel/probes/ftrace.c | 52 +++++++++++++++++++++++++++++++++++++++
>  3 files changed, 54 insertions(+)
>  create mode 100644 arch/riscv/kernel/probes/ftrace.c
> 
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index a41b785..0e9f5eb 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -60,6 +60,7 @@ config RISCV
>  	select HAVE_FUTEX_CMPXCHG if FUTEX
>  	select HAVE_GENERIC_VDSO if MMU && 64BIT
>  	select HAVE_KPROBES
> +	select HAVE_KPROBES_ON_FTRACE
>  	select HAVE_KRETPROBES
>  	select HAVE_PCI
>  	select HAVE_PERF_EVENTS
> diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
> index cb62991..7f0840d 100644
> --- a/arch/riscv/kernel/probes/Makefile
> +++ b/arch/riscv/kernel/probes/Makefile
> @@ -1,5 +1,6 @@
>  # SPDX-License-Identifier: GPL-2.0
>  obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o simulate-insn.o
>  obj-$(CONFIG_KPROBES)		+= kprobes_trampoline.o
> +obj-$(CONFIG_KPROBES_ON_FTRACE)	+= ftrace.o
>  obj-$(CONFIG_UPROBES)		+= uprobes.o decode-insn.o simulate-insn.o
>  CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
> diff --git a/arch/riscv/kernel/probes/ftrace.c b/arch/riscv/kernel/probes/ftrace.c
> new file mode 100644
> index 00000000..e0fe58a
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/ftrace.c
> @@ -0,0 +1,52 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/kprobes.h>
> +
> +/* Ftrace callback handler for kprobes -- called with preemption disabled */
> +void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
> +			   struct ftrace_ops *ops, struct pt_regs *regs)
> +{
> +	struct kprobe *p;
> +	struct kprobe_ctlblk *kcb;
> +
> +	p = get_kprobe((kprobe_opcode_t *)ip);
> +	if (unlikely(!p) || kprobe_disabled(p))
> +		return;
> +
> +	kcb = get_kprobe_ctlblk();
> +	if (kprobe_running()) {
> +		kprobes_inc_nmissed_count(p);
> +	} else {
> +		unsigned long orig_ip = instruction_pointer(regs);
> +		instruction_pointer_set(regs, ip);
> +
> +		__this_cpu_write(current_kprobe, p);
> +		kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +		if (!p->pre_handler || !p->pre_handler(p, regs)) {
> +			/*
> +			 * Emulate singlestep (and also recover regs->pc)
> +			 * as if there is a nop
> +			 */
> +			instruction_pointer_set(regs,
> +				(unsigned long)p->addr + MCOUNT_INSN_SIZE);
> +			if (unlikely(p->post_handler)) {
> +				kcb->kprobe_status = KPROBE_HIT_SSDONE;
> +				p->post_handler(p, regs, 0);
> +			}
> +			instruction_pointer_set(regs, orig_ip);
> +		}
> +
> +		/*
> +		 * If pre_handler returns !0, it changes regs->pc. We have to
> +		 * skip emulating post_handler.
> +		 */
> +		__this_cpu_write(current_kprobe, NULL);
> +	}
> +}
> +NOKPROBE_SYMBOL(kprobe_ftrace_handler);
> +
> +int arch_prepare_kprobe_ftrace(struct kprobe *p)
> +{
> +	p->ainsn.api.insn = NULL;
> +	return 0;
> +}
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH v3 7/7] riscv: Add support for function error injection
  2020-07-13 23:39 ` [PATCH v3 7/7] riscv: Add support for function error injection guoren
@ 2020-07-14 11:43   ` Masami Hiramatsu
  0 siblings, 0 replies; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-14 11:43 UTC (permalink / raw)
  To: guoren
  Cc: palmerdabbelt, paul.walmsley, oleg, linux-riscv, linux-kernel,
	anup, linux-csky, greentime.hu, zong.li, me, bjorn.topel,
	Guo Ren

Hi Guo,

On Mon, 13 Jul 2020 23:39:22 +0000
guoren@kernel.org wrote:

> From: Guo Ren <guoren@linux.alibaba.com>
> 
> Inspired by the commit 42d038c4fb00 ("arm64: Add support for function
> error injection"), this patch supports function error injection for
> riscv.
> 
> This patch mainly provides two functions: regs_set_return_value(),
> which overwrites the return value, and override_function_with_return(),
> which makes the probed function return immediately and jump back to
> its caller.
> 
> Test log:
>  cd /sys/kernel/debug/fail_function
>  echo sys_clone > inject
>  echo 100 > probability
>  echo 1 > interval
>  ls /
> [  313.176875] FAULT_INJECTION: forcing a failure.
> [  313.176875] name fail_function, interval 1, probability 100, space 0, times 1
> [  313.184357] CPU: 0 PID: 87 Comm: sh Not tainted 5.8.0-rc5-00007-g6a758cc #117
> [  313.187616] Call Trace:
> [  313.189100] [<ffffffe0002036b6>] walk_stackframe+0x0/0xc2
> [  313.191626] [<ffffffe00020395c>] show_stack+0x40/0x4c
> [  313.193927] [<ffffffe000556c60>] dump_stack+0x7c/0x96
> [  313.194795] [<ffffffe0005522e8>] should_fail+0x140/0x142
> [  313.195923] [<ffffffe000299ffc>] fei_kprobe_handler+0x2c/0x5a
> [  313.197687] [<ffffffe0009e2ec4>] kprobe_breakpoint_handler+0xb4/0x18a
> [  313.200054] [<ffffffe00020357e>] do_trap_break+0x36/0xca
> [  313.202147] [<ffffffe000201bca>] ret_from_exception+0x0/0xc
> [  313.204556] [<ffffffe000201bbc>] ret_from_syscall+0x0/0x2
> -sh: can't fork: Invalid argument

OK, this looks good to me.

Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>

Thank you,

> 
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> ---
>  arch/riscv/Kconfig              |  1 +
>  arch/riscv/include/asm/ptrace.h |  6 ++++++
>  arch/riscv/lib/Makefile         |  2 ++
>  arch/riscv/lib/error-inject.c   | 10 ++++++++++
>  4 files changed, 19 insertions(+)
>  create mode 100644 arch/riscv/lib/error-inject.c
> 
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index 0e9f5eb..ad73174 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -58,6 +58,7 @@ config RISCV
>  	select HAVE_DMA_CONTIGUOUS if MMU
>  	select HAVE_EBPF_JIT if MMU
>  	select HAVE_FUTEX_CMPXCHG if FUTEX
> +	select HAVE_FUNCTION_ERROR_INJECTION
>  	select HAVE_GENERIC_VDSO if MMU && 64BIT
>  	select HAVE_KPROBES
>  	select HAVE_KPROBES_ON_FTRACE
> diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
> index 23372bb..cb4abb6 100644
> --- a/arch/riscv/include/asm/ptrace.h
> +++ b/arch/riscv/include/asm/ptrace.h
> @@ -109,6 +109,12 @@ static inline unsigned long regs_return_value(struct pt_regs *regs)
>  	return regs->a0;
>  }
>  
> +static inline void regs_set_return_value(struct pt_regs *regs,
> +					 unsigned long val)
> +{
> +	regs->a0 = val;
> +}
> +
>  extern int regs_query_register_offset(const char *name);
>  extern unsigned long regs_get_kernel_stack_nth(struct pt_regs *regs,
>  					       unsigned int n);
> diff --git a/arch/riscv/lib/Makefile b/arch/riscv/lib/Makefile
> index 0d0db80..04baa93 100644
> --- a/arch/riscv/lib/Makefile
> +++ b/arch/riscv/lib/Makefile
> @@ -4,3 +4,5 @@ lib-y			+= memcpy.o
>  lib-y			+= memset.o
>  lib-y			+= uaccess.o
>  lib-$(CONFIG_64BIT)	+= tishift.o
> +
> +obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
> diff --git a/arch/riscv/lib/error-inject.c b/arch/riscv/lib/error-inject.c
> new file mode 100644
> index 00000000..d667ade
> --- /dev/null
> +++ b/arch/riscv/lib/error-inject.c
> @@ -0,0 +1,10 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <linux/error-injection.h>
> +#include <linux/kprobes.h>
> +
> +void override_function_with_return(struct pt_regs *regs)
> +{
> +	instruction_pointer_set(regs, regs->ra);
> +}
> +NOKPROBE_SYMBOL(override_function_with_return);
> -- 
> 2.7.4
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-14 11:37   ` Masami Hiramatsu
@ 2020-07-14 16:24     ` Guo Ren
  2020-07-21 13:27       ` Masami Hiramatsu
  0 siblings, 1 reply; 24+ messages in thread
From: Guo Ren @ 2020-07-14 16:24 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Palmer Dabbelt, Paul Walmsley, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Björn Töpel, Guo Ren,
	Pekka Enberg

Thx Masami,

On Tue, Jul 14, 2020 at 7:38 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Mon, 13 Jul 2020 23:39:21 +0000
> guoren@kernel.org wrote:
>
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > This patch adds support for kprobes on ftrace call sites to avoids
> > much of the overhead with regular kprobes. Try it with simple
> > steps:
> >
> > 1. Get _do_fork ftrace call site.
> > Dump of assembler code for function _do_fork:
> >    0xffffffe00020af64 <+0>:     addi    sp,sp,-128
> >    0xffffffe00020af66 <+2>:     sd      s0,112(sp)
> >    0xffffffe00020af68 <+4>:     sd      ra,120(sp)
> >    0xffffffe00020af6a <+6>:     addi    s0,sp,128
> >    0xffffffe00020af6c <+8>:     sd      s1,104(sp)
> >    0xffffffe00020af6e <+10>:    sd      s2,96(sp)
> >    0xffffffe00020af70 <+12>:    sd      s3,88(sp)
> >    0xffffffe00020af72 <+14>:    sd      s4,80(sp)
> >    0xffffffe00020af74 <+16>:    sd      s5,72(sp)
> >    0xffffffe00020af76 <+18>:    sd      s6,64(sp)
> >    0xffffffe00020af78 <+20>:    sd      s7,56(sp)
> >    0xffffffe00020af7a <+22>:    mv      s4,a0
> >    0xffffffe00020af7c <+24>:    mv      a0,ra
> >    0xffffffe00020af7e <+26>:    nop   <<<<<<<< here!
> >    0xffffffe00020af82 <+30>:    nop
> >    0xffffffe00020af86 <+34>:    ld      s3,0(s4)
> >
> > 2. Set _do_fork+26 as the kprobe.
> >   echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
> >   echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
> >   cat /sys/kernel/debug/tracing/trace
> >   tracer: nop
> >
> >   entries-in-buffer/entries-written: 3/3   #P:1
> >
> >                                _-----=> irqs-off
> >                               / _----=> need-resched
> >                              | / _---=> hardirq/softirq
> >                              || / _--=> preempt-depth
> >                              ||| /     delay
> >             TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> >                | |       |   ||||       |         |
> >               sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0
> >
> >   cat /sys/kernel/debug/kprobes/list
> > ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
> >                                        ^^^^^^
>
> Hmm, this seems fentry is not supported on RISC-V yet. But anyway,
> it will be useful for users (if they can find the offset).

It seems only x86 & s390 use fentry. Can you elaborate more on fentry's
benefit, and on how the user could set a kprobe on an ftrace call site
without disassembling?

--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 0/7] riscv: Add k/uprobe supported
  2020-07-14 11:23 ` [PATCH v3 0/7] riscv: Add k/uprobe supported Masami Hiramatsu
@ 2020-07-15  6:45   ` Guo Ren
  0 siblings, 0 replies; 24+ messages in thread
From: Guo Ren @ 2020-07-15  6:45 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Palmer Dabbelt, Paul Walmsley, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Björn Töpel, Guo Ren

On Tue, Jul 14, 2020 at 7:23 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> Hi Guo,
>
> On Mon, 13 Jul 2020 23:39:15 +0000
> guoren@kernel.org wrote:
>
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > The patchset includes kprobe/uprobe support and some related fixups.
> > Patrick provides HAVE_REGS_AND_STACK_ACCESS_API support and some
> > kprobe's code. The framework of k/uprobe is from csky but also refers
> > to other arches'. kprobes on ftrace is also supported in the patchset.
> >
> > There is no single step exception in riscv ISA, only single-step
> > facility for jtag. See riscv-Privileged spec:
> >
> > Interrupt Exception Code-Description
> > 1 0 Reserved
> > 1 1 Supervisor software interrupt
> > 1 2–4 Reserved
> > 1 5 Supervisor timer interrupt
> > 1 6–8 Reserved
> > 1 9 Supervisor external interrupt
> > 1 10–15 Reserved
> > 1 ≥16 Available for platform use
> > 0 0 Instruction address misaligned
> > 0 1 Instruction access fault
> > 0 2 Illegal instruction
> > 0 3 Breakpoint
> > 0 4 Load address misaligned
> > 0 5 Load access fault
> > 0 6 Store/AMO address misaligned
> > 0 7 Store/AMO access fault
> > 0 8 Environment call from U-mode
> > 0 9 Environment call from S-mode
> > 0 10–11 Reserved
> > 0 12 Instruction page fault
> > 0 13 Load page fault
> > 0 14 Reserved
> > 0 15 Store/AMO page fault
> > 0 16–23 Reserved
> > 0 24–31 Available for custom use
> > 0 32–47 Reserved
> > 0 48–63 Available for custom use
> > 0 ≥64 Reserved
> >
> > No single step!
> >
> > Other arches use hardware single-step exception for k/uprobe,  eg:
> >  - powerpc: regs->msr |= MSR_SINGLESTEP
> >  - arm/arm64: PSTATE.D for enabling software step exceptions
> >  - s390: Set PER control regs, turns on single step for the given address
> >  - x86: regs->flags |= X86_EFLAGS_TF
> >  - csky: of course use hw single step :)
> >
> > All the above arches use a hardware single-step exception
> > mechanism to execute the instruction that was replaced with a probe
> > breakpoint. So utilize ebreak to simulate.
> >
> > Some pc related instructions couldn't be executed out of line and some
> > system/fence instructions couldn't be a trace site at all. So we give
> > out a reject list and simulate list in decode-insn.c.
> >
> > You could use uprobe to test simulate code like this:
> >
> >  echo 'p:enter_current_state_one /hello:0x6e4 a0=%a0 a1=%a1' >> /sys/kernel/debug/tracing/uprobe_events
> >  echo 1 > /sys/kernel/debug/tracing/events/uprobes/enable
> >  /hello
> >  ^C
> >  cat /sys/kernel/debug/tracing/trace
> >  tracer: nop
> >
> >  entries-in-buffer/entries-written: 1/1   #P:1
> >
> >                               _-----=> irqs-off
> >                              / _----=> need-resched
> >                             | / _---=> hardirq/softirq
> >                             || / _--=> preempt-depth
> >                             ||| /     delay
> >            TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> >               | |       |   ||||       |         |
> >           hello-94    [000] d...    55.404242: enter_current_state_one: (0x106e4) a0=0x1 a1=0x3fffa8ada8
> >
> > Be care /hello:0x6e4 is the file offset in elf and it relate to 0x106e4
> > in memory and hello is your target elf program.
> >
> > Try kprobe like this:
> >
> >  echo 'p:myprobe _do_fork dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
> >  echo 'r:myretprobe _do_fork $retval' >> /sys/kernel/debug/tracing/kprobe_event
> >
> >  echo 1 >/sys/kernel/debug/tracing/events/kprobes/enable
> >  cat /sys/kernel/debug/tracing/trace
> >  tracer: nop
> >
> >  entries-in-buffer/entries-written: 2/2   #P:1
> >
> >                               _-----=> irqs-off
> >                              / _----=> need-resched
> >                             | / _---=> hardirq/softirq
> >                             || / _--=> preempt-depth
> >                             ||| /     delay
> >            TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> >               | |       |   ||||       |         |
> >              sh-92    [000] .n..   131.804230: myprobe: (_do_fork+0x0/0x2e6) dfd=0xffffffe03929fdf8 filename=0x0 flags=0x101000 mode=0x1200000ffffffe0
> >              sh-92    [000] d...   131.806607: myretprobe: (__do_sys_clone+0x70/0x82 <- _do_fork) arg1=0x5f
> >  cat /sys/kernel/debug/tracing/trace
>
> Thank you for your great work!
>
> BTW, could you also run the ftracetest and boot-time smoke test on it?
> You can find it under tools/testing/selftests/ftrace, and
> CONFIG_KPROBES_SANITY_TEST.
> It will ensure that your patch is correctly ported.

CONFIG_KPROBES_SANITY_TEST passed:
[    0.078274] NET: Registered protocol family 16
[    0.162015] Kprobe smoke test: started
[    0.456900] Kprobe smoke test: passed successfully

The tools/testing/selftests/ftrace tests cover a lot of stuff, not only
kprobes; I'll try them later and send any fixups in another patchset.
-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-14 16:24     ` Guo Ren
@ 2020-07-21 13:27       ` Masami Hiramatsu
  2020-07-22  8:39         ` Guo Ren
  2020-07-22 13:31         ` Guo Ren
  0 siblings, 2 replies; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-21 13:27 UTC (permalink / raw)
  To: Guo Ren
  Cc: Palmer Dabbelt, Paul Walmsley, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Björn Töpel, Guo Ren,
	Pekka Enberg

On Wed, 15 Jul 2020 00:24:54 +0800
Guo Ren <guoren@kernel.org> wrote:

> Thx Masami,
> 
> On Tue, Jul 14, 2020 at 7:38 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > On Mon, 13 Jul 2020 23:39:21 +0000
> > guoren@kernel.org wrote:
> >
> > > From: Guo Ren <guoren@linux.alibaba.com>
> > >
> > > This patch adds support for kprobes on ftrace call sites to avoids
> > > much of the overhead with regular kprobes. Try it with simple
> > > steps:
> > >
> > > 1. Get _do_fork ftrace call site.
> > > Dump of assembler code for function _do_fork:
> > >    0xffffffe00020af64 <+0>:     addi    sp,sp,-128
> > >    0xffffffe00020af66 <+2>:     sd      s0,112(sp)
> > >    0xffffffe00020af68 <+4>:     sd      ra,120(sp)
> > >    0xffffffe00020af6a <+6>:     addi    s0,sp,128
> > >    0xffffffe00020af6c <+8>:     sd      s1,104(sp)
> > >    0xffffffe00020af6e <+10>:    sd      s2,96(sp)
> > >    0xffffffe00020af70 <+12>:    sd      s3,88(sp)
> > >    0xffffffe00020af72 <+14>:    sd      s4,80(sp)
> > >    0xffffffe00020af74 <+16>:    sd      s5,72(sp)
> > >    0xffffffe00020af76 <+18>:    sd      s6,64(sp)
> > >    0xffffffe00020af78 <+20>:    sd      s7,56(sp)
> > >    0xffffffe00020af7a <+22>:    mv      s4,a0
> > >    0xffffffe00020af7c <+24>:    mv      a0,ra
> > >    0xffffffe00020af7e <+26>:    nop   <<<<<<<< here!
> > >    0xffffffe00020af82 <+30>:    nop
> > >    0xffffffe00020af86 <+34>:    ld      s3,0(s4)
> > >
> > > 2. Set _do_fork+26 as the kprobe.
> > >   echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
> > >   echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
> > >   cat /sys/kernel/debug/tracing/trace
> > >   tracer: nop
> > >
> > >   entries-in-buffer/entries-written: 3/3   #P:1
> > >
> > >                                _-----=> irqs-off
> > >                               / _----=> need-resched
> > >                              | / _---=> hardirq/softirq
> > >                              || / _--=> preempt-depth
> > >                              ||| /     delay
> > >             TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> > >                | |       |   ||||       |         |
> > >               sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0
> > >
> > >   cat /sys/kernel/debug/kprobes/list
> > > ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
> > >                                        ^^^^^^
> >
> > Hmm, this seems fentry is not supported on RISC-V yet. But anyway,
> > it will be useful for users (if they can find the offset).
> 
> Seems only x86 & s390 use fentry, can you elaborate more about fentry's
> benefit and how the user could set kprobe on ftrace call site without
> disassemble?

On x86, fentry replaces the mcount call with just one call instruction at the
very top of the function, without saving any arguments. This means all probes
which are put on the address of the target symbol automatically use ftrace.
IOW, all probes on _do_fork+0 will use ftrace. We don't need any disassembling.

I think if RISC-V GCC already supports the "-fpatchable-function-entry=2"
option, you can easily enable it the same as arm64. See https://lkml.org/lkml/2019/6/18/648

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-21 13:27       ` Masami Hiramatsu
@ 2020-07-22  8:39         ` Guo Ren
  2020-07-23 15:55           ` Masami Hiramatsu
  2020-07-22 13:31         ` Guo Ren
  1 sibling, 1 reply; 24+ messages in thread
From: Guo Ren @ 2020-07-22  8:39 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Palmer Dabbelt, Paul Walmsley, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Björn Töpel, Guo Ren,
	Pekka Enberg

Hi Masami,

On Tue, Jul 21, 2020 at 9:27 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Wed, 15 Jul 2020 00:24:54 +0800
> Guo Ren <guoren@kernel.org> wrote:
>
> > Thx Masami,
> >
> > On Tue, Jul 14, 2020 at 7:38 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > >
> > > On Mon, 13 Jul 2020 23:39:21 +0000
> > > guoren@kernel.org wrote:
> > >
> > > > From: Guo Ren <guoren@linux.alibaba.com>
> > > >
> > > > This patch adds support for kprobes on ftrace call sites to avoids
> > > > much of the overhead with regular kprobes. Try it with simple
> > > > steps:
> > > >
> > > > 1. Get _do_fork ftrace call site.
> > > > Dump of assembler code for function _do_fork:
> > > >    0xffffffe00020af64 <+0>:     addi    sp,sp,-128
> > > >    0xffffffe00020af66 <+2>:     sd      s0,112(sp)
> > > >    0xffffffe00020af68 <+4>:     sd      ra,120(sp)
> > > >    0xffffffe00020af6a <+6>:     addi    s0,sp,128
> > > >    0xffffffe00020af6c <+8>:     sd      s1,104(sp)
> > > >    0xffffffe00020af6e <+10>:    sd      s2,96(sp)
> > > >    0xffffffe00020af70 <+12>:    sd      s3,88(sp)
> > > >    0xffffffe00020af72 <+14>:    sd      s4,80(sp)
> > > >    0xffffffe00020af74 <+16>:    sd      s5,72(sp)
> > > >    0xffffffe00020af76 <+18>:    sd      s6,64(sp)
> > > >    0xffffffe00020af78 <+20>:    sd      s7,56(sp)
> > > >    0xffffffe00020af7a <+22>:    mv      s4,a0
> > > >    0xffffffe00020af7c <+24>:    mv      a0,ra
> > > >    0xffffffe00020af7e <+26>:    nop   <<<<<<<< here!
> > > >    0xffffffe00020af82 <+30>:    nop
> > > >    0xffffffe00020af86 <+34>:    ld      s3,0(s4)
> > > >
> > > > 2. Set _do_fork+26 as the kprobe.
> > > >   echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
> > > >   echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
> > > >   cat /sys/kernel/debug/tracing/trace
> > > >   tracer: nop
> > > >
> > > >   entries-in-buffer/entries-written: 3/3   #P:1
> > > >
> > > >                                _-----=> irqs-off
> > > >                               / _----=> need-resched
> > > >                              | / _---=> hardirq/softirq
> > > >                              || / _--=> preempt-depth
> > > >                              ||| /     delay
> > > >             TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> > > >                | |       |   ||||       |         |
> > > >               sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0
> > > >
> > > >   cat /sys/kernel/debug/kprobes/list
> > > > ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
> > > >                                        ^^^^^^
> > >
> > > Hmm, this seems fentry is not supported on RISC-V yet. But anyway,
> > > it will be useful for users (if they can find the offset).
> >
> > Seems only x86 & s390 use fentry, can you elaborate more about fentry's
> > benefit and how the user could set kprobe on ftrace call site without
> > disassemble?
>
> On x86, the fentry replaces the mcount with just one call instruction, without
> saving any arguments. This means all probes which are puts on the address of
> target symbol, are automatically using ftrace. IOW, all probes on _do_fork+0
> will use ftrace. We don't need any disassembling.
>
> I think if RISC-V already support "-fpatchable-function-entry=2" option on
> GCC, you can easily enable it as same as arm64. See https://lkml.org/lkml/2019/6/18/648
The link is:
[PATCH 0/7] powerpc/ftrace: Patch out -mprofile-kernel instructions

Is that right?

-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-21 13:27       ` Masami Hiramatsu
  2020-07-22  8:39         ` Guo Ren
@ 2020-07-22 13:31         ` Guo Ren
  2020-07-23 16:11           ` Masami Hiramatsu
  1 sibling, 1 reply; 24+ messages in thread
From: Guo Ren @ 2020-07-22 13:31 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Palmer Dabbelt, Paul Walmsley, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Björn Töpel, Guo Ren,
	Pekka Enberg

Hi Masami,

The current riscv ftrace_caller utilizes fp(s0) - 8 in the stack to get the
ra of the function, e.g.:
foo:
    2bb0:       7119                    addi    sp,sp,-128
    2bb2:       f8a2                    sd      s0,112(sp)
    2bb4:       fc86                    sd      ra,120(sp)
...
    2bc4:       0100                    addi    s0,sp,128
...
0000000000002bca <.LVL828>:
    2bca:       00000097                auipc   ra,0x0
    2bce:       000080e7                jalr    ra # 2bca <.LVL828> //_mcount

So just putting two nops before the function prologue isn't enough, because
riscv, unlike arm64, has no reserved scratch regs (x9-x18) it could use to
pass ra (x30):
    | mov       x9, x30
    | bl        <ftrace-entry>
If the only benefit is letting users put a kprobe on a function symbol
address without disassembling, I'll delay this feature.


I also had a look at HAVE_FENTRY & HAVE_NOP_MCOUNT. It seems they just
avoid using the scripts/recordmcount.pl script and directly generate nops
for _mcount.
That's different from -fpatchable-function-entry=2, which generates nops
before the function prologue on arm64, isn't it?

On Tue, Jul 21, 2020 at 9:27 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Wed, 15 Jul 2020 00:24:54 +0800
> Guo Ren <guoren@kernel.org> wrote:
>
> > Thx Masami,
> >
> > On Tue, Jul 14, 2020 at 7:38 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > >
> > > On Mon, 13 Jul 2020 23:39:21 +0000
> > > guoren@kernel.org wrote:
> > >
> > > > From: Guo Ren <guoren@linux.alibaba.com>
> > > >
> > > > This patch adds support for kprobes on ftrace call sites to avoids
> > > > much of the overhead with regular kprobes. Try it with simple
> > > > steps:
> > > >
> > > > 1. Get _do_fork ftrace call site.
> > > > Dump of assembler code for function _do_fork:
> > > >    0xffffffe00020af64 <+0>:     addi    sp,sp,-128
> > > >    0xffffffe00020af66 <+2>:     sd      s0,112(sp)
> > > >    0xffffffe00020af68 <+4>:     sd      ra,120(sp)
> > > >    0xffffffe00020af6a <+6>:     addi    s0,sp,128
> > > >    0xffffffe00020af6c <+8>:     sd      s1,104(sp)
> > > >    0xffffffe00020af6e <+10>:    sd      s2,96(sp)
> > > >    0xffffffe00020af70 <+12>:    sd      s3,88(sp)
> > > >    0xffffffe00020af72 <+14>:    sd      s4,80(sp)
> > > >    0xffffffe00020af74 <+16>:    sd      s5,72(sp)
> > > >    0xffffffe00020af76 <+18>:    sd      s6,64(sp)
> > > >    0xffffffe00020af78 <+20>:    sd      s7,56(sp)
> > > >    0xffffffe00020af7a <+22>:    mv      s4,a0
> > > >    0xffffffe00020af7c <+24>:    mv      a0,ra
> > > >    0xffffffe00020af7e <+26>:    nop   <<<<<<<< here!
> > > >    0xffffffe00020af82 <+30>:    nop
> > > >    0xffffffe00020af86 <+34>:    ld      s3,0(s4)
> > > >
> > > > 2. Set _do_fork+26 as the kprobe.
> > > >   echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
> > > >   echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
> > > >   cat /sys/kernel/debug/tracing/trace
> > > >   tracer: nop
> > > >
> > > >   entries-in-buffer/entries-written: 3/3   #P:1
> > > >
> > > >                                _-----=> irqs-off
> > > >                               / _----=> need-resched
> > > >                              | / _---=> hardirq/softirq
> > > >                              || / _--=> preempt-depth
> > > >                              ||| /     delay
> > > >             TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> > > >                | |       |   ||||       |         |
> > > >               sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0
> > > >
> > > >   cat /sys/kernel/debug/kprobes/list
> > > > ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
> > > >                                        ^^^^^^
> > >
> > > Hmm, this seems fentry is not supported on RISC-V yet. But anyway,
> > > it will be useful for users (if they can find the offset).
> >
> > Seems only x86 & s390 use fentry, can you elaborate more about fentry's
> > benefit and how the user could set kprobe on ftrace call site without
> > disassemble?
>
> On x86, the fentry replaces the mcount with just one call instruction, without
> saving any arguments. This means all probes which are puts on the address of
> target symbol, are automatically using ftrace. IOW, all probes on _do_fork+0
> will use ftrace. We don't need any disassembling.
>
> I think if RISC-V already support "-fpatchable-function-entry=2" option on
> GCC, you can easily enable it as same as arm64. See https://lkml.org/lkml/2019/6/18/648
>
> Thank you,
>
> --
> Masami Hiramatsu <mhiramat@kernel.org>



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-22  8:39         ` Guo Ren
@ 2020-07-23 15:55           ` Masami Hiramatsu
  0 siblings, 0 replies; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-23 15:55 UTC (permalink / raw)
  To: Guo Ren
  Cc: Palmer Dabbelt, Paul Walmsley, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Björn Töpel, Guo Ren,
	Pekka Enberg

On Wed, 22 Jul 2020 16:39:53 +0800
Guo Ren <guoren@kernel.org> wrote:

> Hi Masami,
> 
> On Tue, Jul 21, 2020 at 9:27 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > On Wed, 15 Jul 2020 00:24:54 +0800
> > Guo Ren <guoren@kernel.org> wrote:
> >
> > > Thx Masami,
> > >
> > > On Tue, Jul 14, 2020 at 7:38 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > >
> > > > On Mon, 13 Jul 2020 23:39:21 +0000
> > > > guoren@kernel.org wrote:
> > > >
> > > > > From: Guo Ren <guoren@linux.alibaba.com>
> > > > >
> > > > > This patch adds support for kprobes on ftrace call sites to avoids
> > > > > much of the overhead with regular kprobes. Try it with simple
> > > > > steps:
> > > > >
> > > > > 1. Get _do_fork ftrace call site.
> > > > > Dump of assembler code for function _do_fork:
> > > > >    0xffffffe00020af64 <+0>:     addi    sp,sp,-128
> > > > >    0xffffffe00020af66 <+2>:     sd      s0,112(sp)
> > > > >    0xffffffe00020af68 <+4>:     sd      ra,120(sp)
> > > > >    0xffffffe00020af6a <+6>:     addi    s0,sp,128
> > > > >    0xffffffe00020af6c <+8>:     sd      s1,104(sp)
> > > > >    0xffffffe00020af6e <+10>:    sd      s2,96(sp)
> > > > >    0xffffffe00020af70 <+12>:    sd      s3,88(sp)
> > > > >    0xffffffe00020af72 <+14>:    sd      s4,80(sp)
> > > > >    0xffffffe00020af74 <+16>:    sd      s5,72(sp)
> > > > >    0xffffffe00020af76 <+18>:    sd      s6,64(sp)
> > > > >    0xffffffe00020af78 <+20>:    sd      s7,56(sp)
> > > > >    0xffffffe00020af7a <+22>:    mv      s4,a0
> > > > >    0xffffffe00020af7c <+24>:    mv      a0,ra
> > > > >    0xffffffe00020af7e <+26>:    nop   <<<<<<<< here!
> > > > >    0xffffffe00020af82 <+30>:    nop
> > > > >    0xffffffe00020af86 <+34>:    ld      s3,0(s4)
> > > > >
> > > > > 2. Set _do_fork+26 as the kprobe.
> > > > >   echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
> > > > >   echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
> > > > >   cat /sys/kernel/debug/tracing/trace
> > > > >   tracer: nop
> > > > >
> > > > >   entries-in-buffer/entries-written: 3/3   #P:1
> > > > >
> > > > >                                _-----=> irqs-off
> > > > >                               / _----=> need-resched
> > > > >                              | / _---=> hardirq/softirq
> > > > >                              || / _--=> preempt-depth
> > > > >                              ||| /     delay
> > > > >             TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> > > > >                | |       |   ||||       |         |
> > > > >               sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0
> > > > >
> > > > >   cat /sys/kernel/debug/kprobes/list
> > > > > ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
> > > > >                                        ^^^^^^
> > > >
> > > > Hmm, this seems fentry is not supported on RISC-V yet. But anyway,
> > > > it will be useful for users (if they can find the offset).
> > >
> > > Seems only x86 & s390 use fentry, can you elaborate more about fentry's
> > > benefit and how the user could set kprobe on ftrace call site without
> > > disassemble?
> >
> > On x86, the fentry replaces the mcount with just one call instruction, without
> > saving any arguments. This means all probes which are puts on the address of
> > target symbol, are automatically using ftrace. IOW, all probes on _do_fork+0
> > will use ftrace. We don't need any disassembling.
> >
> > I think if RISC-V already support "-fpatchable-function-entry=2" option on
> > GCC, you can easily enable it as same as arm64. See https://lkml.org/lkml/2019/6/18/648
> the link is:
> [PATCH 0/7] powerpc/ftrace: Patch out -mprofile-kernel instructions

Oops, sorry.

https://lore.kernel.org/linux-arm-kernel/20191225172625.69811b3e@xhacker.debian/

this should be the right one.

-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported
  2020-07-22 13:31         ` Guo Ren
@ 2020-07-23 16:11           ` Masami Hiramatsu
  0 siblings, 0 replies; 24+ messages in thread
From: Masami Hiramatsu @ 2020-07-23 16:11 UTC (permalink / raw)
  To: Guo Ren
  Cc: Palmer Dabbelt, Paul Walmsley, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Björn Töpel, Guo Ren,
	Pekka Enberg

On Wed, 22 Jul 2020 21:31:20 +0800
Guo Ren <guoren@kernel.org> wrote:

> Hi Masami,
> 
> Current riscv ftrace_caller utilize fp(s0) - 8 in stack to get ra of
> function, eg:
> foo:
>    2bb0:       7119                    addi    sp,sp,-128
>     2bb2:       f8a2                    sd      s0,112(sp)
>     2bb4:       fc86                    sd      ra,120(sp)
> ...
>     2bc4:       0100                    addi    s0,sp,128
> ...
> 0000000000002bca <.LVL828>:
>     2bca:       00000097                auipc   ra,0x0
>     2bce:       000080e7                jalr    ra # 2bca <.LVL828> //_mcount
> 
> So just put two nops before prologue of function isn't enough, because
> riscv don't like arm64 which could use x9-x18 reserved regs to pass
> ra(x30).
>     | mov       x9, x30
>     | bl        <ftrace-entry>
> If the benefit is just making a kprobe on function symbol address to
> prevent disassembling, I'll delay this feature.

I recommend that. This feature has to involve ftrace and gcc, so
it is better to split it from this series.

> 
> 
> I also have a look at HAVE_FENTRY & HAVE_NOP_MCOUNT. Seems it just
> avoid using scripts/recordmcount.pl script and directly generate nops
> for _mcount.

Right.

> It's different from -fpatchable-function-entry=2 which generating nops
> before function prologue in arm64, isn't it?

Yes, fentry is for x86, but -fpatchable-function-entry=2 makes a
placeholder of nops at the entry of functions for direct patching.

Thank you,

> 
> On Tue, Jul 21, 2020 at 9:27 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> >
> > On Wed, 15 Jul 2020 00:24:54 +0800
> > Guo Ren <guoren@kernel.org> wrote:
> >
> > > Thx Masami,
> > >
> > > On Tue, Jul 14, 2020 at 7:38 PM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > >
> > > > On Mon, 13 Jul 2020 23:39:21 +0000
> > > > guoren@kernel.org wrote:
> > > >
> > > > > From: Guo Ren <guoren@linux.alibaba.com>
> > > > >
> > > > > This patch adds support for kprobes on ftrace call sites to avoids
> > > > > much of the overhead with regular kprobes. Try it with simple
> > > > > steps:
> > > > >
> > > > > 1. Get _do_fork ftrace call site.
> > > > > Dump of assembler code for function _do_fork:
> > > > >    0xffffffe00020af64 <+0>:     addi    sp,sp,-128
> > > > >    0xffffffe00020af66 <+2>:     sd      s0,112(sp)
> > > > >    0xffffffe00020af68 <+4>:     sd      ra,120(sp)
> > > > >    0xffffffe00020af6a <+6>:     addi    s0,sp,128
> > > > >    0xffffffe00020af6c <+8>:     sd      s1,104(sp)
> > > > >    0xffffffe00020af6e <+10>:    sd      s2,96(sp)
> > > > >    0xffffffe00020af70 <+12>:    sd      s3,88(sp)
> > > > >    0xffffffe00020af72 <+14>:    sd      s4,80(sp)
> > > > >    0xffffffe00020af74 <+16>:    sd      s5,72(sp)
> > > > >    0xffffffe00020af76 <+18>:    sd      s6,64(sp)
> > > > >    0xffffffe00020af78 <+20>:    sd      s7,56(sp)
> > > > >    0xffffffe00020af7a <+22>:    mv      s4,a0
> > > > >    0xffffffe00020af7c <+24>:    mv      a0,ra
> > > > >    0xffffffe00020af7e <+26>:    nop   <<<<<<<< here!
> > > > >    0xffffffe00020af82 <+30>:    nop
> > > > >    0xffffffe00020af86 <+34>:    ld      s3,0(s4)
> > > > >
> > > > > 2. Set _do_fork+26 as the kprobe.
> > > > >   echo 'p:myprobe _do_fork+26 dfd=%a0 filename=%a1 flags=%a2 mode=+4($stack)' > /sys/kernel/debug/tracing/kprobe_events
> > > > >   echo 1 > /sys/kernel/debug/tracing/events/kprobes/enable
> > > > >   cat /sys/kernel/debug/tracing/trace
> > > > >   tracer: nop
> > > > >
> > > > >   entries-in-buffer/entries-written: 3/3   #P:1
> > > > >
> > > > >                                _-----=> irqs-off
> > > > >                               / _----=> need-resched
> > > > >                              | / _---=> hardirq/softirq
> > > > >                              || / _--=> preempt-depth
> > > > >                              ||| /     delay
> > > > >             TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
> > > > >                | |       |   ||||       |         |
> > > > >               sh-87    [000] ....   551.557031: myprobe: (_do_fork+0x1a/0x2e6) dfd=0xffffffe00020af7e filename=0xffffffe00020b34e flags=0xffffffe00101e7c0 mode=0x20af86ffffffe0
> > > > >
> > > > >   cat /sys/kernel/debug/kprobes/list
> > > > > ffffffe00020af7e  k  _do_fork+0x1a    [FTRACE]
> > > > >                                        ^^^^^^
> > > >
> > > > Hmm, it seems fentry is not supported on RISC-V yet. But anyway,
> > > > it will be useful for users (if they can find the offset).
> > >
> > > It seems only x86 & s390 use fentry. Can you elaborate more on
> > > fentry's benefits and on how the user could set a kprobe on an ftrace
> > > call site without disassembling?
> >
> > On x86, fentry replaces the mcount call with a single call instruction,
> > without saving any arguments. This means all probes put on the address
> > of a target symbol automatically use ftrace. IOW, all probes on
> > _do_fork+0 will use ftrace. We don't need any disassembling.
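For example, reusing the tracefs commands shown earlier in the thread,
symbol+0 probing would look like this (a sketch assuming fentry-style
support; "myfork" is just an illustrative probe name):

```
echo 'p:myfork _do_fork' > /sys/kernel/debug/tracing/kprobe_events
cat /sys/kernel/debug/kprobes/list
```

With fentry the bare symbol address is already the ftrace call site, so
the probe should appear with the [FTRACE] tag without any +offset.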
> >
> > I think if RISC-V GCC already supports the "-fpatchable-function-entry=2"
> > option, you can easily enable this the same way arm64 did. See https://lkml.org/lkml/2019/6/18/648
> >
> > Thank you,
> >
> > --
> > Masami Hiramatsu <mhiramat@kernel.org>
> 
> 
> 
> -- 
> Best Regards
>  Guo Ren
> 
> ML: https://lore.kernel.org/linux-csky/


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc
  2020-07-13 23:39 ` [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc guoren
  2020-07-14 11:32   ` Masami Hiramatsu
@ 2020-08-14 22:36   ` Palmer Dabbelt
  2020-08-17 12:47     ` Guo Ren
  1 sibling, 1 reply; 24+ messages in thread
From: Palmer Dabbelt @ 2020-08-14 22:36 UTC (permalink / raw)
  To: guoren
  Cc: Paul Walmsley, mhiramat, oleg, linux-riscv, linux-kernel, anup,
	linux-csky, greentime.hu, zong.li, guoren, me, Bjorn Topel,
	guoren

On Mon, 13 Jul 2020 16:39:18 PDT (-0700), guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> The "Changing Execution Path" section of Documentation/kprobes.txt
> says:
>
> Since kprobes can probe into a running kernel code, it can change the
> register set, including instruction pointer.
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Cc: Masami Hiramatsu <mhiramat@kernel.org>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> ---
>  arch/riscv/kernel/mcount-dyn.S | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
> index 35a6ed7..4b58b54 100644
> --- a/arch/riscv/kernel/mcount-dyn.S
> +++ b/arch/riscv/kernel/mcount-dyn.S
> @@ -123,6 +123,7 @@ ENDPROC(ftrace_caller)
>  	sd	ra, (PT_SIZE_ON_STACK+8)(sp)
>  	addi	s0, sp, (PT_SIZE_ON_STACK+16)
>
> +	sd ra,  PT_EPC(sp)
>  	sd x1,  PT_RA(sp)
>  	sd x2,  PT_SP(sp)
>  	sd x3,  PT_GP(sp)

So that's definitely not going to be EPC any more.  I'm not sure that field is
sanely named, though, as it's really just the PC when it comes to other ptrace
stuff.

> @@ -157,6 +158,7 @@ ENDPROC(ftrace_caller)
>  	.endm
>
>  	.macro RESTORE_ALL
> +	ld ra,  PT_EPC(sp)
>  	ld x1,  PT_RA(sp)

x1 is ra, so loading it twice doesn't seem reasonable.

>  	ld x2,  PT_SP(sp)
>  	ld x3,  PT_GP(sp)
> @@ -190,7 +192,6 @@ ENDPROC(ftrace_caller)
>  	ld x31, PT_T6(sp)
>
>  	ld	s0, (PT_SIZE_ON_STACK)(sp)
> -	ld	ra, (PT_SIZE_ON_STACK+8)(sp)
>  	addi	sp, sp, (PT_SIZE_ON_STACK+16)
>  	.endm

If you're dropping the load you should drop the store above as well.  In
general this seems kind of mixed up, both before and after this patch.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 4/7] riscv: Add kprobes supported
  2020-07-13 23:39 ` [PATCH v3 4/7] riscv: Add kprobes supported guoren
@ 2020-08-14 22:36   ` Palmer Dabbelt
  2020-08-17 13:48     ` Guo Ren
  0 siblings, 1 reply; 24+ messages in thread
From: Palmer Dabbelt @ 2020-08-14 22:36 UTC (permalink / raw)
  To: guoren
  Cc: Paul Walmsley, mhiramat, oleg, linux-riscv, linux-kernel, anup,
	linux-csky, greentime.hu, zong.li, guoren, me, Bjorn Topel,
	guoren

On Mon, 13 Jul 2020 16:39:19 PDT (-0700), guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> This patch enables "kprobe & kretprobe" to work with the ftrace
> interface. It utilizes a software breakpoint as the single-step
> mechanism.
>
> Some instructions which can't be single-stepped must be simulated
> in a kernel execution slot, such as: branch, jal, auipc,
> la ...
>
> Some instructions must be rejected for probing, and we use a
> blacklist to filter them, such as: ecall, ebreak, ...
>
> We use ebreak & c.ebreak to replace the original instruction, and the
> kprobe handler prepares an executable memory slot for out-of-line
> execution with a copy of the instruction being probed. In the
> execution slot we add an ebreak behind the original instruction to
> simulate a single-step mechanism.
>
> The patch is based on packi's work [1] and csky's work [2].
>  - The kprobes_trampoline.S is all from packi's patch
>  - The single-step mechanism is newly designed for riscv, which has
>    no hw single-step trap
>  - The simulation code is from csky
>  - Frankly, all the code refers to other archs' implementations
>
>  [1] https://lore.kernel.org/linux-riscv/20181113195804.22825-1-me@packi.ch/
>  [2] https://lore.kernel.org/linux-csky/20200403044150.20562-9-guoren@kernel.org/
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Co-developed-by: Patrick Stählin <me@packi.ch>
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
> Tested-by: Zong Li <zong.li@sifive.com>
> Reviewed-by: Pekka Enberg <penberg@kernel.org>
> Cc: Patrick Stählin <me@packi.ch>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> Cc: Björn Töpel <bjorn.topel@gmail.com>
> ---
>  arch/riscv/Kconfig                            |   2 +
>  arch/riscv/include/asm/kprobes.h              |  40 +++
>  arch/riscv/include/asm/probes.h               |  24 ++
>  arch/riscv/kernel/Makefile                    |   1 +
>  arch/riscv/kernel/probes/Makefile             |   4 +
>  arch/riscv/kernel/probes/decode-insn.c        |  48 +++
>  arch/riscv/kernel/probes/decode-insn.h        |  18 +
>  arch/riscv/kernel/probes/kprobes.c            | 471 ++++++++++++++++++++++++++
>  arch/riscv/kernel/probes/kprobes_trampoline.S |  93 +++++
>  arch/riscv/kernel/probes/simulate-insn.c      |  85 +++++
>  arch/riscv/kernel/probes/simulate-insn.h      |  47 +++
>  arch/riscv/kernel/traps.c                     |   9 +
>  arch/riscv/mm/fault.c                         |   4 +
>  13 files changed, 846 insertions(+)
>  create mode 100644 arch/riscv/include/asm/probes.h
>  create mode 100644 arch/riscv/kernel/probes/Makefile
>  create mode 100644 arch/riscv/kernel/probes/decode-insn.c
>  create mode 100644 arch/riscv/kernel/probes/decode-insn.h
>  create mode 100644 arch/riscv/kernel/probes/kprobes.c
>  create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
>  create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
>  create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index e70449a..b86b2a2 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -59,6 +59,8 @@ config RISCV
>  	select HAVE_EBPF_JIT if MMU
>  	select HAVE_FUTEX_CMPXCHG if FUTEX
>  	select HAVE_GENERIC_VDSO if MMU && 64BIT
> +	select HAVE_KPROBES
> +	select HAVE_KRETPROBES
>  	select HAVE_PCI
>  	select HAVE_PERF_EVENTS
>  	select HAVE_PERF_REGS
> diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> index 56a98ea3..4647d38 100644
> --- a/arch/riscv/include/asm/kprobes.h
> +++ b/arch/riscv/include/asm/kprobes.h
> @@ -11,4 +11,44 @@
>
>  #include <asm-generic/kprobes.h>
>
> +#ifdef CONFIG_KPROBES
> +#include <linux/types.h>
> +#include <linux/ptrace.h>
> +#include <linux/percpu.h>
> +
> +#define __ARCH_WANT_KPROBES_INSN_SLOT
> +#define MAX_INSN_SIZE			2
> +
> +#define flush_insn_slot(p)		do { } while (0)
> +#define kretprobe_blacklist_size	0
> +
> +#include <asm/probes.h>
> +
> +struct prev_kprobe {
> +	struct kprobe *kp;
> +	unsigned int status;
> +};
> +
> +/* Single step context for kprobe */
> +struct kprobe_step_ctx {
> +	unsigned long ss_pending;
> +	unsigned long match_addr;
> +};
> +
> +/* per-cpu kprobe control block */
> +struct kprobe_ctlblk {
> +	unsigned int kprobe_status;
> +	unsigned long saved_status;
> +	struct prev_kprobe prev_kprobe;
> +	struct kprobe_step_ctx ss_ctx;
> +};
> +
> +void arch_remove_kprobe(struct kprobe *p);
> +int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
> +bool kprobe_breakpoint_handler(struct pt_regs *regs);
> +bool kprobe_single_step_handler(struct pt_regs *regs);
> +void kretprobe_trampoline(void);
> +void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
> +
> +#endif /* CONFIG_KPROBES */
>  #endif /* _ASM_RISCV_KPROBES_H */
> diff --git a/arch/riscv/include/asm/probes.h b/arch/riscv/include/asm/probes.h
> new file mode 100644
> index 00000000..a787e6d
> --- /dev/null
> +++ b/arch/riscv/include/asm/probes.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +#ifndef _ASM_RISCV_PROBES_H
> +#define _ASM_RISCV_PROBES_H
> +
> +typedef u32 probe_opcode_t;
> +typedef bool (probes_handler_t) (u32 opcode, unsigned long addr, struct pt_regs *);
> +
> +/* architecture specific copy of original instruction */
> +struct arch_probe_insn {
> +	probe_opcode_t *insn;
> +	probes_handler_t *handler;
> +	/* restore address after simulation */
> +	unsigned long restore;
> +};
> +
> +#ifdef CONFIG_KPROBES
> +typedef u32 kprobe_opcode_t;
> +struct arch_specific_insn {
> +	struct arch_probe_insn api;
> +};
> +#endif
> +
> +#endif /* _ASM_RISCV_PROBES_H */
> diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> index b355cf4..c3fff3e 100644
> --- a/arch/riscv/kernel/Makefile
> +++ b/arch/riscv/kernel/Makefile
> @@ -29,6 +29,7 @@ obj-y	+= riscv_ksyms.o
>  obj-y	+= stacktrace.o
>  obj-y	+= cacheinfo.o
>  obj-y	+= patch.o
> +obj-y	+= probes/
>  obj-$(CONFIG_MMU) += vdso.o vdso/
>
>  obj-$(CONFIG_RISCV_M_MODE)	+= clint.o traps_misaligned.o
> diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
> new file mode 100644
> index 00000000..8a39507
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/Makefile
> @@ -0,0 +1,4 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-$(CONFIG_KPROBES)		+= kprobes.o decode-insn.o simulate-insn.o
> +obj-$(CONFIG_KPROBES)		+= kprobes_trampoline.o
> +CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
> diff --git a/arch/riscv/kernel/probes/decode-insn.c b/arch/riscv/kernel/probes/decode-insn.c
> new file mode 100644
> index 00000000..0876c30
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/decode-insn.c
> @@ -0,0 +1,48 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +
> +#include <linux/kernel.h>
> +#include <linux/kprobes.h>
> +#include <linux/module.h>
> +#include <linux/kallsyms.h>
> +#include <asm/sections.h>
> +
> +#include "decode-insn.h"
> +#include "simulate-insn.h"
> +
> +/* Return:
> + *   INSN_REJECTED     If instruction is one not allowed to kprobe,
> + *   INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
> + */
> +enum probe_insn __kprobes
> +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api)
> +{
> +	probe_opcode_t insn = le32_to_cpu(*addr);
> +
> +	/*
> +	 * Reject instructions list:
> +	 */
> +	RISCV_INSN_REJECTED(system,		insn);
> +	RISCV_INSN_REJECTED(fence,		insn);
> +
> +	/*
> +	 * Simulate instructions list:
> +	 * TODO: the REJECTED ones below need to be implemented
> +	 */
> +#ifdef CONFIG_RISCV_ISA_C
> +	RISCV_INSN_REJECTED(c_j,		insn);
> +	RISCV_INSN_REJECTED(c_jr,		insn);
> +	RISCV_INSN_REJECTED(c_jal,		insn);
> +	RISCV_INSN_REJECTED(c_jalr,		insn);
> +	RISCV_INSN_REJECTED(c_beqz,		insn);
> +	RISCV_INSN_REJECTED(c_bnez,		insn);
> +	RISCV_INSN_REJECTED(c_ebreak,		insn);
> +#endif
> +
> +	RISCV_INSN_REJECTED(auipc,		insn);
> +	RISCV_INSN_REJECTED(branch,		insn);
> +
> +	RISCV_INSN_SET_SIMULATE(jal,		insn);
> +	RISCV_INSN_SET_SIMULATE(jalr,		insn);
> +
> +	return INSN_GOOD;
> +}

IIRC I mentioned this in the original version, but I'd anticipate that we need
to at least prevent LR/SC sequences from being probed.
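For reference, a sketch of why that matters (illustrative RISC-V
assembly; the ISA only guarantees forward progress for short,
uninterrupted LR/SC sequences):

```
retry:
    lr.w   t0, (a0)        # acquire a reservation on [a0]
    # an ebreak patched in here traps on every pass and very likely
    # invalidates the reservation...
    addi   t0, t0, 1
    sc.w   t1, t0, (a0)    # ...so the store-conditional keeps failing
    bnez   t1, retry       # and the loop can live-lock
```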

> diff --git a/arch/riscv/kernel/probes/decode-insn.h b/arch/riscv/kernel/probes/decode-insn.h
> new file mode 100644
> index 00000000..42269a7
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/decode-insn.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +
> +#ifndef _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> +#define _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> +
> +#include <asm/sections.h>
> +#include <asm/kprobes.h>
> +
> +enum probe_insn {
> +	INSN_REJECTED,
> +	INSN_GOOD_NO_SLOT,
> +	INSN_GOOD,
> +};
> +
> +enum probe_insn __kprobes
> +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *asi);
> +
> +#endif /* _RISCV_KERNEL_KPROBES_DECODE_INSN_H */
> diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> new file mode 100644
> index 00000000..31b6196
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/kprobes.c
> @@ -0,0 +1,471 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +
> +#include <linux/kprobes.h>
> +#include <linux/extable.h>
> +#include <linux/slab.h>
> +#include <linux/stop_machine.h>
> +#include <asm/ptrace.h>
> +#include <linux/uaccess.h>
> +#include <asm/sections.h>
> +#include <asm/cacheflush.h>
> +#include <asm/bug.h>
> +#include <asm/patch.h>
> +
> +#include "decode-insn.h"
> +
> +DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> +DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> +
> +static void __kprobes
> +post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
> +
> +static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
> +{
> +	unsigned long offset = GET_INSN_LENGTH(p->opcode);
> +
> +	p->ainsn.api.restore = (unsigned long)p->addr + offset;
> +
> +	patch_text(p->ainsn.api.insn, p->opcode);
> +	patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
> +		   __BUG_INSN_32);
> +}
> +
> +static void __kprobes arch_prepare_simulate(struct kprobe *p)
> +{
> +	p->ainsn.api.restore = 0;
> +}
> +
> +static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
> +{
> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +
> +	if (p->ainsn.api.handler)
> +		p->ainsn.api.handler((u32)p->opcode,
> +					(unsigned long)p->addr, regs);
> +
> +	post_kprobe_handler(kcb, regs);
> +}
> +
> +int __kprobes arch_prepare_kprobe(struct kprobe *p)
> +{
> +	unsigned long probe_addr = (unsigned long)p->addr;
> +
> +	if (probe_addr & 0x1) {
> +		pr_warn("Address not aligned.\n");
> +
> +		return -EINVAL;
> +	}
> +
> +	/* copy instruction */
> +	p->opcode = le32_to_cpu(*p->addr);
> +
> +	/* decode instruction */
> +	switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
> +	case INSN_REJECTED:	/* insn not supported */
> +		return -EINVAL;
> +
> +	case INSN_GOOD_NO_SLOT:	/* insn need simulation */
> +		p->ainsn.api.insn = NULL;
> +		break;
> +
> +	case INSN_GOOD:	/* instruction uses slot */
> +		p->ainsn.api.insn = get_insn_slot();
> +		if (!p->ainsn.api.insn)
> +			return -ENOMEM;
> +		break;
> +	}
> +
> +	/* prepare the instruction */
> +	if (p->ainsn.api.insn)
> +		arch_prepare_ss_slot(p);
> +	else
> +		arch_prepare_simulate(p);
> +
> +	return 0;
> +}
> +
> +/* install breakpoint in text */
> +void __kprobes arch_arm_kprobe(struct kprobe *p)
> +{
> +	if ((p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
> +		patch_text(p->addr, __BUG_INSN_32);
> +	else
> +		patch_text(p->addr, __BUG_INSN_16);
> +}
> +
> +/* remove breakpoint from text */
> +void __kprobes arch_disarm_kprobe(struct kprobe *p)
> +{
> +	patch_text(p->addr, p->opcode);
> +}
> +
> +void __kprobes arch_remove_kprobe(struct kprobe *p)
> +{
> +}
> +
> +static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
> +{
> +	kcb->prev_kprobe.kp = kprobe_running();
> +	kcb->prev_kprobe.status = kcb->kprobe_status;
> +}
> +
> +static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
> +{
> +	__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
> +	kcb->kprobe_status = kcb->prev_kprobe.status;
> +}
> +
> +static void __kprobes set_current_kprobe(struct kprobe *p)
> +{
> +	__this_cpu_write(current_kprobe, p);
> +}
> +
> +/*
> + * Interrupts need to be disabled before single-step mode is set, and not
> + * reenabled until after single-step mode ends.
> + * Without disabling interrupt on local CPU, there is a chance of
> + * interrupt occurrence in the period of exception return and  start of
> + * out-of-line single-step, that result in wrongly single stepping
> + * into the interrupt handler.
> + */
> +static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
> +						struct pt_regs *regs)
> +{
> +	kcb->saved_status = regs->status;
> +	regs->status &= ~SR_SPIE;
> +}
> +
> +static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
> +						struct pt_regs *regs)
> +{
> +	regs->status = kcb->saved_status;
> +}
> +
> +static void __kprobes
> +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr, struct kprobe *p)
> +{
> +	unsigned long offset = GET_INSN_LENGTH(p->opcode);
> +
> +	kcb->ss_ctx.ss_pending = true;
> +	kcb->ss_ctx.match_addr = addr + offset;
> +}
> +
> +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
> +{
> +	kcb->ss_ctx.ss_pending = false;
> +	kcb->ss_ctx.match_addr = 0;
> +}
> +
> +static void __kprobes setup_singlestep(struct kprobe *p,
> +				       struct pt_regs *regs,
> +				       struct kprobe_ctlblk *kcb, int reenter)
> +{
> +	unsigned long slot;
> +
> +	if (reenter) {
> +		save_previous_kprobe(kcb);
> +		set_current_kprobe(p);
> +		kcb->kprobe_status = KPROBE_REENTER;
> +	} else {
> +		kcb->kprobe_status = KPROBE_HIT_SS;
> +	}
> +
> +	if (p->ainsn.api.insn) {
> +		/* prepare for single stepping */
> +		slot = (unsigned long)p->ainsn.api.insn;
> +
> +		set_ss_context(kcb, slot, p);	/* mark pending ss */
> +
> +		/* IRQs and single stepping do not mix well. */
> +		kprobes_save_local_irqflag(kcb, regs);
> +
> +		instruction_pointer_set(regs, slot);
> +	} else {
> +		/* insn simulation */
> +		arch_simulate_insn(p, regs);
> +	}
> +}
> +
> +static int __kprobes reenter_kprobe(struct kprobe *p,
> +				    struct pt_regs *regs,
> +				    struct kprobe_ctlblk *kcb)
> +{
> +	switch (kcb->kprobe_status) {
> +	case KPROBE_HIT_SSDONE:
> +	case KPROBE_HIT_ACTIVE:
> +		kprobes_inc_nmissed_count(p);
> +		setup_singlestep(p, regs, kcb, 1);
> +		break;
> +	case KPROBE_HIT_SS:
> +	case KPROBE_REENTER:
> +		pr_warn("Unrecoverable kprobe detected.\n");
> +		dump_kprobe(p);
> +		BUG();
> +		break;
> +	default:
> +		WARN_ON(1);
> +		return 0;
> +	}
> +
> +	return 1;
> +}
> +
> +static void __kprobes
> +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
> +{
> +	struct kprobe *cur = kprobe_running();
> +
> +	if (!cur)
> +		return;
> +
> +	/* return addr restore if non-branching insn */
> +	if (cur->ainsn.api.restore != 0)
> +		regs->epc = cur->ainsn.api.restore;
> +
> +	/* restore back original saved kprobe variables and continue */
> +	if (kcb->kprobe_status == KPROBE_REENTER) {
> +		restore_previous_kprobe(kcb);
> +		return;
> +	}
> +
> +	/* call post handler */
> +	kcb->kprobe_status = KPROBE_HIT_SSDONE;
> +	if (cur->post_handler)	{
> +		/* post_handler can hit breakpoint and single step
> +		 * again, so we enable D-flag for recursive exception.
> +		 */
> +		cur->post_handler(cur, regs, 0);
> +	}
> +
> +	reset_current_kprobe();
> +}
> +
> +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
> +{
> +	struct kprobe *cur = kprobe_running();
> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +
> +	switch (kcb->kprobe_status) {
> +	case KPROBE_HIT_SS:
> +	case KPROBE_REENTER:
> +		/*
> +		 * We are here because the instruction being single
> +		 * stepped caused a page fault. We reset the current
> +		 * kprobe and the ip points back to the probe address
> +		 * and allow the page fault handler to continue as a
> +		 * normal page fault.
> +		 */
> +		regs->epc = (unsigned long) cur->addr;
> +		if (!instruction_pointer(regs))
> +			BUG();
> +
> +		if (kcb->kprobe_status == KPROBE_REENTER)
> +			restore_previous_kprobe(kcb);
> +		else
> +			reset_current_kprobe();
> +
> +		break;
> +	case KPROBE_HIT_ACTIVE:
> +	case KPROBE_HIT_SSDONE:
> +		/*
> +		 * We increment the nmissed count for accounting,
> +		 * we can also use npre/npostfault count for accounting
> +		 * these specific fault cases.
> +		 */
> +		kprobes_inc_nmissed_count(cur);
> +
> +		/*
> +		 * We come here because instructions in the pre/post
> +		 * handler caused the page_fault, this could happen
> +		 * if handler tries to access user space by
> +		 * copy_from_user(), get_user() etc. Let the
> +		 * user-specified handler try to fix it first.
> +		 */
> +		if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
> +			return 1;
> +
> +		/*
> +		 * In case the user-specified fault handler returned
> +		 * zero, try to fix up.
> +		 */
> +		if (fixup_exception(regs))
> +			return 1;
> +	}
> +	return 0;
> +}
> +
> +bool __kprobes
> +kprobe_breakpoint_handler(struct pt_regs *regs)
> +{
> +	struct kprobe *p, *cur_kprobe;
> +	struct kprobe_ctlblk *kcb;
> +	unsigned long addr = instruction_pointer(regs);
> +
> +	kcb = get_kprobe_ctlblk();
> +	cur_kprobe = kprobe_running();
> +
> +	p = get_kprobe((kprobe_opcode_t *) addr);
> +
> +	if (p) {
> +		if (cur_kprobe) {
> +			if (reenter_kprobe(p, regs, kcb))
> +				return true;
> +		} else {
> +			/* Probe hit */
> +			set_current_kprobe(p);
> +			kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> +
> +			/*
> +			 * If we have no pre-handler or it returned 0, we
> +			 * continue with normal processing.  If we have a
> +			 * pre-handler and it returned non-zero, it will
> +			 * modify the execution path and no need to single
> +			 * stepping. Let's just reset current kprobe and exit.
> +			 *
> +			 * pre_handler can hit a breakpoint and can step thru
> +			 * before return.
> +			 */
> +			if (!p->pre_handler || !p->pre_handler(p, regs))
> +				setup_singlestep(p, regs, kcb, 0);
> +			else
> +				reset_current_kprobe();
> +		}
> +		return true;
> +	}
> +
> +	/*
> +	 * The breakpoint instruction was removed right
> +	 * after we hit it.  Another cpu has removed
> +	 * either a probepoint or a debugger breakpoint
> +	 * at this address.  In either case, no further
> +	 * handling of this interrupt is appropriate.
> +	 * Return back to original instruction, and continue.
> +	 */
> +	return false;
> +}
> +
> +bool __kprobes
> +kprobe_single_step_handler(struct pt_regs *regs)
> +{
> +	struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> +
> +	if ((kcb->ss_ctx.ss_pending)
> +	    && (kcb->ss_ctx.match_addr == instruction_pointer(regs))) {
> +		clear_ss_context(kcb);	/* clear pending ss */
> +
> +		kprobes_restore_local_irqflag(kcb, regs);
> +
> +		post_kprobe_handler(kcb, regs);
> +		return true;
> +	}
> +	return false;
> +}
> +
> +/*
> + * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> + * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> + */
> +int __init arch_populate_kprobe_blacklist(void)
> +{
> +	int ret;
> +
> +	ret = kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
> +					(unsigned long)__irqentry_text_end);
> +	return ret;
> +}
> +
> +void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
> +{
> +	struct kretprobe_instance *ri = NULL;
> +	struct hlist_head *head, empty_rp;
> +	struct hlist_node *tmp;
> +	unsigned long flags, orig_ret_address = 0;
> +	unsigned long trampoline_address =
> +		(unsigned long)&kretprobe_trampoline;
> +	kprobe_opcode_t *correct_ret_addr = NULL;
> +
> +	INIT_HLIST_HEAD(&empty_rp);
> +	kretprobe_hash_lock(current, &head, &flags);
> +
> +	/*
> +	 * It is possible to have multiple instances associated with a given
> +	 * task either because multiple functions in the call path have
> +	 * return probes installed on them, and/or more than one
> +	 * return probe was registered for a target function.
> +	 *
> +	 * We can handle this because:
> +	 *     - instances are always pushed into the head of the list
> +	 *     - when multiple return probes are registered for the same
> +	 *	 function, the (chronologically) first instance's ret_addr
> +	 *	 will be the real return address, and all the rest will
> +	 *	 point to kretprobe_trampoline.
> +	 */
> +	hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> +		if (ri->task != current)
> +			/* another task is sharing our hash bucket */
> +			continue;
> +
> +		orig_ret_address = (unsigned long)ri->ret_addr;
> +
> +		if (orig_ret_address != trampoline_address)
> +			/*
> +			 * This is the real return address. Any other
> +			 * instances associated with this task are for
> +			 * other calls deeper on the call stack
> +			 */
> +			break;
> +	}
> +
> +	kretprobe_assert(ri, orig_ret_address, trampoline_address);
> +
> +	correct_ret_addr = ri->ret_addr;
> +	hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> +		if (ri->task != current)
> +			/* another task is sharing our hash bucket */
> +			continue;
> +
> +		orig_ret_address = (unsigned long)ri->ret_addr;
> +		if (ri->rp && ri->rp->handler) {
> +			__this_cpu_write(current_kprobe, &ri->rp->kp);
> +			get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
> +			ri->ret_addr = correct_ret_addr;
> +			ri->rp->handler(ri, regs);
> +			__this_cpu_write(current_kprobe, NULL);
> +		}
> +
> +		recycle_rp_inst(ri, &empty_rp);
> +
> +		if (orig_ret_address != trampoline_address)
> +			/*
> +			 * This is the real return address. Any other
> +			 * instances associated with this task are for
> +			 * other calls deeper on the call stack
> +			 */
> +			break;
> +	}
> +
> +	kretprobe_hash_unlock(current, &flags);
> +
> +	hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
> +		hlist_del(&ri->hlist);
> +		kfree(ri);
> +	}
> +	return (void *)orig_ret_address;
> +}
> +
> +void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
> +				      struct pt_regs *regs)
> +{
> +	ri->ret_addr = (kprobe_opcode_t *)regs->ra;
> +	regs->ra = (unsigned long) &kretprobe_trampoline;
> +}
> +
> +int __kprobes arch_trampoline_kprobe(struct kprobe *p)
> +{
> +	return 0;
> +}
> +
> +int __init arch_init_kprobes(void)
> +{
> +	return 0;
> +}
> diff --git a/arch/riscv/kernel/probes/kprobes_trampoline.S b/arch/riscv/kernel/probes/kprobes_trampoline.S
> new file mode 100644
> index 00000000..6e85d02
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/kprobes_trampoline.S
> @@ -0,0 +1,93 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +/*
> + * Author: Patrick Stählin <me@packi.ch>
> + */
> +#include <linux/linkage.h>
> +
> +#include <asm/asm.h>
> +#include <asm/asm-offsets.h>
> +
> +	.text
> +	.altmacro
> +
> +	.macro save_all_base_regs
> +	REG_S x1,  PT_RA(sp)
> +	REG_S x3,  PT_GP(sp)
> +	REG_S x4,  PT_TP(sp)
> +	REG_S x5,  PT_T0(sp)
> +	REG_S x6,  PT_T1(sp)
> +	REG_S x7,  PT_T2(sp)
> +	REG_S x8,  PT_S0(sp)
> +	REG_S x9,  PT_S1(sp)
> +	REG_S x10, PT_A0(sp)
> +	REG_S x11, PT_A1(sp)
> +	REG_S x12, PT_A2(sp)
> +	REG_S x13, PT_A3(sp)
> +	REG_S x14, PT_A4(sp)
> +	REG_S x15, PT_A5(sp)
> +	REG_S x16, PT_A6(sp)
> +	REG_S x17, PT_A7(sp)
> +	REG_S x18, PT_S2(sp)
> +	REG_S x19, PT_S3(sp)
> +	REG_S x20, PT_S4(sp)
> +	REG_S x21, PT_S5(sp)
> +	REG_S x22, PT_S6(sp)
> +	REG_S x23, PT_S7(sp)
> +	REG_S x24, PT_S8(sp)
> +	REG_S x25, PT_S9(sp)
> +	REG_S x26, PT_S10(sp)
> +	REG_S x27, PT_S11(sp)
> +	REG_S x28, PT_T3(sp)
> +	REG_S x29, PT_T4(sp)
> +	REG_S x30, PT_T5(sp)
> +	REG_S x31, PT_T6(sp)
> +	.endm
> +
> +	.macro restore_all_base_regs
> +	REG_L x3,  PT_GP(sp)
> +	REG_L x4,  PT_TP(sp)
> +	REG_L x5,  PT_T0(sp)
> +	REG_L x6,  PT_T1(sp)
> +	REG_L x7,  PT_T2(sp)
> +	REG_L x8,  PT_S0(sp)
> +	REG_L x9,  PT_S1(sp)
> +	REG_L x10, PT_A0(sp)
> +	REG_L x11, PT_A1(sp)
> +	REG_L x12, PT_A2(sp)
> +	REG_L x13, PT_A3(sp)
> +	REG_L x14, PT_A4(sp)
> +	REG_L x15, PT_A5(sp)
> +	REG_L x16, PT_A6(sp)
> +	REG_L x17, PT_A7(sp)
> +	REG_L x18, PT_S2(sp)
> +	REG_L x19, PT_S3(sp)
> +	REG_L x20, PT_S4(sp)
> +	REG_L x21, PT_S5(sp)
> +	REG_L x22, PT_S6(sp)
> +	REG_L x23, PT_S7(sp)
> +	REG_L x24, PT_S8(sp)
> +	REG_L x25, PT_S9(sp)
> +	REG_L x26, PT_S10(sp)
> +	REG_L x27, PT_S11(sp)
> +	REG_L x28, PT_T3(sp)
> +	REG_L x29, PT_T4(sp)
> +	REG_L x30, PT_T5(sp)
> +	REG_L x31, PT_T6(sp)
> +	.endm
> +
> +ENTRY(kretprobe_trampoline)
> +	addi sp, sp, -(PT_SIZE_ON_STACK)
> +	save_all_base_regs
> +
> +	move a0, sp /* pt_regs */
> +
> +	call trampoline_probe_handler
> +
> +	/* use the result as the return-address */
> +	move ra, a0
> +
> +	restore_all_base_regs
> +	addi sp, sp, PT_SIZE_ON_STACK
> +
> +	ret
> +ENDPROC(kretprobe_trampoline)
> diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
> new file mode 100644
> index 00000000..2519ce2
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/simulate-insn.c
> @@ -0,0 +1,85 @@
> +// SPDX-License-Identifier: GPL-2.0+
> +
> +#include <linux/bitops.h>
> +#include <linux/kernel.h>
> +#include <linux/kprobes.h>
> +
> +#include "decode-insn.h"
> +#include "simulate-insn.h"
> +
> +static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index,
> +				       unsigned long *ptr)
> +{
> +	if (index == 0)
> +		*ptr = 0;
> +	else if (index <= 31)
> +		*ptr = *((unsigned long *)regs + index);
> +	else
> +		return false;
> +
> +	return true;
> +}
> +
> +static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index,
> +				       unsigned long val)
> +{
> +	if (index == 0)
> +		return false;
> +	else if (index <= 31)
> +		*((unsigned long *)regs + index) = val;
> +	else
> +		return false;
> +
> +	return true;
> +}
> +
> +bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs)
> +{
> +	/*
> +	 *     31    30       21    20     19        12 11 7 6      0
> +	 * imm [20] | imm[10:1] | imm[11] | imm[19:12] | rd | opcode
> +	 *     1         10          1           8       5    JAL/J
> +	 */
> +	bool ret;
> +	u32 imm;
> +	u32 index = (opcode >> 7) & 0x1f;
> +
> +	ret = rv_insn_reg_set_val(regs, index, addr + 4);
> +	if (!ret)
> +		return ret;
> +
> +	imm  = ((opcode >> 21) & 0x3ff) << 1;
> +	imm |= ((opcode >> 20) & 0x1)   << 11;
> +	imm |= ((opcode >> 12) & 0xff)  << 12;
> +	imm |= ((opcode >> 31) & 0x1)   << 20;
> +
> +	instruction_pointer_set(regs, addr + sign_extend32((imm), 20));
> +
> +	return ret;
> +}
> +
> +bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs)
> +{
> +	/*
> +	 * 31          20 19 15 14 12 11 7 6      0
> +	 *  offset[11:0] | rs1 | 010 | rd | opcode
> +	 *      12         5      3    5    JALR/JR
> +	 */
> +	bool ret;
> +	unsigned long base_addr;
> +	u32 imm = (opcode >> 20) & 0xfff;
> +	u32 rd_index = (opcode >> 7) & 0x1f;
> +	u32 rs1_index = (opcode >> 15) & 0x1f;
> +
> +	ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
> +	if (!ret)
> +		return ret;
> +
> +	ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
> +	if (!ret)
> +		return ret;
> +
> +	instruction_pointer_set(regs, (base_addr + sign_extend32((imm), 11))&~1);
> +
> +	return ret;
> +}
> diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
> new file mode 100644
> index 00000000..a62d784
> --- /dev/null
> +++ b/arch/riscv/kernel/probes/simulate-insn.h
> @@ -0,0 +1,47 @@
> +/* SPDX-License-Identifier: GPL-2.0+ */
> +
> +#ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> +#define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> +
> +#define __RISCV_INSN_FUNCS(name, mask, val)				\
> +static __always_inline bool riscv_insn_is_##name(probe_opcode_t code)	\
> +{									\
> +	BUILD_BUG_ON(~(mask) & (val));					\
> +	return (code & (mask)) == (val);				\
> +}									\
> +bool simulate_##name(u32 opcode, unsigned long addr,			\
> +		     struct pt_regs *regs);
> +
> +#define RISCV_INSN_REJECTED(name, code)					\
> +	do {								\
> +		if (riscv_insn_is_##name(code)) {			\
> +			return INSN_REJECTED;				\
> +		}							\
> +	} while (0)
> +
> +__RISCV_INSN_FUNCS(system,	0x7f, 0x73)
> +__RISCV_INSN_FUNCS(fence,	0x7f, 0x0f)
> +
> +#define RISCV_INSN_SET_SIMULATE(name, code)				\
> +	do {								\
> +		if (riscv_insn_is_##name(code)) {			\
> +			api->handler = simulate_##name;			\
> +			return INSN_GOOD_NO_SLOT;			\
> +		}							\
> +	} while (0)
> +
> +__RISCV_INSN_FUNCS(c_j,		0xe003, 0xa001)
> +__RISCV_INSN_FUNCS(c_jr,	0xf007, 0x8002)
> +__RISCV_INSN_FUNCS(c_jal,	0xe003, 0x2001)
> +__RISCV_INSN_FUNCS(c_jalr,	0xf007, 0x9002)
> +__RISCV_INSN_FUNCS(c_beqz,	0xe003, 0xc001)
> +__RISCV_INSN_FUNCS(c_bnez,	0xe003, 0xe001)
> +__RISCV_INSN_FUNCS(c_ebreak,	0xffff, 0x9002)
> +
> +__RISCV_INSN_FUNCS(auipc,	0x7f, 0x17)
> +__RISCV_INSN_FUNCS(branch,	0x7f, 0x63)
> +
> +__RISCV_INSN_FUNCS(jal,		0x7f, 0x6f)
> +__RISCV_INSN_FUNCS(jalr,	0x707f, 0x67)
> +
> +#endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
> diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> index 7d95cce..c6846dd 100644
> --- a/arch/riscv/kernel/traps.c
> +++ b/arch/riscv/kernel/traps.c
> @@ -12,6 +12,7 @@
>  #include <linux/signal.h>
>  #include <linux/kdebug.h>
>  #include <linux/uaccess.h>
> +#include <linux/kprobes.h>
>  #include <linux/mm.h>
>  #include <linux/module.h>
>  #include <linux/irq.h>
> @@ -145,6 +146,14 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
>
>  asmlinkage __visible void do_trap_break(struct pt_regs *regs)
>  {
> +#ifdef CONFIG_KPROBES
> +	if (kprobe_single_step_handler(regs))
> +		return;
> +
> +	if (kprobe_breakpoint_handler(regs))
> +		return;
> +#endif
> +
>  	if (user_mode(regs))
>  		force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
>  #ifdef CONFIG_KGDB
> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index ae7b7fe..da0c08c 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -13,6 +13,7 @@
>  #include <linux/perf_event.h>
>  #include <linux/signal.h>
>  #include <linux/uaccess.h>
> +#include <linux/kprobes.h>
>
>  #include <asm/pgalloc.h>
>  #include <asm/ptrace.h>
> @@ -40,6 +41,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
>  	tsk = current;
>  	mm = tsk->mm;
>
> +	if (kprobe_page_fault(regs, cause))
> +		return;
> +
>  	/*
>  	 * Fault-in kernel-space virtual memory on-demand.
>  	 * The 'reference' page table is init_mm.pgd.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc
  2020-08-14 22:36   ` Palmer Dabbelt
@ 2020-08-17 12:47     ` Guo Ren
  0 siblings, 0 replies; 24+ messages in thread
From: Guo Ren @ 2020-08-17 12:47 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Paul Walmsley, Masami Hiramatsu, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Bjorn Topel, Guo Ren

On Sat, Aug 15, 2020 at 6:36 AM Palmer Dabbelt <palmerdabbelt@google.com> wrote:
>
> On Mon, 13 Jul 2020 16:39:18 PDT (-0700), guoren@kernel.org wrote:
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > The "Changing Execution Path" section in the Documentation/kprobes.txt
> > said:
> >
> > Since kprobes can probe into a running kernel code, it can change the
> > register set, including instruction pointer.
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Cc: Masami Hiramatsu <mhiramat@kernel.org>
> > Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> > ---
> >  arch/riscv/kernel/mcount-dyn.S | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
> > index 35a6ed7..4b58b54 100644
> > --- a/arch/riscv/kernel/mcount-dyn.S
> > +++ b/arch/riscv/kernel/mcount-dyn.S
> > @@ -123,6 +123,7 @@ ENDPROC(ftrace_caller)
> >       sd      ra, (PT_SIZE_ON_STACK+8)(sp)
> >       addi    s0, sp, (PT_SIZE_ON_STACK+16)
> >
> > +     sd ra,  PT_EPC(sp)
> >       sd x1,  PT_RA(sp)
> >       sd x2,  PT_SP(sp)
> >       sd x3,  PT_GP(sp)
>
> So that's definitely not going to be EPC any more.  I'm not sure that field is
> sanely named, though, as it's really just the PC when it comes to other ptrace
> stuff.
>
> > @@ -157,6 +158,7 @@ ENDPROC(ftrace_caller)
> >       .endm
> >
> >       .macro RESTORE_ALL
> > +     ld ra,  PT_EPC(sp)
> >       ld x1,  PT_RA(sp)
>
> x1 is ra, so loading it twice doesn't seem reasonable.
>
> >       ld x2,  PT_SP(sp)
> >       ld x3,  PT_GP(sp)
> > @@ -190,7 +192,6 @@ ENDPROC(ftrace_caller)
> >       ld x31, PT_T6(sp)
> >
> >       ld      s0, (PT_SIZE_ON_STACK)(sp)
> > -     ld      ra, (PT_SIZE_ON_STACK+8)(sp)
> >       addi    sp, sp, (PT_SIZE_ON_STACK+16)
> >       .endm
>
> If you're dropping the load you should drop the store above as well.  In
> general this seems kind of mixed up, both before and after this patch.

This patch is wrong; it should be:

diff --git a/arch/riscv/kernel/mcount-dyn.S b/arch/riscv/kernel/mcount-dyn.S
index 35a6ed7..d82b8f0 100644
--- a/arch/riscv/kernel/mcount-dyn.S
+++ b/arch/riscv/kernel/mcount-dyn.S
@@ -120,10 +120,10 @@ ENDPROC(ftrace_caller)
        .macro SAVE_ALL
        addi    sp, sp, -(PT_SIZE_ON_STACK+16)
        sd      s0, (PT_SIZE_ON_STACK)(sp)
-       sd      ra, (PT_SIZE_ON_STACK+8)(sp)
        addi    s0, sp, (PT_SIZE_ON_STACK+16)

-       sd x1,  PT_RA(sp)
+       sd ra,  PT_EPC(sp)
+       sd ra,  PT_RA(sp)
        sd x2,  PT_SP(sp)
        sd x3,  PT_GP(sp)
        sd x4,  PT_TP(sp)
@@ -157,7 +157,7 @@ ENDPROC(ftrace_caller)
        .endm

        .macro RESTORE_ALL
-       ld x1,  PT_RA(sp)
+       ld ra,  PT_EPC(sp)
        ld x2,  PT_SP(sp)
        ld x3,  PT_GP(sp)
        ld x4,  PT_TP(sp)
@@ -190,7 +190,6 @@ ENDPROC(ftrace_caller)
        ld x31, PT_T6(sp)

        ld      s0, (PT_SIZE_ON_STACK)(sp)
-       ld      ra, (PT_SIZE_ON_STACK+8)(sp)
        addi    sp, sp, (PT_SIZE_ON_STACK+16)
        .endm

I'm currently developing livepatch support, and these features are all
tightly intertwined (kprobes, livepatch, ftrace, optprobes, stack
walking, -fpatchable-function-entry instead of -pg). I'll test this
patch in the next version of the patchset.

Thx for the review.

-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


* Re: [PATCH v3 4/7] riscv: Add kprobes supported
  2020-08-14 22:36   ` Palmer Dabbelt
@ 2020-08-17 13:48     ` Guo Ren
  0 siblings, 0 replies; 24+ messages in thread
From: Guo Ren @ 2020-08-17 13:48 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: Paul Walmsley, Masami Hiramatsu, Oleg Nesterov, linux-riscv,
	Linux Kernel Mailing List, Anup Patel, linux-csky, Greentime Hu,
	Zong Li, Patrick Stählin, Bjorn Topel, Guo Ren

On Sat, Aug 15, 2020 at 6:36 AM Palmer Dabbelt <palmerdabbelt@google.com> wrote:
>
> On Mon, 13 Jul 2020 16:39:19 PDT (-0700), guoren@kernel.org wrote:
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > This patch enables "kprobe & kretprobe" to work with ftrace
> > interface. It utilized software breakpoint as single-step
> > mechanism.
> >
> > Some instructions which can't be single-step executed must be
> > simulated in kernel execution slot, such as: branch, jal, auipc,
> > la ...
> >
> > Some instructions should be rejected for probing and we use a
> > blacklist to filter, such as: ecall, ebreak, ...
> >
> > We use ebreak & c.ebreak to replace the original instruction, and the
> > kprobe handler prepares an executable memory slot for out-of-line
> > execution with a copy of the original instruction being probed.
> > In the execution slot we add an ebreak behind the original instruction
> > to simulate a single-step mechanism.
> >
> > The patch is based on packi's work [1] and csky's work [2].
> >  - The kprobes_trampoline.S is all from packi's patch
> >  - The single-step mechanism is newly designed for riscv, which has
> >    no hw single-step trap
> >  - The simulation codes are from csky
> >  - Frankly, all codes refer to other archs' implementation
> >
> >  [1] https://lore.kernel.org/linux-riscv/20181113195804.22825-1-me@packi.ch/
> >  [2] https://lore.kernel.org/linux-csky/20200403044150.20562-9-guoren@kernel.org/
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Co-Developed-by: Patrick Stählin <me@packi.ch>
> > Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
> > Tested-by: Zong Li <zong.li@sifive.com>
> > Reviewed-by: Pekka Enberg <penberg@kernel.org>
> > Cc: Patrick Stählin <me@packi.ch>
> > Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> > Cc: Björn Töpel <bjorn.topel@gmail.com>
> > ---
> >  arch/riscv/Kconfig                            |   2 +
> >  arch/riscv/include/asm/kprobes.h              |  40 +++
> >  arch/riscv/include/asm/probes.h               |  24 ++
> >  arch/riscv/kernel/Makefile                    |   1 +
> >  arch/riscv/kernel/probes/Makefile             |   4 +
> >  arch/riscv/kernel/probes/decode-insn.c        |  48 +++
> >  arch/riscv/kernel/probes/decode-insn.h        |  18 +
> >  arch/riscv/kernel/probes/kprobes.c            | 471 ++++++++++++++++++++++++++
> >  arch/riscv/kernel/probes/kprobes_trampoline.S |  93 +++++
> >  arch/riscv/kernel/probes/simulate-insn.c      |  85 +++++
> >  arch/riscv/kernel/probes/simulate-insn.h      |  47 +++
> >  arch/riscv/kernel/traps.c                     |   9 +
> >  arch/riscv/mm/fault.c                         |   4 +
> >  13 files changed, 846 insertions(+)
> >  create mode 100644 arch/riscv/include/asm/probes.h
> >  create mode 100644 arch/riscv/kernel/probes/Makefile
> >  create mode 100644 arch/riscv/kernel/probes/decode-insn.c
> >  create mode 100644 arch/riscv/kernel/probes/decode-insn.h
> >  create mode 100644 arch/riscv/kernel/probes/kprobes.c
> >  create mode 100644 arch/riscv/kernel/probes/kprobes_trampoline.S
> >  create mode 100644 arch/riscv/kernel/probes/simulate-insn.c
> >  create mode 100644 arch/riscv/kernel/probes/simulate-insn.h
> >
> > diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> > index e70449a..b86b2a2 100644
> > --- a/arch/riscv/Kconfig
> > +++ b/arch/riscv/Kconfig
> > @@ -59,6 +59,8 @@ config RISCV
> >       select HAVE_EBPF_JIT if MMU
> >       select HAVE_FUTEX_CMPXCHG if FUTEX
> >       select HAVE_GENERIC_VDSO if MMU && 64BIT
> > +     select HAVE_KPROBES
> > +     select HAVE_KRETPROBES
> >       select HAVE_PCI
> >       select HAVE_PERF_EVENTS
> >       select HAVE_PERF_REGS
> > diff --git a/arch/riscv/include/asm/kprobes.h b/arch/riscv/include/asm/kprobes.h
> > index 56a98ea3..4647d38 100644
> > --- a/arch/riscv/include/asm/kprobes.h
> > +++ b/arch/riscv/include/asm/kprobes.h
> > @@ -11,4 +11,44 @@
> >
> >  #include <asm-generic/kprobes.h>
> >
> > +#ifdef CONFIG_KPROBES
> > +#include <linux/types.h>
> > +#include <linux/ptrace.h>
> > +#include <linux/percpu.h>
> > +
> > +#define __ARCH_WANT_KPROBES_INSN_SLOT
> > +#define MAX_INSN_SIZE                        2
> > +
> > +#define flush_insn_slot(p)           do { } while (0)
> > +#define kretprobe_blacklist_size     0
> > +
> > +#include <asm/probes.h>
> > +
> > +struct prev_kprobe {
> > +     struct kprobe *kp;
> > +     unsigned int status;
> > +};
> > +
> > +/* Single step context for kprobe */
> > +struct kprobe_step_ctx {
> > +     unsigned long ss_pending;
> > +     unsigned long match_addr;
> > +};
> > +
> > +/* per-cpu kprobe control block */
> > +struct kprobe_ctlblk {
> > +     unsigned int kprobe_status;
> > +     unsigned long saved_status;
> > +     struct prev_kprobe prev_kprobe;
> > +     struct kprobe_step_ctx ss_ctx;
> > +};
> > +
> > +void arch_remove_kprobe(struct kprobe *p);
> > +int kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr);
> > +bool kprobe_breakpoint_handler(struct pt_regs *regs);
> > +bool kprobe_single_step_handler(struct pt_regs *regs);
> > +void kretprobe_trampoline(void);
> > +void __kprobes *trampoline_probe_handler(struct pt_regs *regs);
> > +
> > +#endif /* CONFIG_KPROBES */
> >  #endif /* _ASM_RISCV_KPROBES_H */
> > diff --git a/arch/riscv/include/asm/probes.h b/arch/riscv/include/asm/probes.h
> > new file mode 100644
> > index 00000000..a787e6d
> > --- /dev/null
> > +++ b/arch/riscv/include/asm/probes.h
> > @@ -0,0 +1,24 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +#ifndef _ASM_RISCV_PROBES_H
> > +#define _ASM_RISCV_PROBES_H
> > +
> > +typedef u32 probe_opcode_t;
> > +typedef bool (probes_handler_t) (u32 opcode, unsigned long addr, struct pt_regs *);
> > +
> > +/* architecture specific copy of original instruction */
> > +struct arch_probe_insn {
> > +     probe_opcode_t *insn;
> > +     probes_handler_t *handler;
> > +     /* restore address after simulation */
> > +     unsigned long restore;
> > +};
> > +
> > +#ifdef CONFIG_KPROBES
> > +typedef u32 kprobe_opcode_t;
> > +struct arch_specific_insn {
> > +     struct arch_probe_insn api;
> > +};
> > +#endif
> > +
> > +#endif /* _ASM_RISCV_PROBES_H */
> > diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
> > index b355cf4..c3fff3e 100644
> > --- a/arch/riscv/kernel/Makefile
> > +++ b/arch/riscv/kernel/Makefile
> > @@ -29,6 +29,7 @@ obj-y       += riscv_ksyms.o
> >  obj-y        += stacktrace.o
> >  obj-y        += cacheinfo.o
> >  obj-y        += patch.o
> > +obj-y        += probes/
> >  obj-$(CONFIG_MMU) += vdso.o vdso/
> >
> >  obj-$(CONFIG_RISCV_M_MODE)   += clint.o traps_misaligned.o
> > diff --git a/arch/riscv/kernel/probes/Makefile b/arch/riscv/kernel/probes/Makefile
> > new file mode 100644
> > index 00000000..8a39507
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/Makefile
> > @@ -0,0 +1,4 @@
> > +# SPDX-License-Identifier: GPL-2.0
> > +obj-$(CONFIG_KPROBES)                += kprobes.o decode-insn.o simulate-insn.o
> > +obj-$(CONFIG_KPROBES)                += kprobes_trampoline.o
> > +CFLAGS_REMOVE_simulate-insn.o = $(CC_FLAGS_FTRACE)
> > diff --git a/arch/riscv/kernel/probes/decode-insn.c b/arch/riscv/kernel/probes/decode-insn.c
> > new file mode 100644
> > index 00000000..0876c30
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/decode-insn.c
> > @@ -0,0 +1,48 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +
> > +#include <linux/kernel.h>
> > +#include <linux/kprobes.h>
> > +#include <linux/module.h>
> > +#include <linux/kallsyms.h>
> > +#include <asm/sections.h>
> > +
> > +#include "decode-insn.h"
> > +#include "simulate-insn.h"
> > +
> > +/* Return:
> > + *   INSN_REJECTED     If instruction is one not allowed to kprobe,
> > + *   INSN_GOOD_NO_SLOT If instruction is supported but doesn't use its slot.
> > + */
> > +enum probe_insn __kprobes
> > +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *api)
> > +{
> > +     probe_opcode_t insn = le32_to_cpu(*addr);
> > +
> > +     /*
> > +      * Reject instructions list:
> > +      */
> > +     RISCV_INSN_REJECTED(system,             insn);
> > +     RISCV_INSN_REJECTED(fence,              insn);
> > +
> > +     /*
> > +      * Simulate instructions list:
> > +      * TODO: the REJECTED ones below need to be implemented
> > +      */
> > +#ifdef CONFIG_RISCV_ISA_C
> > +     RISCV_INSN_REJECTED(c_j,                insn);
> > +     RISCV_INSN_REJECTED(c_jr,               insn);
> > +     RISCV_INSN_REJECTED(c_jal,              insn);
> > +     RISCV_INSN_REJECTED(c_jalr,             insn);
> > +     RISCV_INSN_REJECTED(c_beqz,             insn);
> > +     RISCV_INSN_REJECTED(c_bnez,             insn);
> > +     RISCV_INSN_REJECTED(c_ebreak,           insn);
> > +#endif
> > +
> > +     RISCV_INSN_REJECTED(auipc,              insn);
> > +     RISCV_INSN_REJECTED(branch,             insn);
> > +
> > +     RISCV_INSN_SET_SIMULATE(jal,            insn);
> > +     RISCV_INSN_SET_SIMULATE(jalr,           insn);
> > +
> > +     return INSN_GOOD;
> > +}
>
> IIRC I mentioned this in the original version, but I'd anticipate that we need
> to at least prevent LR/SC sequences from being probed.
        /*
         * Instructions which load PC relative literals are not going to work
         * when executed from an XOL slot. Instructions doing an exclusive
         * load/store are not going to complete successfully when single-step
         * exception handling happens in the middle of the sequence.
         */
        if (aarch64_insn_uses_literal(insn) ||
            aarch64_insn_is_exclusive(insn))
                return false;

Arm64 only prevents probing the exclusive load/store instructions
themselves, not instructions in the middle of the sequence. So I'll
just add:
RISCV_INSN_REJECTED(exclusive, insn);

Rejecting the whole LR/SC sequence could be a separate patch.

>
> > diff --git a/arch/riscv/kernel/probes/decode-insn.h b/arch/riscv/kernel/probes/decode-insn.h
> > new file mode 100644
> > index 00000000..42269a7
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/decode-insn.h
> > @@ -0,0 +1,18 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +
> > +#ifndef _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> > +#define _RISCV_KERNEL_KPROBES_DECODE_INSN_H
> > +
> > +#include <asm/sections.h>
> > +#include <asm/kprobes.h>
> > +
> > +enum probe_insn {
> > +     INSN_REJECTED,
> > +     INSN_GOOD_NO_SLOT,
> > +     INSN_GOOD,
> > +};
> > +
> > +enum probe_insn __kprobes
> > +riscv_probe_decode_insn(probe_opcode_t *addr, struct arch_probe_insn *asi);
> > +
> > +#endif /* _RISCV_KERNEL_KPROBES_DECODE_INSN_H */
> > diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
> > new file mode 100644
> > index 00000000..31b6196
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/kprobes.c
> > @@ -0,0 +1,471 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +
> > +#include <linux/kprobes.h>
> > +#include <linux/extable.h>
> > +#include <linux/slab.h>
> > +#include <linux/stop_machine.h>
> > +#include <asm/ptrace.h>
> > +#include <linux/uaccess.h>
> > +#include <asm/sections.h>
> > +#include <asm/cacheflush.h>
> > +#include <asm/bug.h>
> > +#include <asm/patch.h>
> > +
> > +#include "decode-insn.h"
> > +
> > +DEFINE_PER_CPU(struct kprobe *, current_kprobe) = NULL;
> > +DEFINE_PER_CPU(struct kprobe_ctlblk, kprobe_ctlblk);
> > +
> > +static void __kprobes
> > +post_kprobe_handler(struct kprobe_ctlblk *, struct pt_regs *);
> > +
> > +static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
> > +{
> > +     unsigned long offset = GET_INSN_LENGTH(p->opcode);
> > +
> > +     p->ainsn.api.restore = (unsigned long)p->addr + offset;
> > +
> > +     patch_text(p->ainsn.api.insn, p->opcode);
> > +     patch_text((void *)((unsigned long)(p->ainsn.api.insn) + offset),
> > +                __BUG_INSN_32);
> > +}
> > +
> > +static void __kprobes arch_prepare_simulate(struct kprobe *p)
> > +{
> > +     p->ainsn.api.restore = 0;
> > +}
> > +
> > +static void __kprobes arch_simulate_insn(struct kprobe *p, struct pt_regs *regs)
> > +{
> > +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +
> > +     if (p->ainsn.api.handler)
> > +             p->ainsn.api.handler((u32)p->opcode,
> > +                                     (unsigned long)p->addr, regs);
> > +
> > +     post_kprobe_handler(kcb, regs);
> > +}
> > +
> > +int __kprobes arch_prepare_kprobe(struct kprobe *p)
> > +{
> > +     unsigned long probe_addr = (unsigned long)p->addr;
> > +
> > +     if (probe_addr & 0x1) {
> > +             pr_warn("Address not aligned.\n");
> > +
> > +             return -EINVAL;
> > +     }
> > +
> > +     /* copy instruction */
> > +     p->opcode = le32_to_cpu(*p->addr);
> > +
> > +     /* decode instruction */
> > +     switch (riscv_probe_decode_insn(p->addr, &p->ainsn.api)) {
> > +     case INSN_REJECTED:     /* insn not supported */
> > +             return -EINVAL;
> > +
> > +     case INSN_GOOD_NO_SLOT: /* insn need simulation */
> > +             p->ainsn.api.insn = NULL;
> > +             break;
> > +
> > +     case INSN_GOOD: /* instruction uses slot */
> > +             p->ainsn.api.insn = get_insn_slot();
> > +             if (!p->ainsn.api.insn)
> > +                     return -ENOMEM;
> > +             break;
> > +     }
> > +
> > +     /* prepare the instruction */
> > +     if (p->ainsn.api.insn)
> > +             arch_prepare_ss_slot(p);
> > +     else
> > +             arch_prepare_simulate(p);
> > +
> > +     return 0;
> > +}
> > +
> > +/* install breakpoint in text */
> > +void __kprobes arch_arm_kprobe(struct kprobe *p)
> > +{
> > +     if ((p->opcode & __INSN_LENGTH_MASK) == __INSN_LENGTH_32)
> > +             patch_text(p->addr, __BUG_INSN_32);
> > +     else
> > +             patch_text(p->addr, __BUG_INSN_16);
> > +}
> > +
> > +/* remove breakpoint from text */
> > +void __kprobes arch_disarm_kprobe(struct kprobe *p)
> > +{
> > +     patch_text(p->addr, p->opcode);
> > +}
> > +
> > +void __kprobes arch_remove_kprobe(struct kprobe *p)
> > +{
> > +}
> > +
> > +static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
> > +{
> > +     kcb->prev_kprobe.kp = kprobe_running();
> > +     kcb->prev_kprobe.status = kcb->kprobe_status;
> > +}
> > +
> > +static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
> > +{
> > +     __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
> > +     kcb->kprobe_status = kcb->prev_kprobe.status;
> > +}
> > +
> > +static void __kprobes set_current_kprobe(struct kprobe *p)
> > +{
> > +     __this_cpu_write(current_kprobe, p);
> > +}
> > +
> > +/*
> > + * Interrupts need to be disabled before single-step mode is set, and not
> > + * reenabled until after single-step mode ends.
> > + * Without disabling interrupt on local CPU, there is a chance of
> > + * interrupt occurrence in the period of exception return and  start of
> > + * out-of-line single-step, that result in wrongly single stepping
> > + * into the interrupt handler.
> > + */
> > +static void __kprobes kprobes_save_local_irqflag(struct kprobe_ctlblk *kcb,
> > +                                             struct pt_regs *regs)
> > +{
> > +     kcb->saved_status = regs->status;
> > +     regs->status &= ~SR_SPIE;
> > +}
> > +
> > +static void __kprobes kprobes_restore_local_irqflag(struct kprobe_ctlblk *kcb,
> > +                                             struct pt_regs *regs)
> > +{
> > +     regs->status = kcb->saved_status;
> > +}
> > +
> > +static void __kprobes
> > +set_ss_context(struct kprobe_ctlblk *kcb, unsigned long addr, struct kprobe *p)
> > +{
> > +     unsigned long offset = GET_INSN_LENGTH(p->opcode);
> > +
> > +     kcb->ss_ctx.ss_pending = true;
> > +     kcb->ss_ctx.match_addr = addr + offset;
> > +}
> > +
> > +static void __kprobes clear_ss_context(struct kprobe_ctlblk *kcb)
> > +{
> > +     kcb->ss_ctx.ss_pending = false;
> > +     kcb->ss_ctx.match_addr = 0;
> > +}
> > +
> > +static void __kprobes setup_singlestep(struct kprobe *p,
> > +                                    struct pt_regs *regs,
> > +                                    struct kprobe_ctlblk *kcb, int reenter)
> > +{
> > +     unsigned long slot;
> > +
> > +     if (reenter) {
> > +             save_previous_kprobe(kcb);
> > +             set_current_kprobe(p);
> > +             kcb->kprobe_status = KPROBE_REENTER;
> > +     } else {
> > +             kcb->kprobe_status = KPROBE_HIT_SS;
> > +     }
> > +
> > +     if (p->ainsn.api.insn) {
> > +             /* prepare for single stepping */
> > +             slot = (unsigned long)p->ainsn.api.insn;
> > +
> > +             set_ss_context(kcb, slot, p);   /* mark pending ss */
> > +
> > +             /* IRQs and single stepping do not mix well. */
> > +             kprobes_save_local_irqflag(kcb, regs);
> > +
> > +             instruction_pointer_set(regs, slot);
> > +     } else {
> > +             /* insn simulation */
> > +             arch_simulate_insn(p, regs);
> > +     }
> > +}
> > +
> > +static int __kprobes reenter_kprobe(struct kprobe *p,
> > +                                 struct pt_regs *regs,
> > +                                 struct kprobe_ctlblk *kcb)
> > +{
> > +     switch (kcb->kprobe_status) {
> > +     case KPROBE_HIT_SSDONE:
> > +     case KPROBE_HIT_ACTIVE:
> > +             kprobes_inc_nmissed_count(p);
> > +             setup_singlestep(p, regs, kcb, 1);
> > +             break;
> > +     case KPROBE_HIT_SS:
> > +     case KPROBE_REENTER:
> > +             pr_warn("Unrecoverable kprobe detected.\n");
> > +             dump_kprobe(p);
> > +             BUG();
> > +             break;
> > +     default:
> > +             WARN_ON(1);
> > +             return 0;
> > +     }
> > +
> > +     return 1;
> > +}
> > +
> > +static void __kprobes
> > +post_kprobe_handler(struct kprobe_ctlblk *kcb, struct pt_regs *regs)
> > +{
> > +     struct kprobe *cur = kprobe_running();
> > +
> > +     if (!cur)
> > +             return;
> > +
> > +     /* return addr restore if non-branching insn */
> > +     if (cur->ainsn.api.restore != 0)
> > +             regs->epc = cur->ainsn.api.restore;
> > +
> > +     /* restore back original saved kprobe variables and continue */
> > +     if (kcb->kprobe_status == KPROBE_REENTER) {
> > +             restore_previous_kprobe(kcb);
> > +             return;
> > +     }
> > +
> > +     /* call post handler */
> > +     kcb->kprobe_status = KPROBE_HIT_SSDONE;
> > +     if (cur->post_handler)  {
> > +             /* post_handler can hit breakpoint and single step
> > +              * again, so we enable D-flag for recursive exception.
> > +              */
> > +             cur->post_handler(cur, regs, 0);
> > +     }
> > +
> > +     reset_current_kprobe();
> > +}
> > +
> > +int __kprobes kprobe_fault_handler(struct pt_regs *regs, unsigned int trapnr)
> > +{
> > +     struct kprobe *cur = kprobe_running();
> > +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +
> > +     switch (kcb->kprobe_status) {
> > +     case KPROBE_HIT_SS:
> > +     case KPROBE_REENTER:
> > +             /*
> > +              * We are here because the instruction being single
> > +              * stepped caused a page fault. We reset the current
> > +              * kprobe and the ip points back to the probe address
> > +              * and allow the page fault handler to continue as a
> > +              * normal page fault.
> > +              */
> > +             regs->epc = (unsigned long) cur->addr;
> > +             if (!instruction_pointer(regs))
> > +                     BUG();
> > +
> > +             if (kcb->kprobe_status == KPROBE_REENTER)
> > +                     restore_previous_kprobe(kcb);
> > +             else
> > +                     reset_current_kprobe();
> > +
> > +             break;
> > +     case KPROBE_HIT_ACTIVE:
> > +     case KPROBE_HIT_SSDONE:
> > +             /*
> > +              * We increment the nmissed count for accounting,
> > +              * we can also use npre/npostfault count for accounting
> > +              * these specific fault cases.
> > +              */
> > +             kprobes_inc_nmissed_count(cur);
> > +
> > +             /*
> > +              * We come here because instructions in the pre/post
> > +              * handler caused the page_fault, this could happen
> > +              * if handler tries to access user space by
> > +              * copy_from_user(), get_user() etc. Let the
> > +              * user-specified handler try to fix it first.
> > +              */
> > +             if (cur->fault_handler && cur->fault_handler(cur, regs, trapnr))
> > +                     return 1;
> > +
> > +             /*
> > +              * In case the user-specified fault handler returned
> > +              * zero, try to fix up.
> > +              */
> > +             if (fixup_exception(regs))
> > +                     return 1;
> > +     }
> > +     return 0;
> > +}
> > +
> > +bool __kprobes
> > +kprobe_breakpoint_handler(struct pt_regs *regs)
> > +{
> > +     struct kprobe *p, *cur_kprobe;
> > +     struct kprobe_ctlblk *kcb;
> > +     unsigned long addr = instruction_pointer(regs);
> > +
> > +     kcb = get_kprobe_ctlblk();
> > +     cur_kprobe = kprobe_running();
> > +
> > +     p = get_kprobe((kprobe_opcode_t *) addr);
> > +
> > +     if (p) {
> > +             if (cur_kprobe) {
> > +                     if (reenter_kprobe(p, regs, kcb))
> > +                             return true;
> > +             } else {
> > +                     /* Probe hit */
> > +                     set_current_kprobe(p);
> > +                     kcb->kprobe_status = KPROBE_HIT_ACTIVE;
> > +
> > +                     /*
> > +                      * If we have no pre-handler or it returned 0, we
> > +                      * continue with normal processing.  If we have a
> > +                      * pre-handler and it returned non-zero, it will
> > +                      * modify the execution path and no need to single
> > +                      * stepping. Let's just reset current kprobe and exit.
> > +                      *
> > +                      * pre_handler can hit a breakpoint and can step thru
> > +                      * before return.
> > +                      */
> > +                     if (!p->pre_handler || !p->pre_handler(p, regs))
> > +                             setup_singlestep(p, regs, kcb, 0);
> > +                     else
> > +                             reset_current_kprobe();
> > +             }
> > +             return true;
> > +     }
> > +
> > +     /*
> > +      * The breakpoint instruction was removed right
> > +      * after we hit it.  Another cpu has removed
> > +      * either a probepoint or a debugger breakpoint
> > +      * at this address.  In either case, no further
> > +      * handling of this interrupt is appropriate.
> > +      * Return back to original instruction, and continue.
> > +      */
> > +     return false;
> > +}
> > +
> > +bool __kprobes
> > +kprobe_single_step_handler(struct pt_regs *regs)
> > +{
> > +     struct kprobe_ctlblk *kcb = get_kprobe_ctlblk();
> > +
> > +     if ((kcb->ss_ctx.ss_pending)
> > +         && (kcb->ss_ctx.match_addr == instruction_pointer(regs))) {
> > +             clear_ss_context(kcb);  /* clear pending ss */
> > +
> > +             kprobes_restore_local_irqflag(kcb, regs);
> > +
> > +             post_kprobe_handler(kcb, regs);
> > +             return true;
> > +     }
> > +     return false;
> > +}
> > +
> > +/*
> > + * Provide a blacklist of symbols identifying ranges which cannot be kprobed.
> > + * This blacklist is exposed to userspace via debugfs (kprobes/blacklist).
> > + */
> > +int __init arch_populate_kprobe_blacklist(void)
> > +{
> > +     return kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
> > +                                      (unsigned long)__irqentry_text_end);
> > +}
> > +
> > +void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs)
> > +{
> > +     struct kretprobe_instance *ri = NULL;
> > +     struct hlist_head *head, empty_rp;
> > +     struct hlist_node *tmp;
> > +     unsigned long flags, orig_ret_address = 0;
> > +     unsigned long trampoline_address =
> > +             (unsigned long)&kretprobe_trampoline;
> > +     kprobe_opcode_t *correct_ret_addr = NULL;
> > +
> > +     INIT_HLIST_HEAD(&empty_rp);
> > +     kretprobe_hash_lock(current, &head, &flags);
> > +
> > +     /*
> > +      * It is possible to have multiple instances associated with a given
> > +      * task either because multiple functions in the call path have
> > +      * return probes installed on them, and/or more than one
> > +      * return probe was registered for a target function.
> > +      *
> > +      * We can handle this because:
> > +      *     - instances are always pushed into the head of the list
> > +      *     - when multiple return probes are registered for the same
> > +      *       function, the (chronologically) first instance's ret_addr
> > +      *       will be the real return address, and all the rest will
> > +      *       point to kretprobe_trampoline.
> > +      */
> > +     hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> > +             if (ri->task != current)
> > +                     /* another task is sharing our hash bucket */
> > +                     continue;
> > +
> > +             orig_ret_address = (unsigned long)ri->ret_addr;
> > +
> > +             if (orig_ret_address != trampoline_address)
> > +                     /*
> > +                      * This is the real return address. Any other
> > +                      * instances associated with this task are for
> > +                      * other calls deeper on the call stack
> > +                      */
> > +                     break;
> > +     }
> > +
> > +     kretprobe_assert(ri, orig_ret_address, trampoline_address);
> > +
> > +     correct_ret_addr = ri->ret_addr;
> > +     hlist_for_each_entry_safe(ri, tmp, head, hlist) {
> > +             if (ri->task != current)
> > +                     /* another task is sharing our hash bucket */
> > +                     continue;
> > +
> > +             orig_ret_address = (unsigned long)ri->ret_addr;
> > +             if (ri->rp && ri->rp->handler) {
> > +                     __this_cpu_write(current_kprobe, &ri->rp->kp);
> > +                     get_kprobe_ctlblk()->kprobe_status = KPROBE_HIT_ACTIVE;
> > +                     ri->ret_addr = correct_ret_addr;
> > +                     ri->rp->handler(ri, regs);
> > +                     __this_cpu_write(current_kprobe, NULL);
> > +             }
> > +
> > +             recycle_rp_inst(ri, &empty_rp);
> > +
> > +             if (orig_ret_address != trampoline_address)
> > +                     /*
> > +                      * This is the real return address. Any other
> > +                      * instances associated with this task are for
> > +                      * other calls deeper on the call stack
> > +                      */
> > +                     break;
> > +     }
> > +
> > +     kretprobe_hash_unlock(current, &flags);
> > +
> > +     hlist_for_each_entry_safe(ri, tmp, &empty_rp, hlist) {
> > +             hlist_del(&ri->hlist);
> > +             kfree(ri);
> > +     }
> > +     return (void *)orig_ret_address;
> > +}
> > +
> > +void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri,
> > +                                   struct pt_regs *regs)
> > +{
> > +     ri->ret_addr = (kprobe_opcode_t *)regs->ra;
> > +     regs->ra = (unsigned long) &kretprobe_trampoline;
> > +}
> > +
> > +int __kprobes arch_trampoline_kprobe(struct kprobe *p)
> > +{
> > +     return 0;
> > +}
> > +
> > +int __init arch_init_kprobes(void)
> > +{
> > +     return 0;
> > +}
> > diff --git a/arch/riscv/kernel/probes/kprobes_trampoline.S b/arch/riscv/kernel/probes/kprobes_trampoline.S
> > new file mode 100644
> > index 00000000..6e85d02
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/kprobes_trampoline.S
> > @@ -0,0 +1,93 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +/*
> > + * Author: Patrick Stählin <me@packi.ch>
> > + */
> > +#include <linux/linkage.h>
> > +
> > +#include <asm/asm.h>
> > +#include <asm/asm-offsets.h>
> > +
> > +     .text
> > +     .altmacro
> > +
> > +     .macro save_all_base_regs
> > +     REG_S x1,  PT_RA(sp)
> > +     REG_S x3,  PT_GP(sp)
> > +     REG_S x4,  PT_TP(sp)
> > +     REG_S x5,  PT_T0(sp)
> > +     REG_S x6,  PT_T1(sp)
> > +     REG_S x7,  PT_T2(sp)
> > +     REG_S x8,  PT_S0(sp)
> > +     REG_S x9,  PT_S1(sp)
> > +     REG_S x10, PT_A0(sp)
> > +     REG_S x11, PT_A1(sp)
> > +     REG_S x12, PT_A2(sp)
> > +     REG_S x13, PT_A3(sp)
> > +     REG_S x14, PT_A4(sp)
> > +     REG_S x15, PT_A5(sp)
> > +     REG_S x16, PT_A6(sp)
> > +     REG_S x17, PT_A7(sp)
> > +     REG_S x18, PT_S2(sp)
> > +     REG_S x19, PT_S3(sp)
> > +     REG_S x20, PT_S4(sp)
> > +     REG_S x21, PT_S5(sp)
> > +     REG_S x22, PT_S6(sp)
> > +     REG_S x23, PT_S7(sp)
> > +     REG_S x24, PT_S8(sp)
> > +     REG_S x25, PT_S9(sp)
> > +     REG_S x26, PT_S10(sp)
> > +     REG_S x27, PT_S11(sp)
> > +     REG_S x28, PT_T3(sp)
> > +     REG_S x29, PT_T4(sp)
> > +     REG_S x30, PT_T5(sp)
> > +     REG_S x31, PT_T6(sp)
> > +     .endm
> > +
> > +     .macro restore_all_base_regs
> > +     REG_L x3,  PT_GP(sp)
> > +     REG_L x4,  PT_TP(sp)
> > +     REG_L x5,  PT_T0(sp)
> > +     REG_L x6,  PT_T1(sp)
> > +     REG_L x7,  PT_T2(sp)
> > +     REG_L x8,  PT_S0(sp)
> > +     REG_L x9,  PT_S1(sp)
> > +     REG_L x10, PT_A0(sp)
> > +     REG_L x11, PT_A1(sp)
> > +     REG_L x12, PT_A2(sp)
> > +     REG_L x13, PT_A3(sp)
> > +     REG_L x14, PT_A4(sp)
> > +     REG_L x15, PT_A5(sp)
> > +     REG_L x16, PT_A6(sp)
> > +     REG_L x17, PT_A7(sp)
> > +     REG_L x18, PT_S2(sp)
> > +     REG_L x19, PT_S3(sp)
> > +     REG_L x20, PT_S4(sp)
> > +     REG_L x21, PT_S5(sp)
> > +     REG_L x22, PT_S6(sp)
> > +     REG_L x23, PT_S7(sp)
> > +     REG_L x24, PT_S8(sp)
> > +     REG_L x25, PT_S9(sp)
> > +     REG_L x26, PT_S10(sp)
> > +     REG_L x27, PT_S11(sp)
> > +     REG_L x28, PT_T3(sp)
> > +     REG_L x29, PT_T4(sp)
> > +     REG_L x30, PT_T5(sp)
> > +     REG_L x31, PT_T6(sp)
> > +     .endm
> > +
> > +ENTRY(kretprobe_trampoline)
> > +     addi sp, sp, -(PT_SIZE_ON_STACK)
> > +     save_all_base_regs
> > +
> > +     move a0, sp /* pt_regs */
> > +
> > +     call trampoline_probe_handler
> > +
> > +     /* use the result as the return-address */
> > +     move ra, a0
> > +
> > +     restore_all_base_regs
> > +     addi sp, sp, PT_SIZE_ON_STACK
> > +
> > +     ret
> > +ENDPROC(kretprobe_trampoline)
> > diff --git a/arch/riscv/kernel/probes/simulate-insn.c b/arch/riscv/kernel/probes/simulate-insn.c
> > new file mode 100644
> > index 00000000..2519ce2
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/simulate-insn.c
> > @@ -0,0 +1,85 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +
> > +#include <linux/bitops.h>
> > +#include <linux/kernel.h>
> > +#include <linux/kprobes.h>
> > +
> > +#include "decode-insn.h"
> > +#include "simulate-insn.h"
> > +
> > +static inline bool rv_insn_reg_get_val(struct pt_regs *regs, u32 index,
> > +                                    unsigned long *ptr)
> > +{
> > +     if (index == 0)
> > +             *ptr = 0;
> > +     else if (index <= 31)
> > +             *ptr = *((unsigned long *)regs + index);
> > +     else
> > +             return false;
> > +
> > +     return true;
> > +}
> > +
> > +static inline bool rv_insn_reg_set_val(struct pt_regs *regs, u32 index,
> > +                                    unsigned long val)
> > +{
> > +     if (index == 0)
> > +             return false;
> > +     else if (index <= 31)
> > +             *((unsigned long *)regs + index) = val;
> > +     else
> > +             return false;
> > +
> > +     return true;
> > +}
> > +
> > +bool __kprobes simulate_jal(u32 opcode, unsigned long addr, struct pt_regs *regs)
> > +{
> > +     /*
> > +      *     31    30       21    20     19        12 11 7 6      0
> > +      * imm [20] | imm[10:1] | imm[11] | imm[19:12] | rd | opcode
> > +      *     1         10          1           8       5    JAL/J
> > +      */
> > +     bool ret;
> > +     u32 imm;
> > +     u32 index = (opcode >> 7) & 0x1f;
> > +
> > +     ret = rv_insn_reg_set_val(regs, index, addr + 4);
> > +     if (!ret)
> > +             return ret;
> > +
> > +     imm  = ((opcode >> 21) & 0x3ff) << 1;
> > +     imm |= ((opcode >> 20) & 0x1)   << 11;
> > +     imm |= ((opcode >> 12) & 0xff)  << 12;
> > +     imm |= ((opcode >> 31) & 0x1)   << 20;
> > +
> > +     instruction_pointer_set(regs, addr + sign_extend32(imm, 20));
> > +
> > +     return ret;
> > +}
> > +
> > +bool __kprobes simulate_jalr(u32 opcode, unsigned long addr, struct pt_regs *regs)
> > +{
> > +     /*
> > +      * 31          20 19 15 14 12 11 7 6      0
> > +      *  offset[11:0] | rs1 | 000 | rd | opcode
> > +      *      12         5      3    5    JALR/JR
> > +      */
> > +     bool ret;
> > +     unsigned long base_addr;
> > +     u32 imm = (opcode >> 20) & 0xfff;
> > +     u32 rd_index = (opcode >> 7) & 0x1f;
> > +     u32 rs1_index = (opcode >> 15) & 0x1f;
> > +
> > +     ret = rv_insn_reg_set_val(regs, rd_index, addr + 4);
> > +     if (!ret)
> > +             return ret;
> > +
> > +     ret = rv_insn_reg_get_val(regs, rs1_index, &base_addr);
> > +     if (!ret)
> > +             return ret;
> > +
> > +     instruction_pointer_set(regs, (base_addr + sign_extend32(imm, 11)) & ~1);
> > +
> > +     return ret;
> > +}
> > diff --git a/arch/riscv/kernel/probes/simulate-insn.h b/arch/riscv/kernel/probes/simulate-insn.h
> > new file mode 100644
> > index 00000000..a62d784
> > --- /dev/null
> > +++ b/arch/riscv/kernel/probes/simulate-insn.h
> > @@ -0,0 +1,47 @@
> > +/* SPDX-License-Identifier: GPL-2.0+ */
> > +
> > +#ifndef _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> > +#define _RISCV_KERNEL_PROBES_SIMULATE_INSN_H
> > +
> > +#define __RISCV_INSN_FUNCS(name, mask, val)                          \
> > +static __always_inline bool riscv_insn_is_##name(probe_opcode_t code)        \
> > +{                                                                    \
> > +     BUILD_BUG_ON(~(mask) & (val));                                  \
> > +     return (code & (mask)) == (val);                                \
> > +}                                                                    \
> > +bool simulate_##name(u32 opcode, unsigned long addr,                 \
> > +                  struct pt_regs *regs);
> > +
> > +#define RISCV_INSN_REJECTED(name, code)                                      \
> > +     do {                                                            \
> > +             if (riscv_insn_is_##name(code)) {                       \
> > +                     return INSN_REJECTED;                           \
> > +             }                                                       \
> > +     } while (0)
> > +
> > +__RISCV_INSN_FUNCS(system,   0x7f, 0x73)
> > +__RISCV_INSN_FUNCS(fence,    0x7f, 0x0f)
> > +
> > +#define RISCV_INSN_SET_SIMULATE(name, code)                          \
> > +     do {                                                            \
> > +             if (riscv_insn_is_##name(code)) {                       \
> > +                     api->handler = simulate_##name;                 \
> > +                     return INSN_GOOD_NO_SLOT;                       \
> > +             }                                                       \
> > +     } while (0)
> > +
> > +__RISCV_INSN_FUNCS(c_j,              0xe003, 0xa001)
> > +__RISCV_INSN_FUNCS(c_jr,     0xf007, 0x8002)
> > +__RISCV_INSN_FUNCS(c_jal,    0xe003, 0x2001)
> > +__RISCV_INSN_FUNCS(c_jalr,   0xf007, 0x9002)
> > +__RISCV_INSN_FUNCS(c_beqz,   0xe003, 0xc001)
> > +__RISCV_INSN_FUNCS(c_bnez,   0xe003, 0xe001)
> > +__RISCV_INSN_FUNCS(c_ebreak, 0xffff, 0x9002)
> > +
> > +__RISCV_INSN_FUNCS(auipc,    0x7f, 0x17)
> > +__RISCV_INSN_FUNCS(branch,   0x7f, 0x63)
> > +
> > +__RISCV_INSN_FUNCS(jal,              0x7f, 0x6f)
> > +__RISCV_INSN_FUNCS(jalr,     0x707f, 0x67)
> > +
> > +#endif /* _RISCV_KERNEL_PROBES_SIMULATE_INSN_H */
> > diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
> > index 7d95cce..c6846dd 100644
> > --- a/arch/riscv/kernel/traps.c
> > +++ b/arch/riscv/kernel/traps.c
> > @@ -12,6 +12,7 @@
> >  #include <linux/signal.h>
> >  #include <linux/kdebug.h>
> >  #include <linux/uaccess.h>
> > +#include <linux/kprobes.h>
> >  #include <linux/mm.h>
> >  #include <linux/module.h>
> >  #include <linux/irq.h>
> > @@ -145,6 +146,14 @@ static inline unsigned long get_break_insn_length(unsigned long pc)
> >
> >  asmlinkage __visible void do_trap_break(struct pt_regs *regs)
> >  {
> > +#ifdef CONFIG_KPROBES
> > +     if (kprobe_single_step_handler(regs))
> > +             return;
> > +
> > +     if (kprobe_breakpoint_handler(regs))
> > +             return;
> > +#endif
> > +
> >       if (user_mode(regs))
> >               force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc);
> >  #ifdef CONFIG_KGDB
> > diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> > index ae7b7fe..da0c08c 100644
> > --- a/arch/riscv/mm/fault.c
> > +++ b/arch/riscv/mm/fault.c
> > @@ -13,6 +13,7 @@
> >  #include <linux/perf_event.h>
> >  #include <linux/signal.h>
> >  #include <linux/uaccess.h>
> > +#include <linux/kprobes.h>
> >
> >  #include <asm/pgalloc.h>
> >  #include <asm/ptrace.h>
> > @@ -40,6 +41,9 @@ asmlinkage void do_page_fault(struct pt_regs *regs)
> >       tsk = current;
> >       mm = tsk->mm;
> >
> > +     if (kprobe_page_fault(regs, cause))
> > +             return;
> > +
> >       /*
> >        * Fault-in kernel-space virtual memory on-demand.
> >        * The 'reference' page table is init_mm.pgd.



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

Thread overview: 24+ messages
2020-07-13 23:39 [PATCH v3 0/7] riscv: Add k/uprobe supported guoren
2020-07-13 23:39 ` [PATCH v3 1/7] RISC-V: Implement ptrace regs and stack API guoren
2020-07-14 11:25   ` Masami Hiramatsu
2020-07-13 23:39 ` [PATCH v3 2/7] riscv: Fixup compile error BUILD_BUG_ON failed guoren
2020-07-13 23:39 ` [PATCH v3 3/7] riscv: Fixup kprobes handler couldn't change pc guoren
2020-07-14 11:32   ` Masami Hiramatsu
2020-08-14 22:36   ` Palmer Dabbelt
2020-08-17 12:47     ` Guo Ren
2020-07-13 23:39 ` [PATCH v3 4/7] riscv: Add kprobes supported guoren
2020-08-14 22:36   ` Palmer Dabbelt
2020-08-17 13:48     ` Guo Ren
2020-07-13 23:39 ` [PATCH v3 5/7] riscv: Add uprobes supported guoren
2020-07-13 23:39 ` [PATCH v3 6/7] riscv: Add KPROBES_ON_FTRACE supported guoren
2020-07-14 11:37   ` Masami Hiramatsu
2020-07-14 16:24     ` Guo Ren
2020-07-21 13:27       ` Masami Hiramatsu
2020-07-22  8:39         ` Guo Ren
2020-07-23 15:55           ` Masami Hiramatsu
2020-07-22 13:31         ` Guo Ren
2020-07-23 16:11           ` Masami Hiramatsu
2020-07-13 23:39 ` [PATCH v3 7/7] riscv: Add support for function error injection guoren
2020-07-14 11:43   ` Masami Hiramatsu
2020-07-14 11:23 ` [PATCH v3 0/7] riscv: Add k/uprobe supported Masami Hiramatsu
2020-07-15  6:45   ` Guo Ren
