* [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper
@ 2021-06-29 19:29 Jiri Olsa
  2021-06-29 19:29 ` [PATCH bpf-next 1/5] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Jiri Olsa @ 2021-06-29 19:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu

hi,
adding a bpf_get_func_ip helper that returns the IP address of
the traced function for trampoline and kprobe programs.

There are two separate implementations of the bpf_get_func_ip
helper: one for trampoline progs and one for kprobe/kretprobe
progs.

The trampoline helper call is replaced/inlined by the verifier
with a single instruction that loads the address from (ctx - 8).
The kprobe/kretprobe variant is an actual helper call that
returns the prepared address of the probed function.
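
For illustration, the helper is used the same way in both cases;
a minimal sketch abridged from the selftest in patch 5:

SEC("fentry/bpf_fentry_test1")
int BPF_PROG(test1, int a)
{
	/* inlined by the verifier into a single load from ctx - 8 */
	__u64 addr = bpf_get_func_ip(ctx);

	return 0;
}

SEC("kprobe/bpf_fentry_test3")
int test3(struct pt_regs *ctx)
{
	/* actual helper call, resolved from the current kprobe */
	__u64 addr = bpf_get_func_ip(ctx);

	return 0;
}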

The 3 extra trampoline instructions for storing the IP address
are now emitted only when needed, and I'm not completely sure
that optimization is necessary, so I plan to run some benchmarks
to see whether the overhead is noticeable, hence the RFC. I'm
also not completely sure about the kprobe/kretprobe
implementation.

Also available at:
  https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
  bpf/get_func_ip

thanks,
jirka


---
Jiri Olsa (5):
      bpf, x86: Store caller's ip in trampoline stack
      bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
      bpf: Add bpf_get_func_ip helper for tracing programs
      bpf: Add bpf_get_func_ip helper for kprobe programs
      selftests/bpf: Add test for bpf_get_func_ip helper

 arch/x86/net/bpf_jit_comp.c                               | 19 +++++++++++++++++++
 include/linux/bpf.h                                       |  5 +++++
 include/linux/filter.h                                    |  3 ++-
 include/uapi/linux/bpf.h                                  |  7 +++++++
 kernel/bpf/trampoline.c                                   | 12 +++++++++---
 kernel/bpf/verifier.c                                     | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c                                  | 29 +++++++++++++++++++++++++++++
 kernel/trace/trace_kprobe.c                               | 20 ++++++++++++++++++--
 kernel/trace/trace_probe.h                                |  5 +++++
 tools/include/uapi/linux/bpf.h                            |  7 +++++++
 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/get_func_ip_test.c      | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 12 files changed, 260 insertions(+), 6 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c



* [PATCH bpf-next 1/5] bpf, x86: Store caller's ip in trampoline stack
  2021-06-29 19:29 [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
@ 2021-06-29 19:29 ` Jiri Olsa
  2021-06-29 19:29 ` [PATCH bpf-next 2/5] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2021-06-29 19:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu

Storing the caller's IP on the trampoline's stack. Trampoline
programs can read the IP at address (ctx - 8), so there's no
change in the program's arguments interface.

The IP address is taken from [fp + 8], which is the return
address of the initial 'call fentry' call into the trampoline.

This IP address will be returned via the bpf_get_func_ip
helper, which is added in the following patches.
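
For reference, a rough sketch of the resulting x86_64 trampoline
stack layout (exact slots depend on the trampoline flags; ctx is
the pointer passed to the programs):

	rbp + 8    return address of the patched 'call fentry'
	           (traced function + X86_PATCH_SIZE)
	rbp        saved rbp
	 ...       other trampoline slots
	ctx        saved function arguments
	ctx - 8    IP of the traced function
	           (= [rbp + 8] - X86_PATCH_SIZE)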

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 19 +++++++++++++++++++
 include/linux/bpf.h         |  5 +++++
 2 files changed, 24 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e835164189f1..c320b3ce7b58 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1951,6 +1951,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	if (flags & BPF_TRAMP_F_CALL_ORIG)
 		stack_size += 8; /* room for return value of orig_call */
 
+	if (flags & BPF_TRAMP_F_IP_ARG)
+		stack_size += 8; /* room for IP address argument */
+
 	if (flags & BPF_TRAMP_F_SKIP_FRAME)
 		/* skip patched call instruction and point orig_call to actual
 		 * body of the kernel function.
@@ -1964,6 +1967,22 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
 	EMIT1(0x53);		 /* push rbx */
 
+	if (flags & BPF_TRAMP_F_IP_ARG) {
+		/* Store IP address of the traced function:
+		 * mov rax, QWORD PTR [rbp + 8]
+		 * sub rax, X86_PATCH_SIZE
+		 * mov QWORD PTR [rbp - stack_size], rax
+		 */
+		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
+		EMIT4(0x48, 0x83, 0xe8, X86_PATCH_SIZE);
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -stack_size);
+
+		/* Continue with stack_size for regs storage, stack will
+		 * be correctly restored with 'leave' instruction.
+		 */
+		stack_size -= 8;
+	}
+
 	save_regs(m, &prog, nr_args, stack_size);
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f309fc1509f2..6b3da9bc3d16 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -554,6 +554,11 @@ struct btf_func_model {
  */
 #define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
 
+/* Store IP address of the caller on the trampoline stack,
+ * so it's available to the trampoline's programs.
+ */
+#define BPF_TRAMP_F_IP_ARG		BIT(3)
+
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
  * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
  */
-- 
2.31.1



* [PATCH bpf-next 2/5] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
  2021-06-29 19:29 [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
  2021-06-29 19:29 ` [PATCH bpf-next 1/5] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
@ 2021-06-29 19:29 ` Jiri Olsa
  2021-06-29 19:29 ` [PATCH bpf-next 3/5] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2021-06-29 19:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu

Enabling BPF_TRAMP_F_IP_ARG for trampolines that actually need it.

The BPF_TRAMP_F_IP_ARG flag adds 3 extra instructions to the
trampoline code and is needed only by programs that use the
bpf_get_func_ip helper, which is added in the following patch
and sets the call_get_func_ip bit.

This patch ensures that the BPF_TRAMP_F_IP_ARG flag is set only
for trampolines that contain programs with call_get_func_ip set.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/filter.h  |  3 ++-
 kernel/bpf/trampoline.c | 12 +++++++++---
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 472f97074da0..ba36989f711a 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -559,7 +559,8 @@ struct bpf_prog {
 				kprobe_override:1, /* Do we override a kprobe? */
 				has_callchain_buf:1, /* callchain buffer allocated? */
 				enforce_expected_attach_type:1, /* Enforce expected_attach_type checking at attach time */
-				call_get_stack:1; /* Do we call bpf_get_stack() or bpf_get_stackid() */
+				call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
+				call_get_func_ip:1; /* Do we call get_func_ip() */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 28a3630c48ee..b2535acfe9db 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -172,7 +172,7 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 }
 
 static struct bpf_tramp_progs *
-bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
+bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
 	const struct bpf_prog_aux *aux;
 	struct bpf_tramp_progs *tprogs;
@@ -189,8 +189,10 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
 		*total += tr->progs_cnt[kind];
 		progs = tprogs[kind].progs;
 
-		hlist_for_each_entry(aux, &tr->progs_hlist[kind], tramp_hlist)
+		hlist_for_each_entry(aux, &tr->progs_hlist[kind], tramp_hlist) {
+			*ip_arg |= aux->prog->call_get_func_ip;
 			*progs++ = aux->prog;
+		}
 	}
 	return tprogs;
 }
@@ -333,9 +335,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	struct bpf_tramp_image *im;
 	struct bpf_tramp_progs *tprogs;
 	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
+	bool ip_arg = false;
 	int err, total;
 
-	tprogs = bpf_trampoline_get_progs(tr, &total);
+	tprogs = bpf_trampoline_get_progs(tr, &total, &ip_arg);
 	if (IS_ERR(tprogs))
 		return PTR_ERR(tprogs);
 
@@ -357,6 +360,9 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
 
+	if (ip_arg)
+		flags |= BPF_TRAMP_F_IP_ARG;
+
 	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
 					  &tr->func.model, flags, tprogs,
 					  tr->func.addr);
-- 
2.31.1



* [PATCH bpf-next 3/5] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-06-29 19:29 [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
  2021-06-29 19:29 ` [PATCH bpf-next 1/5] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
  2021-06-29 19:29 ` [PATCH bpf-next 2/5] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
@ 2021-06-29 19:29 ` Jiri Olsa
  2021-06-29 19:29 ` [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2021-06-29 19:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu

Adding the bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING
programs, specifically for all trampoline attach types.

The trampoline stores the traced function's IP address at
(ctx - 8), so there's no reason to actually call the helper;
instead, the verifier fixes up the call instruction to return
the [ctx - 8] value directly (suggested by Alexei).
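
After the fixup, the helper call thus behaves as if it were (a
sketch in C terms; the real replacement is the single
BPF_LDX_MEM instruction visible in the diff below):

static u64 bpf_get_func_ip(void *ctx)
{
	/* IP stored by the trampoline just below the ctx arguments */
	return *(u64 *)((char *)ctx - 8);
}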

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/bpf.h       |  7 +++++
 kernel/bpf/verifier.c          | 53 ++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c       | 15 ++++++++++
 tools/include/uapi/linux/bpf.h |  7 +++++
 4 files changed, 82 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index bf9252c7381e..83e87ffdbb6e 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4780,6 +4780,12 @@ union bpf_attr {
  * 		Execute close syscall for given FD.
  * 	Return
  * 		A syscall result.
+ *
+ * u64 bpf_get_func_ip(void *ctx)
+ * 	Description
+ * 		Get address of the traced function (for tracing programs).
+ * 	Return
+ * 		Address of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4951,6 +4957,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(get_func_ip),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e04e33893cff..701ff7384fa7 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5149,6 +5149,11 @@ static bool allow_tail_call_in_subprogs(struct bpf_verifier_env *env)
 	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);
 }
 
+static bool allow_get_func_ip_tracing(struct bpf_verifier_env *env)
+{
+	return env->prog->jit_requested && IS_ENABLED(CONFIG_X86_64);
+}
+
 static int check_map_func_compatibility(struct bpf_verifier_env *env,
 					struct bpf_map *map, int func_id)
 {
@@ -5955,6 +5960,32 @@ static int check_bpf_snprintf_call(struct bpf_verifier_env *env,
 	return err;
 }
 
+static int has_get_func_ip(struct bpf_verifier_env *env)
+{
+	enum bpf_attach_type eatype = env->prog->expected_attach_type;
+	enum bpf_prog_type type = resolve_prog_type(env->prog);
+	int func_id = BPF_FUNC_get_func_ip;
+
+	if (type == BPF_PROG_TYPE_TRACING) {
+		if (eatype != BPF_TRACE_FENTRY && eatype != BPF_TRACE_FEXIT &&
+		    eatype != BPF_MODIFY_RETURN) {
+			verbose(env, "func %s#%d supported only for fentry/fexit/fmod_ret programs\n",
+				func_id_name(func_id), func_id);
+			return -ENOTSUPP;
+		}
+		if (!allow_get_func_ip_tracing(env)) {
+			verbose(env, "func %s#%d for tracing programs supported only for JITed x86_64\n",
+				func_id_name(func_id), func_id);
+			return -ENOTSUPP;
+		}
+		return 0;
+	}
+
+	verbose(env, "func %s#%d not supported for program type %d\n",
+		func_id_name(func_id), func_id, type);
+	return -ENOTSUPP;
+}
+
 static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			     int *insn_idx_p)
 {
@@ -6225,6 +6256,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
 		env->prog->call_get_stack = true;
 
+	if (func_id == BPF_FUNC_get_func_ip) {
+		if (has_get_func_ip(env))
+			return -ENOTSUPP;
+		env->prog->call_get_func_ip = true;
+	}
+
 	if (changes_data)
 		clear_all_pkt_pointers(env);
 	return 0;
@@ -12367,6 +12404,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog;
 	bool expect_blinding = bpf_jit_blinding_enabled(prog);
+	enum bpf_prog_type prog_type = resolve_prog_type(prog);
 	struct bpf_insn *insn = prog->insnsi;
 	const struct bpf_func_proto *fn;
 	const int insn_cnt = prog->len;
@@ -12700,6 +12738,21 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+		/* Implement bpf_get_func_ip inline. */
+		if (prog_type == BPF_PROG_TYPE_TRACING &&
+		    insn->imm == BPF_FUNC_get_func_ip) {
+			/* Load IP address from ctx - 8 */
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
+			if (!new_prog)
+				return -ENOMEM;
+
+			env->prog = prog = new_prog;
+			insn      = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 patch_call_imm:
 		fn = env->ops->get_func_proto(insn->imm, env->prog);
 		/* all functions that have prototype and verifier allowed
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 64bd2d84367f..9edd3b1a00ad 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -948,6 +948,19 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
 	.arg5_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
+{
+	/* Stub, the helper call is inlined in the program. */
+	return 0;
+}
+
+static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
+	.func		= bpf_get_func_ip_tracing,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1058,6 +1071,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_for_each_map_elem_proto;
 	case BPF_FUNC_snprintf:
 		return &bpf_snprintf_proto;
+	case BPF_FUNC_get_func_ip:
+		return &bpf_get_func_ip_proto_tracing;
 	default:
 		return NULL;
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index bf9252c7381e..83e87ffdbb6e 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4780,6 +4780,12 @@ union bpf_attr {
  * 		Execute close syscall for given FD.
  * 	Return
  * 		A syscall result.
+ *
+ * u64 bpf_get_func_ip(void *ctx)
+ * 	Description
+ * 		Get address of the traced function (for tracing programs).
+ * 	Return
+ * 		Address of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4951,6 +4957,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(get_func_ip),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.31.1



* [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-06-29 19:29 [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (2 preceding siblings ...)
  2021-06-29 19:29 ` [PATCH bpf-next 3/5] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
@ 2021-06-29 19:29 ` Jiri Olsa
  2021-06-30 17:47   ` Yonghong Song
  2021-06-29 19:29 ` [PATCH bpf-next 5/5] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
  2021-07-01 17:22 ` [RFC bpf-next 0/5] bpf, x86: Add " Alan Maguire
  5 siblings, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2021-06-29 19:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu

Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
so it's now possible to call bpf_get_func_ip from both kprobe and
kretprobe programs.

Taking the caller's address from 'struct kprobe::addr', which is
defined for both kprobe and kretprobe.
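
This matters mainly for kretprobe programs, where the IP in
pt_regs does not point back at the probed function. A minimal
usage sketch, mirroring the selftest in patch 5:

SEC("kretprobe/bpf_fentry_test4")
int BPF_KRETPROBE(test4)
{
	/* based on kprobe::addr, so it works on function exit too */
	__u64 addr = bpf_get_func_ip(ctx);

	return 0;
}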

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/bpf.h       |  2 +-
 kernel/bpf/verifier.c          |  2 ++
 kernel/trace/bpf_trace.c       | 14 ++++++++++++++
 kernel/trace/trace_kprobe.c    | 20 ++++++++++++++++++--
 kernel/trace/trace_probe.h     |  5 +++++
 tools/include/uapi/linux/bpf.h |  2 +-
 6 files changed, 41 insertions(+), 4 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 83e87ffdbb6e..4894f99a1993 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4783,7 +4783,7 @@ union bpf_attr {
  *
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
- * 		Get address of the traced function (for tracing programs).
+ * 		Get address of the traced function (for tracing and kprobe programs).
  * 	Return
  * 		Address of the traced function.
  */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 701ff7384fa7..b66e0a7104f8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
 			return -ENOTSUPP;
 		}
 		return 0;
+	} else if (type == BPF_PROG_TYPE_KPROBE) {
+		return 0;
 	}
 
 	verbose(env, "func %s#%d not supported for program type %d\n",
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 9edd3b1a00ad..1a5bddce9abd 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -961,6 +961,18 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
 	.arg1_type	= ARG_PTR_TO_CTX,
 };
 
+BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
+{
+	return trace_current_kprobe_addr();
+}
+
+static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
+	.func		= bpf_get_func_ip_kprobe,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1092,6 +1104,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_override_return:
 		return &bpf_override_return_proto;
 #endif
+	case BPF_FUNC_get_func_ip:
+		return &bpf_get_func_ip_proto_kprobe;
 	default:
 		return bpf_tracing_func_proto(func_id, prog);
 	}
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index ea6178cb5e33..b07d5888db14 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
 }
 
 #ifdef CONFIG_PERF_EVENTS
+/* Used by bpf get_func_ip helper */
+DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
+
+u64 trace_current_kprobe_addr(void)
+{
+	return *this_cpu_ptr(&current_kprobe_addr);
+}
+
+static void trace_current_kprobe_set(struct trace_kprobe *tk)
+{
+	__this_cpu_write(current_kprobe_addr, (u64) tk->rp.kp.addr);
+}
 
 /* Kprobe profile handler */
 static int
@@ -1585,6 +1597,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
 		unsigned long orig_ip = instruction_pointer(regs);
 		int ret;
 
+		trace_current_kprobe_set(tk);
 		ret = trace_call_bpf(call, regs);
 
 		/*
@@ -1631,8 +1644,11 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
 	int size, __size, dsize;
 	int rctx;
 
-	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
-		return;
+	if (bpf_prog_array_valid(call)) {
+		trace_current_kprobe_set(tk);
+		if (!trace_call_bpf(call, regs))
+			return;
+	}
 
 	head = this_cpu_ptr(call->perf_events);
 	if (hlist_empty(head))
diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
index 227d518e5ba5..19c979834916 100644
--- a/kernel/trace/trace_probe.h
+++ b/kernel/trace/trace_probe.h
@@ -199,6 +199,7 @@ DECLARE_BASIC_PRINT_TYPE_FUNC(symbol);
 #ifdef CONFIG_KPROBE_EVENTS
 bool trace_kprobe_on_func_entry(struct trace_event_call *call);
 bool trace_kprobe_error_injectable(struct trace_event_call *call);
+u64 trace_current_kprobe_addr(void);
 #else
 static inline bool trace_kprobe_on_func_entry(struct trace_event_call *call)
 {
@@ -209,6 +210,10 @@ static inline bool trace_kprobe_error_injectable(struct trace_event_call *call)
 {
 	return false;
 }
+static inline u64 trace_current_kprobe_addr(void)
+{
+	return 0;
+}
 #endif /* CONFIG_KPROBE_EVENTS */
 
 struct probe_arg {
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 83e87ffdbb6e..4894f99a1993 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4783,7 +4783,7 @@ union bpf_attr {
  *
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
- * 		Get address of the traced function (for tracing programs).
+ * 		Get address of the traced function (for tracing and kprobe programs).
  * 	Return
  * 		Address of the traced function.
  */
-- 
2.31.1



* [PATCH bpf-next 5/5] selftests/bpf: Add test for bpf_get_func_ip helper
  2021-06-29 19:29 [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (3 preceding siblings ...)
  2021-06-29 19:29 ` [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
@ 2021-06-29 19:29 ` Jiri Olsa
  2021-07-01 17:22 ` [RFC bpf-next 0/5] bpf, x86: Add " Alan Maguire
  5 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2021-06-29 19:29 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu

Adding a test for the bpf_get_func_ip helper, covering fentry,
fexit, kprobe, kretprobe and fmod_ret programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_ip_test.c         | 42 +++++++++++++
 .../selftests/bpf/progs/get_func_ip_test.c    | 62 +++++++++++++++++++
 2 files changed, 104 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
new file mode 100644
index 000000000000..06d34f566bbb
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -0,0 +1,42 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "get_func_ip_test.skel.h"
+
+void test_get_func_ip_test(void)
+{
+	struct get_func_ip_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd, i;
+	__u64 *result;
+
+	skel = get_func_ip_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open_and_load"))
+		goto cleanup;
+
+	err = get_func_ip_test__attach(skel);
+	if (!ASSERT_OK(err, "get_func_ip_test__attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+
+	ASSERT_OK(err, "test_run");
+
+	result = (__u64 *)skel->bss;
+	for (i = 0; i < sizeof(*skel->bss) / sizeof(__u64); i++) {
+		if (!ASSERT_EQ(result[i], 1, "fentry_result"))
+			break;
+	}
+
+	get_func_ip_test__detach(skel);
+
+cleanup:
+	get_func_ip_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
new file mode 100644
index 000000000000..8ca54390d2b1
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_modify_return_test __ksym;
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test1_result = (const void *) addr == &bpf_fentry_test1;
+	return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fexit/bpf_fentry_test2")
+int BPF_PROG(test2, int a)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test2_result = (const void *) addr == &bpf_fentry_test2;
+	return 0;
+}
+
+__u64 test3_result = 0;
+SEC("kprobe/bpf_fentry_test3")
+int test3(struct pt_regs *ctx)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test3_result = (const void *) addr == &bpf_fentry_test3;
+	return 0;
+}
+
+__u64 test4_result = 0;
+SEC("kretprobe/bpf_fentry_test4")
+int BPF_KRETPROBE(test4)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test4_result = (const void *) addr == &bpf_fentry_test4;
+	return 0;
+}
+
+__u64 test5_result = 0;
+SEC("fmod_ret/bpf_modify_return_test")
+int BPF_PROG(fmod_ret_test, int a, int *b, int ret)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test5_result = (const void *) addr == &bpf_modify_return_test;
+	return ret;
+}
-- 
2.31.1



* Re: [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-06-29 19:29 ` [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
@ 2021-06-30 17:47   ` Yonghong Song
  2021-06-30 23:58     ` Masami Hiramatsu
  2021-07-01  8:34     ` Jiri Olsa
  0 siblings, 2 replies; 15+ messages in thread
From: Yonghong Song @ 2021-06-30 17:47 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Masami Hiramatsu



On 6/29/21 12:29 PM, Jiri Olsa wrote:
> Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
> so it's now possible to call bpf_get_func_ip from both kprobe and
> kretprobe programs.
> 
> Taking the caller's address from 'struct kprobe::addr', which is
> defined for both kprobe and kretprobe.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>   include/uapi/linux/bpf.h       |  2 +-
>   kernel/bpf/verifier.c          |  2 ++
>   kernel/trace/bpf_trace.c       | 14 ++++++++++++++
>   kernel/trace/trace_kprobe.c    | 20 ++++++++++++++++++--
>   kernel/trace/trace_probe.h     |  5 +++++
>   tools/include/uapi/linux/bpf.h |  2 +-
>   6 files changed, 41 insertions(+), 4 deletions(-)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 83e87ffdbb6e..4894f99a1993 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -4783,7 +4783,7 @@ union bpf_attr {
>    *
>    * u64 bpf_get_func_ip(void *ctx)
>    * 	Description
> - * 		Get address of the traced function (for tracing programs).
> + * 		Get address of the traced function (for tracing and kprobe programs).
>    * 	Return
>    * 		Address of the traced function.
>    */
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 701ff7384fa7..b66e0a7104f8 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
>   			return -ENOTSUPP;
>   		}
>   		return 0;
> +	} else if (type == BPF_PROG_TYPE_KPROBE) {
> +		return 0;
>   	}
>   
>   	verbose(env, "func %s#%d not supported for program type %d\n",
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 9edd3b1a00ad..1a5bddce9abd 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -961,6 +961,18 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
>   	.arg1_type	= ARG_PTR_TO_CTX,
>   };
>   
> +BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
> +{
> +	return trace_current_kprobe_addr();
> +}
> +
> +static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
> +	.func		= bpf_get_func_ip_kprobe,
> +	.gpl_only	= true,
> +	.ret_type	= RET_INTEGER,
> +	.arg1_type	= ARG_PTR_TO_CTX,
> +};
> +
>   const struct bpf_func_proto *
>   bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>   {
> @@ -1092,6 +1104,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>   	case BPF_FUNC_override_return:
>   		return &bpf_override_return_proto;
>   #endif
> +	case BPF_FUNC_get_func_ip:
> +		return &bpf_get_func_ip_proto_kprobe;
>   	default:
>   		return bpf_tracing_func_proto(func_id, prog);
>   	}
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index ea6178cb5e33..b07d5888db14 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
>   }
>   
>   #ifdef CONFIG_PERF_EVENTS
> +/* Used by bpf get_func_ip helper */
> +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;

Didn't check other architectures. But this should work
for x86 where if nested kprobe happens, the second
kprobe will not call kprobe handlers.

This essentially is to provide an additional parameter to
bpf program. Andrii is developing a mechanism to
save arbitrary data in *current task_struct*, which
might be used here to save current_kprobe_addr, we can
save one per cpu variable.

> +
> +u64 trace_current_kprobe_addr(void)
> +{
> +	return *this_cpu_ptr(&current_kprobe_addr);
> +}
> +
> +static void trace_current_kprobe_set(struct trace_kprobe *tk)
> +{
> +	__this_cpu_write(current_kprobe_addr, (u64) tk->rp.kp.addr);
> +}
>   
>   /* Kprobe profile handler */
>   static int
> @@ -1585,6 +1597,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
>   		unsigned long orig_ip = instruction_pointer(regs);
>   		int ret;
>   
> +		trace_current_kprobe_set(tk);
>   		ret = trace_call_bpf(call, regs);
>   
>   		/*
> @@ -1631,8 +1644,11 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
>   	int size, __size, dsize;
>   	int rctx;
>   
> -	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> -		return;
> +	if (bpf_prog_array_valid(call)) {
> +		trace_current_kprobe_set(tk);
> +		if (!trace_call_bpf(call, regs))
> +			return;
> +	}
>   
>   	head = this_cpu_ptr(call->perf_events);
>   	if (hlist_empty(head))
[...]


* Re: [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-06-30 17:47   ` Yonghong Song
@ 2021-06-30 23:58     ` Masami Hiramatsu
  2021-07-01  1:45       ` Yonghong Song
  2021-07-01  8:38       ` Jiri Olsa
  2021-07-01  8:34     ` Jiri Olsa
  1 sibling, 2 replies; 15+ messages in thread
From: Masami Hiramatsu @ 2021-06-30 23:58 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	netdev, bpf, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh, Masami Hiramatsu

On Wed, 30 Jun 2021 10:47:01 -0700
Yonghong Song <yhs@fb.com> wrote:

> 
> 
> On 6/29/21 12:29 PM, Jiri Olsa wrote:
> > Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
> > so it's now possible to call bpf_get_func_ip from both kprobe and
> > kretprobe programs.
> > 
> > Taking the caller's address from 'struct kprobe::addr', which is
> > defined for both kprobe and kretprobe.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >   include/uapi/linux/bpf.h       |  2 +-
> >   kernel/bpf/verifier.c          |  2 ++
> >   kernel/trace/bpf_trace.c       | 14 ++++++++++++++
> >   kernel/trace/trace_kprobe.c    | 20 ++++++++++++++++++--
> >   kernel/trace/trace_probe.h     |  5 +++++
> >   tools/include/uapi/linux/bpf.h |  2 +-
> >   6 files changed, 41 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 83e87ffdbb6e..4894f99a1993 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -4783,7 +4783,7 @@ union bpf_attr {
> >    *
> >    * u64 bpf_get_func_ip(void *ctx)
> >    * 	Description
> > - * 		Get address of the traced function (for tracing programs).
> > + * 		Get address of the traced function (for tracing and kprobe programs).
> >    * 	Return
> >    * 		Address of the traced function.
> >    */
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 701ff7384fa7..b66e0a7104f8 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
> >   			return -ENOTSUPP;
> >   		}
> >   		return 0;
> > +	} else if (type == BPF_PROG_TYPE_KPROBE) {
> > +		return 0;
> >   	}
> >   
> >   	verbose(env, "func %s#%d not supported for program type %d\n",
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 9edd3b1a00ad..1a5bddce9abd 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -961,6 +961,18 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
> >   	.arg1_type	= ARG_PTR_TO_CTX,
> >   };
> >   
> > +BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
> > +{
> > +	return trace_current_kprobe_addr();
> > +}
> > +
> > +static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
> > +	.func		= bpf_get_func_ip_kprobe,
> > +	.gpl_only	= true,
> > +	.ret_type	= RET_INTEGER,
> > +	.arg1_type	= ARG_PTR_TO_CTX,
> > +};
> > +
> >   const struct bpf_func_proto *
> >   bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >   {
> > @@ -1092,6 +1104,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >   	case BPF_FUNC_override_return:
> >   		return &bpf_override_return_proto;
> >   #endif
> > +	case BPF_FUNC_get_func_ip:
> > +		return &bpf_get_func_ip_proto_kprobe;
> >   	default:
> >   		return bpf_tracing_func_proto(func_id, prog);
> >   	}
> > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > index ea6178cb5e33..b07d5888db14 100644
> > --- a/kernel/trace/trace_kprobe.c
> > +++ b/kernel/trace/trace_kprobe.c
> > @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
> >   }
> >   
> >   #ifdef CONFIG_PERF_EVENTS
> > +/* Used by bpf get_func_ip helper */
> > +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
> 
> Didn't check other architectures. But this should work
> for x86 where if nested kprobe happens, the second
> kprobe will not call kprobe handlers.

No problem, other architecture also does not call nested kprobes handlers.
However, you don't need this because you can use kprobe_running()
in kprobe context.

kp = kprobe_running();
if (kp)
	return kp->addr;

BTW, I'm not sure why don't you use instruction_pointer(regs)?

Thank you,

> 
> This essentially is to provide an additional parameter to
> bpf program. Andrii is developing a mechanism to
> save arbitrary data in *current task_struct*, which
> might be used here to save current_kprobe_addr, we can
> save one per cpu variable.
> 
> > +
> > +u64 trace_current_kprobe_addr(void)
> > +{
> > +	return *this_cpu_ptr(&current_kprobe_addr);
> > +}
> > +
> > +static void trace_current_kprobe_set(struct trace_kprobe *tk)
> > +{
> > +	__this_cpu_write(current_kprobe_addr, (u64) tk->rp.kp.addr);
> > +}
> >   
> >   /* Kprobe profile handler */
> >   static int
> > @@ -1585,6 +1597,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
> >   		unsigned long orig_ip = instruction_pointer(regs);
> >   		int ret;
> >   
> > +		trace_current_kprobe_set(tk);
> >   		ret = trace_call_bpf(call, regs);
> >   
> >   		/*
> > @@ -1631,8 +1644,11 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
> >   	int size, __size, dsize;
> >   	int rctx;
> >   
> > -	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> > -		return;
> > +	if (bpf_prog_array_valid(call)) {
> > +		trace_current_kprobe_set(tk);
> > +		if (!trace_call_bpf(call, regs))
> > +			return;
> > +	}
> >   
> >   	head = this_cpu_ptr(call->perf_events);
> >   	if (hlist_empty(head))
> [...]


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-06-30 23:58     ` Masami Hiramatsu
@ 2021-07-01  1:45       ` Yonghong Song
  2021-07-01  2:01         ` Masami Hiramatsu
  2021-07-01  8:38       ` Jiri Olsa
  1 sibling, 1 reply; 15+ messages in thread
From: Yonghong Song @ 2021-07-01  1:45 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	netdev, bpf, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh



On 6/30/21 4:58 PM, Masami Hiramatsu wrote:
> On Wed, 30 Jun 2021 10:47:01 -0700
> Yonghong Song <yhs@fb.com> wrote:
> 
>>
>>
>> On 6/29/21 12:29 PM, Jiri Olsa wrote:
>>> Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
>>> so it's now possible to call bpf_get_func_ip from both kprobe and
>>> kretprobe programs.
>>>
>>> Taking the caller's address from 'struct kprobe::addr', which is
>>> defined for both kprobe and kretprobe.
>>>
>>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
>>> ---
>>>    include/uapi/linux/bpf.h       |  2 +-
>>>    kernel/bpf/verifier.c          |  2 ++
>>>    kernel/trace/bpf_trace.c       | 14 ++++++++++++++
>>>    kernel/trace/trace_kprobe.c    | 20 ++++++++++++++++++--
>>>    kernel/trace/trace_probe.h     |  5 +++++
>>>    tools/include/uapi/linux/bpf.h |  2 +-
>>>    6 files changed, 41 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>>> index 83e87ffdbb6e..4894f99a1993 100644
>>> --- a/include/uapi/linux/bpf.h
>>> +++ b/include/uapi/linux/bpf.h
>>> @@ -4783,7 +4783,7 @@ union bpf_attr {
>>>     *
>>>     * u64 bpf_get_func_ip(void *ctx)
>>>     * 	Description
>>> - * 		Get address of the traced function (for tracing programs).
>>> + * 		Get address of the traced function (for tracing and kprobe programs).
>>>     * 	Return
>>>     * 		Address of the traced function.
>>>     */
>>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>>> index 701ff7384fa7..b66e0a7104f8 100644
>>> --- a/kernel/bpf/verifier.c
>>> +++ b/kernel/bpf/verifier.c
>>> @@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
>>>    			return -ENOTSUPP;
>>>    		}
>>>    		return 0;
>>> +	} else if (type == BPF_PROG_TYPE_KPROBE) {
>>> +		return 0;
>>>    	}
>>>    
>>>    	verbose(env, "func %s#%d not supported for program type %d\n",
>>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
>>> index 9edd3b1a00ad..1a5bddce9abd 100644
>>> --- a/kernel/trace/bpf_trace.c
>>> +++ b/kernel/trace/bpf_trace.c
>>> @@ -961,6 +961,18 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
>>>    	.arg1_type	= ARG_PTR_TO_CTX,
>>>    };
>>>    
>>> +BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
>>> +{
>>> +	return trace_current_kprobe_addr();
>>> +}
>>> +
>>> +static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
>>> +	.func		= bpf_get_func_ip_kprobe,
>>> +	.gpl_only	= true,
>>> +	.ret_type	= RET_INTEGER,
>>> +	.arg1_type	= ARG_PTR_TO_CTX,
>>> +};
>>> +
>>>    const struct bpf_func_proto *
>>>    bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>>>    {
>>> @@ -1092,6 +1104,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
>>>    	case BPF_FUNC_override_return:
>>>    		return &bpf_override_return_proto;
>>>    #endif
>>> +	case BPF_FUNC_get_func_ip:
>>> +		return &bpf_get_func_ip_proto_kprobe;
>>>    	default:
>>>    		return bpf_tracing_func_proto(func_id, prog);
>>>    	}
>>> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
>>> index ea6178cb5e33..b07d5888db14 100644
>>> --- a/kernel/trace/trace_kprobe.c
>>> +++ b/kernel/trace/trace_kprobe.c
>>> @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
>>>    }
>>>    
>>>    #ifdef CONFIG_PERF_EVENTS
>>> +/* Used by bpf get_func_ip helper */
>>> +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
>>
>> Didn't check other architectures. But this should work
>> for x86 where if nested kprobe happens, the second
>> kprobe will not call kprobe handlers.
> 
> No problem, other architecture also does not call nested kprobes handlers.
> However, you don't need this because you can use kprobe_running()
> in kprobe context.
> 
> kp = kprobe_running();
> if (kp)
> 	return kp->addr;
> 
> BTW, I'm not sure why don't you use instruction_pointer(regs)?

How about kretprobe? I guess kp->addr should still point to
function address but instruction_pointer(regs) does not?

> 
> Thank you,
> 
>>
>> This essentially is to provide an additional parameter to
>> bpf program. Andrii is developing a mechanism to
>> save arbitrary data in *current task_struct*, which
>> might be used here to save current_kprobe_addr, we can
>> save one per cpu variable.
>>
>>> +
>>> +u64 trace_current_kprobe_addr(void)
>>> +{
>>> +	return *this_cpu_ptr(&current_kprobe_addr);
>>> +}
>>> +
>>> +static void trace_current_kprobe_set(struct trace_kprobe *tk)
>>> +{
>>> +	__this_cpu_write(current_kprobe_addr, (u64) tk->rp.kp.addr);
>>> +}
>>>    
>>>    /* Kprobe profile handler */
>>>    static int
>>> @@ -1585,6 +1597,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
>>>    		unsigned long orig_ip = instruction_pointer(regs);
>>>    		int ret;
>>>    
>>> +		trace_current_kprobe_set(tk);
>>>    		ret = trace_call_bpf(call, regs);
>>>    
>>>    		/*
>>> @@ -1631,8 +1644,11 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
>>>    	int size, __size, dsize;
>>>    	int rctx;
>>>    
>>> -	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
>>> -		return;
>>> +	if (bpf_prog_array_valid(call)) {
>>> +		trace_current_kprobe_set(tk);
>>> +		if (!trace_call_bpf(call, regs))
>>> +			return;
>>> +	}
>>>    
>>>    	head = this_cpu_ptr(call->perf_events);
>>>    	if (hlist_empty(head))
>> [...]
> 
> 


* Re: [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-07-01  1:45       ` Yonghong Song
@ 2021-07-01  2:01         ` Masami Hiramatsu
  0 siblings, 0 replies; 15+ messages in thread
From: Masami Hiramatsu @ 2021-07-01  2:01 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	netdev, bpf, Martin KaFai Lau, Song Liu, John Fastabend,
	KP Singh

On Wed, 30 Jun 2021 18:45:46 -0700
Yonghong Song <yhs@fb.com> wrote:

> 
> 
> On 6/30/21 4:58 PM, Masami Hiramatsu wrote:
> > On Wed, 30 Jun 2021 10:47:01 -0700
> > Yonghong Song <yhs@fb.com> wrote:
> > 
> >>
> >>
> >> On 6/29/21 12:29 PM, Jiri Olsa wrote:
> >>> Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
> >>> so it's now possible to call bpf_get_func_ip from both kprobe and
> >>> kretprobe programs.
> >>>
> >>> Taking the caller's address from 'struct kprobe::addr', which is
> >>> defined for both kprobe and kretprobe.
> >>>
> >>> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> >>> ---
> >>>    include/uapi/linux/bpf.h       |  2 +-
> >>>    kernel/bpf/verifier.c          |  2 ++
> >>>    kernel/trace/bpf_trace.c       | 14 ++++++++++++++
> >>>    kernel/trace/trace_kprobe.c    | 20 ++++++++++++++++++--
> >>>    kernel/trace/trace_probe.h     |  5 +++++
> >>>    tools/include/uapi/linux/bpf.h |  2 +-
> >>>    6 files changed, 41 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> >>> index 83e87ffdbb6e..4894f99a1993 100644
> >>> --- a/include/uapi/linux/bpf.h
> >>> +++ b/include/uapi/linux/bpf.h
> >>> @@ -4783,7 +4783,7 @@ union bpf_attr {
> >>>     *
> >>>     * u64 bpf_get_func_ip(void *ctx)
> >>>     * 	Description
> >>> - * 		Get address of the traced function (for tracing programs).
> >>> + * 		Get address of the traced function (for tracing and kprobe programs).
> >>>     * 	Return
> >>>     * 		Address of the traced function.
> >>>     */
> >>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> >>> index 701ff7384fa7..b66e0a7104f8 100644
> >>> --- a/kernel/bpf/verifier.c
> >>> +++ b/kernel/bpf/verifier.c
> >>> @@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
> >>>    			return -ENOTSUPP;
> >>>    		}
> >>>    		return 0;
> >>> +	} else if (type == BPF_PROG_TYPE_KPROBE) {
> >>> +		return 0;
> >>>    	}
> >>>    
> >>>    	verbose(env, "func %s#%d not supported for program type %d\n",
> >>> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> >>> index 9edd3b1a00ad..1a5bddce9abd 100644
> >>> --- a/kernel/trace/bpf_trace.c
> >>> +++ b/kernel/trace/bpf_trace.c
> >>> @@ -961,6 +961,18 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
> >>>    	.arg1_type	= ARG_PTR_TO_CTX,
> >>>    };
> >>>    
> >>> +BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
> >>> +{
> >>> +	return trace_current_kprobe_addr();
> >>> +}
> >>> +
> >>> +static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
> >>> +	.func		= bpf_get_func_ip_kprobe,
> >>> +	.gpl_only	= true,
> >>> +	.ret_type	= RET_INTEGER,
> >>> +	.arg1_type	= ARG_PTR_TO_CTX,
> >>> +};
> >>> +
> >>>    const struct bpf_func_proto *
> >>>    bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >>>    {
> >>> @@ -1092,6 +1104,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >>>    	case BPF_FUNC_override_return:
> >>>    		return &bpf_override_return_proto;
> >>>    #endif
> >>> +	case BPF_FUNC_get_func_ip:
> >>> +		return &bpf_get_func_ip_proto_kprobe;
> >>>    	default:
> >>>    		return bpf_tracing_func_proto(func_id, prog);
> >>>    	}
> >>> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> >>> index ea6178cb5e33..b07d5888db14 100644
> >>> --- a/kernel/trace/trace_kprobe.c
> >>> +++ b/kernel/trace/trace_kprobe.c
> >>> @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
> >>>    }
> >>>    
> >>>    #ifdef CONFIG_PERF_EVENTS
> >>> +/* Used by bpf get_func_ip helper */
> >>> +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
> >>
> >> Didn't check other architectures. But this should work
> >> for x86 where if nested kprobe happens, the second
> >> kprobe will not call kprobe handlers.
> > 
> > No problem, other architecture also does not call nested kprobes handlers.
> > However, you don't need this because you can use kprobe_running()
> > in kprobe context.
> > 
> > kp = kprobe_running();
> > if (kp)
> > 	return kp->addr;
> > 
> > BTW, I'm not sure why don't you use instruction_pointer(regs)?
> 
> How about kretprobe? I guess kp->addr should still point to
> function address but instruction_pointer(regs) does not?

It is now under review (waiting for merge), please check this series.

https://lore.kernel.org/bpf/162400000592.506599.4695807810528866713.stgit@devnote2/

Then you can use instruction_pointer(regs) in kretprobes too.

Thank you,

> 
> > 
> > Thank you,
> > 
> >>
> >> This essentially is to provide an additional parameter to
> >> bpf program. Andrii is developing a mechanism to
> >> save arbitrary data in *current task_struct*, which
> >> might be used here to save current_kprobe_addr, we can
> >> save one per cpu variable.
> >>
> >>> +
> >>> +u64 trace_current_kprobe_addr(void)
> >>> +{
> >>> +	return *this_cpu_ptr(&current_kprobe_addr);
> >>> +}
> >>> +
> >>> +static void trace_current_kprobe_set(struct trace_kprobe *tk)
> >>> +{
> >>> +	__this_cpu_write(current_kprobe_addr, (u64) tk->rp.kp.addr);
> >>> +}
> >>>    
> >>>    /* Kprobe profile handler */
> >>>    static int
> >>> @@ -1585,6 +1597,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
> >>>    		unsigned long orig_ip = instruction_pointer(regs);
> >>>    		int ret;
> >>>    
> >>> +		trace_current_kprobe_set(tk);
> >>>    		ret = trace_call_bpf(call, regs);
> >>>    
> >>>    		/*
> >>> @@ -1631,8 +1644,11 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
> >>>    	int size, __size, dsize;
> >>>    	int rctx;
> >>>    
> >>> -	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> >>> -		return;
> >>> +	if (bpf_prog_array_valid(call)) {
> >>> +		trace_current_kprobe_set(tk);
> >>> +		if (!trace_call_bpf(call, regs))
> >>> +			return;
> >>> +	}
> >>>    
> >>>    	head = this_cpu_ptr(call->perf_events);
> >>>    	if (hlist_empty(head))
> >> [...]
> > 
> > 


-- 
Masami Hiramatsu <mhiramat@kernel.org>


* Re: [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-06-30 17:47   ` Yonghong Song
  2021-06-30 23:58     ` Masami Hiramatsu
@ 2021-07-01  8:34     ` Jiri Olsa
  1 sibling, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2021-07-01  8:34 UTC (permalink / raw)
  To: Yonghong Song
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, netdev,
	bpf, Martin KaFai Lau, Song Liu, John Fastabend, KP Singh,
	Masami Hiramatsu

On Wed, Jun 30, 2021 at 10:47:01AM -0700, Yonghong Song wrote:
> 
> 
> On 6/29/21 12:29 PM, Jiri Olsa wrote:
> > Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
> > so it's now possible to call bpf_get_func_ip from both kprobe and
> > kretprobe programs.
> > 
> > Taking the caller's address from 'struct kprobe::addr', which is
> > defined for both kprobe and kretprobe.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >   include/uapi/linux/bpf.h       |  2 +-
> >   kernel/bpf/verifier.c          |  2 ++
> >   kernel/trace/bpf_trace.c       | 14 ++++++++++++++
> >   kernel/trace/trace_kprobe.c    | 20 ++++++++++++++++++--
> >   kernel/trace/trace_probe.h     |  5 +++++
> >   tools/include/uapi/linux/bpf.h |  2 +-
> >   6 files changed, 41 insertions(+), 4 deletions(-)
> > 
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index 83e87ffdbb6e..4894f99a1993 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -4783,7 +4783,7 @@ union bpf_attr {
> >    *
> >    * u64 bpf_get_func_ip(void *ctx)
> >    * 	Description
> > - * 		Get address of the traced function (for tracing programs).
> > + * 		Get address of the traced function (for tracing and kprobe programs).
> >    * 	Return
> >    * 		Address of the traced function.
> >    */
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 701ff7384fa7..b66e0a7104f8 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -5979,6 +5979,8 @@ static int has_get_func_ip(struct bpf_verifier_env *env)
> >   			return -ENOTSUPP;
> >   		}
> >   		return 0;
> > +	} else if (type == BPF_PROG_TYPE_KPROBE) {
> > +		return 0;
> >   	}
> >   	verbose(env, "func %s#%d not supported for program type %d\n",
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 9edd3b1a00ad..1a5bddce9abd 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -961,6 +961,18 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
> >   	.arg1_type	= ARG_PTR_TO_CTX,
> >   };
> > +BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
> > +{
> > +	return trace_current_kprobe_addr();
> > +}
> > +
> > +static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
> > +	.func		= bpf_get_func_ip_kprobe,
> > +	.gpl_only	= true,
> > +	.ret_type	= RET_INTEGER,
> > +	.arg1_type	= ARG_PTR_TO_CTX,
> > +};
> > +
> >   const struct bpf_func_proto *
> >   bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >   {
> > @@ -1092,6 +1104,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >   	case BPF_FUNC_override_return:
> >   		return &bpf_override_return_proto;
> >   #endif
> > +	case BPF_FUNC_get_func_ip:
> > +		return &bpf_get_func_ip_proto_kprobe;
> >   	default:
> >   		return bpf_tracing_func_proto(func_id, prog);
> >   	}
> > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > index ea6178cb5e33..b07d5888db14 100644
> > --- a/kernel/trace/trace_kprobe.c
> > +++ b/kernel/trace/trace_kprobe.c
> > @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
> >   }
> >   #ifdef CONFIG_PERF_EVENTS
> > +/* Used by bpf get_func_ip helper */
> > +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
> 
> Didn't check other architectures. But this should work
> for x86 where if nested kprobe happens, the second
> kprobe will not call kprobe handlers.
> 
> This essentially is to provide an additional parameter to
> bpf program. Andrii is developing a mechanism to
> save arbitrary data in *current task_struct*, which
> might be used here to save current_kprobe_addr, we can
> save one per cpu variable.

ok, will check.. was there a post already?

thanks,
jirka

> 
> > +
> > +u64 trace_current_kprobe_addr(void)
> > +{
> > +	return *this_cpu_ptr(&current_kprobe_addr);
> > +}
> > +
> > +static void trace_current_kprobe_set(struct trace_kprobe *tk)
> > +{
> > +	__this_cpu_write(current_kprobe_addr, (u64) tk->rp.kp.addr);
> > +}
> >   /* Kprobe profile handler */
> >   static int
> > @@ -1585,6 +1597,7 @@ kprobe_perf_func(struct trace_kprobe *tk, struct pt_regs *regs)
> >   		unsigned long orig_ip = instruction_pointer(regs);
> >   		int ret;
> > +		trace_current_kprobe_set(tk);
> >   		ret = trace_call_bpf(call, regs);
> >   		/*
> > @@ -1631,8 +1644,11 @@ kretprobe_perf_func(struct trace_kprobe *tk, struct kretprobe_instance *ri,
> >   	int size, __size, dsize;
> >   	int rctx;
> > -	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
> > -		return;
> > +	if (bpf_prog_array_valid(call)) {
> > +		trace_current_kprobe_set(tk);
> > +		if (!trace_call_bpf(call, regs))
> > +			return;
> > +	}
> >   	head = this_cpu_ptr(call->perf_events);
> >   	if (hlist_empty(head))
> [...]
> 



* Re: [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-06-30 23:58     ` Masami Hiramatsu
  2021-07-01  1:45       ` Yonghong Song
@ 2021-07-01  8:38       ` Jiri Olsa
  2021-07-01 13:10         ` Masami Hiramatsu
  1 sibling, 1 reply; 15+ messages in thread
From: Jiri Olsa @ 2021-07-01  8:38 UTC (permalink / raw)
  To: Masami Hiramatsu
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, Martin KaFai Lau, Song Liu,
	John Fastabend, KP Singh

On Thu, Jul 01, 2021 at 08:58:54AM +0900, Masami Hiramatsu wrote:

SNIP

> > >   		return &bpf_override_return_proto;
> > >   #endif
> > > +	case BPF_FUNC_get_func_ip:
> > > +		return &bpf_get_func_ip_proto_kprobe;
> > >   	default:
> > >   		return bpf_tracing_func_proto(func_id, prog);
> > >   	}
> > > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > > index ea6178cb5e33..b07d5888db14 100644
> > > --- a/kernel/trace/trace_kprobe.c
> > > +++ b/kernel/trace/trace_kprobe.c
> > > @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
> > >   }
> > >   
> > >   #ifdef CONFIG_PERF_EVENTS
> > > +/* Used by bpf get_func_ip helper */
> > > +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
> > 
> > I didn't check other architectures, but this should work
> > for x86, where if a nested kprobe happens, the second
> > kprobe will not call the kprobe handlers.
> 
> No problem, the other architectures also do not call nested kprobe handlers.
> However, you don't need this, because you can use kprobe_running()
> in kprobe context.
> 
> kp = kprobe_running();
> if (kp)
> 	return kp->addr;

great, that's easier
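
a minimal sketch of that version (untested, assuming kprobe_running()
is usable from the bpf kprobe program context):

BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
{
	struct kprobe *kp = kprobe_running();

	/* kprobe_running() returns NULL outside of kprobe context */
	return kp ? (u64) kp->addr : 0;
}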

> 
> BTW, I'm not sure why you don't use instruction_pointer(regs)?

I tried that, but it returns the function address + 1,
and I thought that could be different on each arch,
so we'd need arch-specific code to deal with it

thanks,
jirka


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-07-01  8:38       ` Jiri Olsa
@ 2021-07-01 13:10         ` Masami Hiramatsu
  0 siblings, 0 replies; 15+ messages in thread
From: Masami Hiramatsu @ 2021-07-01 13:10 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Yonghong Song, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, Martin KaFai Lau, Song Liu,
	John Fastabend, KP Singh

On Thu, 1 Jul 2021 10:38:14 +0200
Jiri Olsa <jolsa@redhat.com> wrote:

> On Thu, Jul 01, 2021 at 08:58:54AM +0900, Masami Hiramatsu wrote:
> 
> SNIP
> 
> > > >   		return &bpf_override_return_proto;
> > > >   #endif
> > > > +	case BPF_FUNC_get_func_ip:
> > > > +		return &bpf_get_func_ip_proto_kprobe;
> > > >   	default:
> > > >   		return bpf_tracing_func_proto(func_id, prog);
> > > >   	}
> > > > diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> > > > index ea6178cb5e33..b07d5888db14 100644
> > > > --- a/kernel/trace/trace_kprobe.c
> > > > +++ b/kernel/trace/trace_kprobe.c
> > > > @@ -1570,6 +1570,18 @@ static int kretprobe_event_define_fields(struct trace_event_call *event_call)
> > > >   }
> > > >   
> > > >   #ifdef CONFIG_PERF_EVENTS
> > > > +/* Used by bpf get_func_ip helper */
> > > > +DEFINE_PER_CPU(u64, current_kprobe_addr) = 0;
> > > 
> > > I didn't check other architectures, but this should work
> > > for x86, where if a nested kprobe happens, the second
> > > kprobe will not call the kprobe handlers.
> > 
> > No problem, the other architectures also do not call nested kprobe handlers.
> > However, you don't need this, because you can use kprobe_running()
> > in kprobe context.
> > 
> > kp = kprobe_running();
> > if (kp)
> > 	return kp->addr;
> 
> great, that's easier
> 
> > 
> > BTW, I'm not sure why you don't use instruction_pointer(regs)?
> 
> I tried that, but it returns the function address + 1,
> and I thought that could be different on each arch,
> so we'd need arch-specific code to deal with it

Oh, I got it. Yes, since it emulates the int3 interrupt, the
regs->ip must be kp->addr + 1 on x86. And indeed, this is
arch-dependent.
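
Just to illustrate, if instruction_pointer() was used instead, each
arch would need its own fixup; a hypothetical x86-only sketch (the
helper name is mine, this is not a proposal):

static u64 kprobe_func_ip(struct pt_regs *regs)
{
	/* the int3 emulation leaves regs->ip at kp->addr + 1 on x86 */
	return instruction_pointer(regs) - 1;
}

So kprobe_running()->addr is the arch-neutral way to get the address.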

Thank you,

> 
> thanks,
> jirka
> 


-- 
Masami Hiramatsu <mhiramat@kernel.org>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper
  2021-06-29 19:29 [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (4 preceding siblings ...)
  2021-06-29 19:29 ` [PATCH bpf-next 5/5] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
@ 2021-07-01 17:22 ` Alan Maguire
  2021-07-02  8:16   ` Jiri Olsa
  5 siblings, 1 reply; 15+ messages in thread
From: Alan Maguire @ 2021-07-01 17:22 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, netdev,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu

On Tue, 29 Jun 2021, Jiri Olsa wrote:

> hi,
> adding bpf_get_func_ip helper that returns IP address of the
> caller function for trampoline and krobe programs.
> 
> There're 2 specific implementation of the bpf_get_func_ip
> helper, one for trampoline progs and one for kprobe/kretprobe
> progs.
> 
> The trampoline helper call is replaced/inlined by verifier
> with simple move instruction. The kprobe/kretprobe is actual
> helper call that returns prepared caller address.
> 
> The trampoline extra 3 instructions for storing IP address
> is now optional, which I'm not completely sure is necessary,
> so I plan to do some benchmarks, if it's noticeable, hence
> the RFC. I'm also not completely sure about the kprobe/kretprobe
> implementation.
> 
> Also available at:
>   https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
>   bpf/get_func_ip
> 
> thanks,
> jirka
> 
>

This is great, Jiri! Feel free to add for the series:

Tested-by: Alan Maguire <alan.maguire@oracle.com>

BTW I also verified that if we extend bpf_program__attach_kprobe() to
support the function+offset format in the func_name argument for kprobes, 
the following test will pass too:

__u64 test5_result = 0;
SEC("kprobe/bpf_fentry_test5+0x6")
int test5(struct pt_regs *ctx)
{
        __u64 addr = bpf_get_func_ip(ctx);

        test5_result = (const void *) addr == (&bpf_fentry_test5 + 0x6);
        return 0;
}
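
One way to do the parsing side of that, sketched only for
illustration (the helper name and error handling are made up here,
not the actual libbpf change):

static int parse_kprobe_name(const char *name, char *func /* 128 bytes */,
			     long *offset)
{
	*offset = 0;
	/* accepts "func" as well as "func+0xOFF"; %li handles the 0x prefix */
	if (sscanf(name, "%127[^+]+%li", func, offset) < 1)
		return -EINVAL;
	return 0;
}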

Thanks!

Alan
 
> ---
> Jiri Olsa (5):
>       bpf, x86: Store caller's ip in trampoline stack
>       bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
>       bpf: Add bpf_get_func_ip helper for tracing programs
>       bpf: Add bpf_get_func_ip helper for kprobe programs
>       selftests/bpf: Add test for bpf_get_func_ip helper
> 
>  arch/x86/net/bpf_jit_comp.c                               | 19 +++++++++++++++++++
>  include/linux/bpf.h                                       |  5 +++++
>  include/linux/filter.h                                    |  3 ++-
>  include/uapi/linux/bpf.h                                  |  7 +++++++
>  kernel/bpf/trampoline.c                                   | 12 +++++++++---
>  kernel/bpf/verifier.c                                     | 55 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  kernel/trace/bpf_trace.c                                  | 29 +++++++++++++++++++++++++++++
>  kernel/trace/trace_kprobe.c                               | 20 ++++++++++++++++++--
>  kernel/trace/trace_probe.h                                |  5 +++++
>  tools/include/uapi/linux/bpf.h                            |  7 +++++++
>  tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 42 ++++++++++++++++++++++++++++++++++++++++++
>  tools/testing/selftests/bpf/progs/get_func_ip_test.c      | 62 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  12 files changed, 260 insertions(+), 6 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
>  create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c
> 
> 

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper
  2021-07-01 17:22 ` [RFC bpf-next 0/5] bpf, x86: Add " Alan Maguire
@ 2021-07-02  8:16   ` Jiri Olsa
  0 siblings, 0 replies; 15+ messages in thread
From: Jiri Olsa @ 2021-07-02  8:16 UTC (permalink / raw)
  To: Alan Maguire
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, netdev,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu

On Thu, Jul 01, 2021 at 06:22:45PM +0100, Alan Maguire wrote:
> On Tue, 29 Jun 2021, Jiri Olsa wrote:
> 
> > hi,
> > adding bpf_get_func_ip helper that returns IP address of the
> > caller function for trampoline and krobe programs.
> > 
> > There're 2 specific implementation of the bpf_get_func_ip
> > helper, one for trampoline progs and one for kprobe/kretprobe
> > progs.
> > 
> > The trampoline helper call is replaced/inlined by verifier
> > with simple move instruction. The kprobe/kretprobe is actual
> > helper call that returns prepared caller address.
> > 
> > The trampoline extra 3 instructions for storing IP address
> > is now optional, which I'm not completely sure is necessary,
> > so I plan to do some benchmarks, if it's noticeable, hence
> > the RFC. I'm also not completely sure about the kprobe/kretprobe
> > implementation.
> > 
> > Also available at:
> >   https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
> >   bpf/get_func_ip
> > 
> > thanks,
> > jirka
> > 
> >
> 
> This is great, Jiri! Feel free to add for the series:
> 
> Tested-by: Alan Maguire <alan.maguire@oracle.com>

great, thanks for testing

> 
> BTW I also verified that if we extend bpf_program__attach_kprobe() to
> support the function+offset format in the func_name argument for kprobes, 
> the following test will pass too:
> 
> __u64 test5_result = 0;
> SEC("kprobe/bpf_fentry_test5+0x6")
> int test5(struct pt_regs *ctx)
> {
>         __u64 addr = bpf_get_func_ip(ctx);
> 
>         test5_result = (const void *) addr == (&bpf_fentry_test5 + 0x6);
>         return 0;
> }

right, I did not think of this test, I'll add it

thanks,
jirka


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-07-02  8:16 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-29 19:29 [RFC bpf-next 0/5] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
2021-06-29 19:29 ` [PATCH bpf-next 1/5] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
2021-06-29 19:29 ` [PATCH bpf-next 2/5] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
2021-06-29 19:29 ` [PATCH bpf-next 3/5] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
2021-06-29 19:29 ` [PATCH bpf-next 4/5] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
2021-06-30 17:47   ` Yonghong Song
2021-06-30 23:58     ` Masami Hiramatsu
2021-07-01  1:45       ` Yonghong Song
2021-07-01  2:01         ` Masami Hiramatsu
2021-07-01  8:38       ` Jiri Olsa
2021-07-01 13:10         ` Masami Hiramatsu
2021-07-01  8:34     ` Jiri Olsa
2021-06-29 19:29 ` [PATCH bpf-next 5/5] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
2021-07-01 17:22 ` [RFC bpf-next 0/5] bpf, x86: Add " Alan Maguire
2021-07-02  8:16   ` Jiri Olsa
