netdev.vger.kernel.org archive mirror
* [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper
@ 2021-07-14  9:43 Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 1/8] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
                   ` (7 more replies)
  0 siblings, 8 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

hi,
adding a bpf_get_func_ip helper that returns the IP address of the
caller function for trampoline and kprobe programs.

There are two specific implementations of the bpf_get_func_ip
helper: one for trampoline progs and one for kprobe/kretprobe
progs.

The trampoline helper call is replaced/inlined by the verifier
with a single load instruction. The kprobe/kretprobe variant is an
actual helper call that returns the prepared caller address.
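
From a BPF program's point of view the call looks the same in both
cases. A minimal usage sketch (illustrative only: the attach target is
just an example and the helper declaration assumes headers generated
against a kernel with this series; the real tests are in patch 5):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  __u64 traced_ip = 0;

  /* trampoline (fentry) flavor, inlined by the verifier */
  SEC("fentry/do_unlinkat")
  int BPF_PROG(on_entry)
  {
          traced_ip = bpf_get_func_ip(ctx); /* address of do_unlinkat */
          return 0;
  }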

Also available at:
  https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git
  bpf/get_func_ip

v4 changes:
  - dropped jit/x86 check for get_func_ip tracing check [Alexei]
  - added code to bpf_get_func_ip_tracing [Alexei]
    and tested that it works without inlining [Alexei]
  - changed has_get_func_ip to check_get_func_ip [Andrii]
  - replaced test assert loop with explicit asserts [Andrii]
  - added bpf_program__attach_kprobe_opts function
    and use it for offset setup [Andrii]
  - used bpf_program__set_autoload(false) for test6 [Andrii]
  - added Masami's ack

v3 changes:
  - resend with Masami in cc and v3 in each patch subject

v2 changes:
  - use kprobe_running to get kprobe instead of cpu var [Masami]
  - added support to add kprobe on function+offset
    and test for that [Alan]

thanks,
jirka


---
Alan Maguire (1):
      libbpf: Allow specification of "kprobe/function+offset"

Jiri Olsa (7):
      bpf, x86: Store caller's ip in trampoline stack
      bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
      bpf: Add bpf_get_func_ip helper for tracing programs
      bpf: Add bpf_get_func_ip helper for kprobe programs
      selftests/bpf: Add test for bpf_get_func_ip helper
      libbpf: Add bpf_program__attach_kprobe_opts function
      selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe

 arch/x86/net/bpf_jit_comp.c                               | 19 +++++++++++++++++++
 include/linux/bpf.h                                       |  5 +++++
 include/linux/filter.h                                    |  3 ++-
 include/uapi/linux/bpf.h                                  |  7 +++++++
 kernel/bpf/trampoline.c                                   | 12 +++++++++---
 kernel/bpf/verifier.c                                     | 45 +++++++++++++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c                                  | 31 +++++++++++++++++++++++++++++++
 tools/include/uapi/linux/bpf.h                            |  7 +++++++
 tools/lib/bpf/libbpf.c                                    | 56 ++++++++++++++++++++++++++++++++++++++++++++++----------
 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/get_func_ip_test.c      | 73 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 11 files changed, 297 insertions(+), 14 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c


* [PATCHv4 bpf-next 1/8] bpf, x86: Store caller's ip in trampoline stack
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
@ 2021-07-14  9:43 ` Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 2/8] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Storing the caller's ip on the trampoline's stack. Trampoline programs
can reach the IP at the (ctx - 8) address, so there's no change in the
program's arguments interface.

The IP address is taken from [fp + 8], which is the return address
from the initial 'call fentry' call to the trampoline.
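
For orientation, a rough sketch of the resulting stack layout
(x86_64, illustrative only; exact offsets depend on the flags and the
number of saved arguments):

  /*
   * [rbp + 8]  return address of the patched 'call trampoline' site,
   *            i.e. the traced function's IP + X86_PATCH_SIZE
   * [rbp + 0]  saved rbp
   * [rbp - 8]  return value slot (only with BPF_TRAMP_F_CALL_ORIG)
   *  ...       saved function arguments
   * [ctx + 0]  first saved argument, the ctx pointer passed to programs
   * [ctx - 8]  IP of the traced function (the new BPF_TRAMP_F_IP_ARG slot)
   */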

This IP address will be returned via the bpf_get_func_ip helper,
which is added in the following patches.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 19 +++++++++++++++++++
 include/linux/bpf.h         |  5 +++++
 2 files changed, 24 insertions(+)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index e835164189f1..c320b3ce7b58 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1951,6 +1951,9 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	if (flags & BPF_TRAMP_F_CALL_ORIG)
 		stack_size += 8; /* room for return value of orig_call */
 
+	if (flags & BPF_TRAMP_F_IP_ARG)
+		stack_size += 8; /* room for IP address argument */
+
 	if (flags & BPF_TRAMP_F_SKIP_FRAME)
 		/* skip patched call instruction and point orig_call to actual
 		 * body of the kernel function.
@@ -1964,6 +1967,22 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
 	EMIT1(0x53);		 /* push rbx */
 
+	if (flags & BPF_TRAMP_F_IP_ARG) {
+		/* Store IP address of the traced function:
+		 * mov rax, QWORD PTR [rbp + 8]
+		 * sub rax, X86_PATCH_SIZE
+		 * mov QWORD PTR [rbp - stack_size], rax
+		 */
+		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
+		EMIT4(0x48, 0x83, 0xe8, X86_PATCH_SIZE);
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -stack_size);
+
+		/* Continue with stack_size for regs storage, stack will
+		 * be correctly restored with 'leave' instruction.
+		 */
+		stack_size -= 8;
+	}
+
 	save_regs(m, &prog, nr_args, stack_size);
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 4afbff308ca3..1ebb7690af91 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -554,6 +554,11 @@ struct btf_func_model {
  */
 #define BPF_TRAMP_F_SKIP_FRAME		BIT(2)
 
+/* Store IP address of the caller on the trampoline stack,
+ * so it's available for trampoline's programs.
+ */
+#define BPF_TRAMP_F_IP_ARG		BIT(3)
+
 /* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
  * bytes on x86.  Pick a number to fit into BPF_IMAGE_SIZE / 2
  */
-- 
2.31.1


* [PATCHv4 bpf-next 2/8] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 1/8] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
@ 2021-07-14  9:43 ` Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 3/8] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Enabling BPF_TRAMP_F_IP_ARG for trampolines that actually need it.

The BPF_TRAMP_F_IP_ARG flag adds 3 extra instructions to the trampoline
code and is needed only by programs that use the bpf_get_func_ip helper,
which is added in the following patch and sets the call_get_func_ip bit.

This patch ensures that the BPF_TRAMP_F_IP_ARG flag is used only for
trampolines that have programs with call_get_func_ip set.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/linux/filter.h  |  3 ++-
 kernel/bpf/trampoline.c | 12 +++++++++---
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 472f97074da0..ba36989f711a 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -559,7 +559,8 @@ struct bpf_prog {
 				kprobe_override:1, /* Do we override a kprobe? */
 				has_callchain_buf:1, /* callchain buffer allocated? */
 				enforce_expected_attach_type:1, /* Enforce expected_attach_type checking at attach time */
-				call_get_stack:1; /* Do we call bpf_get_stack() or bpf_get_stackid() */
+				call_get_stack:1, /* Do we call bpf_get_stack() or bpf_get_stackid() */
+				call_get_func_ip:1; /* Do we call get_func_ip() */
 	enum bpf_prog_type	type;		/* Type of BPF program */
 	enum bpf_attach_type	expected_attach_type; /* For some prog types */
 	u32			len;		/* Number of filter blocks */
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 28a3630c48ee..b2535acfe9db 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -172,7 +172,7 @@ static int register_fentry(struct bpf_trampoline *tr, void *new_addr)
 }
 
 static struct bpf_tramp_progs *
-bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
+bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total, bool *ip_arg)
 {
 	const struct bpf_prog_aux *aux;
 	struct bpf_tramp_progs *tprogs;
@@ -189,8 +189,10 @@ bpf_trampoline_get_progs(const struct bpf_trampoline *tr, int *total)
 		*total += tr->progs_cnt[kind];
 		progs = tprogs[kind].progs;
 
-		hlist_for_each_entry(aux, &tr->progs_hlist[kind], tramp_hlist)
+		hlist_for_each_entry(aux, &tr->progs_hlist[kind], tramp_hlist) {
+			*ip_arg |= aux->prog->call_get_func_ip;
 			*progs++ = aux->prog;
+		}
 	}
 	return tprogs;
 }
@@ -333,9 +335,10 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	struct bpf_tramp_image *im;
 	struct bpf_tramp_progs *tprogs;
 	u32 flags = BPF_TRAMP_F_RESTORE_REGS;
+	bool ip_arg = false;
 	int err, total;
 
-	tprogs = bpf_trampoline_get_progs(tr, &total);
+	tprogs = bpf_trampoline_get_progs(tr, &total, &ip_arg);
 	if (IS_ERR(tprogs))
 		return PTR_ERR(tprogs);
 
@@ -357,6 +360,9 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
 	    tprogs[BPF_TRAMP_MODIFY_RETURN].nr_progs)
 		flags = BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_SKIP_FRAME;
 
+	if (ip_arg)
+		flags |= BPF_TRAMP_F_IP_ARG;
+
 	err = arch_prepare_bpf_trampoline(im, im->image, im->image + PAGE_SIZE,
 					  &tr->func.model, flags, tprogs,
 					  tr->func.addr);
-- 
2.31.1


* [PATCHv4 bpf-next 3/8] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 1/8] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 2/8] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
@ 2021-07-14  9:43 ` Jiri Olsa
  2021-07-16  1:09   ` Alexei Starovoitov
  2021-07-14  9:43 ` [PATCHv4 bpf-next 4/8] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: kernel test robot, Dan Carpenter, netdev, bpf, Martin KaFai Lau,
	Song Liu, Yonghong Song, John Fastabend, KP Singh,
	Masami Hiramatsu, Alan Maguire

Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
specifically for all trampoline attach types.

The trampoline's caller IP address is stored at the (ctx - 8) address,
so there's no reason to actually call the helper, but rather fixup
the call instruction and return [ctx - 8] value directly (suggested
by Alexei).
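
In practice a tracing program's bpf_get_func_ip(ctx) call therefore
ends up as a single load from the context; roughly (illustrative
pseudo assembly, the actual rewrite is the BPF_LDX_MEM in
do_misc_fixups below):

  /* before the fixup, r1 = ctx: */
  call bpf_get_func_ip

  /* after the fixup, a single instruction: */
  r0 = *(u64 *)(r1 - 8)  /* BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8) */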

[fixed has_get_func_ip wrong return type]
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/bpf.h       |  7 ++++++
 kernel/bpf/verifier.c          | 43 ++++++++++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c       | 15 ++++++++++++
 tools/include/uapi/linux/bpf.h |  7 ++++++
 4 files changed, 72 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index b46a383e8db7..31dd386b64ec 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4777,6 +4777,12 @@ union bpf_attr {
  * 		Execute close syscall for given FD.
  * 	Return
  * 		A syscall result.
+ *
+ * u64 bpf_get_func_ip(void *ctx)
+ * 	Description
+ * 		Get address of the traced function (for tracing programs).
+ * 	Return
+ * 		Address of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4948,6 +4954,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(get_func_ip),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index be38bb930bf1..d27aa23fb572 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5955,6 +5955,27 @@ static int check_bpf_snprintf_call(struct bpf_verifier_env *env,
 	return err;
 }
 
+static int check_get_func_ip(struct bpf_verifier_env *env)
+{
+	enum bpf_attach_type eatype = env->prog->expected_attach_type;
+	enum bpf_prog_type type = resolve_prog_type(env->prog);
+	int func_id = BPF_FUNC_get_func_ip;
+
+	if (type == BPF_PROG_TYPE_TRACING) {
+		if (eatype != BPF_TRACE_FENTRY && eatype != BPF_TRACE_FEXIT &&
+		    eatype != BPF_MODIFY_RETURN) {
+			verbose(env, "func %s#%d supported only for fentry/fexit/fmod_ret programs\n",
+				func_id_name(func_id), func_id);
+			return -ENOTSUPP;
+		}
+		return 0;
+	}
+
+	verbose(env, "func %s#%d not supported for program type %d\n",
+		func_id_name(func_id), func_id, type);
+	return -ENOTSUPP;
+}
+
 static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
 			     int *insn_idx_p)
 {
@@ -6225,6 +6246,12 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 	if (func_id == BPF_FUNC_get_stackid || func_id == BPF_FUNC_get_stack)
 		env->prog->call_get_stack = true;
 
+	if (func_id == BPF_FUNC_get_func_ip) {
+		if (check_get_func_ip(env))
+			return -ENOTSUPP;
+		env->prog->call_get_func_ip = true;
+	}
+
 	if (changes_data)
 		clear_all_pkt_pointers(env);
 	return 0;
@@ -12369,6 +12396,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog;
 	bool expect_blinding = bpf_jit_blinding_enabled(prog);
+	enum bpf_prog_type prog_type = resolve_prog_type(prog);
 	struct bpf_insn *insn = prog->insnsi;
 	const struct bpf_func_proto *fn;
 	const int insn_cnt = prog->len;
@@ -12702,6 +12730,21 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+		/* Implement bpf_get_func_ip inline. */
+		if (prog_type == BPF_PROG_TYPE_TRACING &&
+		    insn->imm == BPF_FUNC_get_func_ip) {
+			/* Load IP address from ctx - 8 */
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
+			if (!new_prog)
+				return -ENOMEM;
+
+			env->prog = prog = new_prog;
+			insn      = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 patch_call_imm:
 		fn = env->ops->get_func_proto(insn->imm, env->prog);
 		/* all functions that have prototype and verifier allowed
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 64bd2d84367f..022cbe42ac57 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -948,6 +948,19 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
 	.arg5_type	= ARG_ANYTHING,
 };
 
+BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
+{
+	/* This helper call is inlined by verifier. */
+	return ((u64 *)ctx)[-1];
+}
+
+static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
+	.func		= bpf_get_func_ip_tracing,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1058,6 +1071,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_for_each_map_elem_proto;
 	case BPF_FUNC_snprintf:
 		return &bpf_snprintf_proto;
+	case BPF_FUNC_get_func_ip:
+		return &bpf_get_func_ip_proto_tracing;
 	default:
 		return NULL;
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index bf9252c7381e..83e87ffdbb6e 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4780,6 +4780,12 @@ union bpf_attr {
  * 		Execute close syscall for given FD.
  * 	Return
  * 		A syscall result.
+ *
+ * u64 bpf_get_func_ip(void *ctx)
+ * 	Description
+ * 		Get address of the traced function (for tracing programs).
+ * 	Return
+ * 		Address of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -4951,6 +4957,7 @@ union bpf_attr {
 	FN(sys_bpf),			\
 	FN(btf_find_by_name_kind),	\
 	FN(sys_close),			\
+	FN(get_func_ip),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.31.1


* [PATCHv4 bpf-next 4/8] bpf: Add bpf_get_func_ip helper for kprobe programs
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (2 preceding siblings ...)
  2021-07-14  9:43 ` [PATCHv4 bpf-next 3/8] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
@ 2021-07-14  9:43 ` Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 5/8] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: kernel test robot, Masami Hiramatsu, netdev, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Alan Maguire

Adding bpf_get_func_ip helper for BPF_PROG_TYPE_KPROBE programs,
so it's now possible to call bpf_get_func_ip from both kprobe and
kretprobe programs.

Taking the caller's address from 'struct kprobe::addr', which is
defined for both kprobe and kretprobe.
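
From the program side the same call therefore works in both probe
flavors; a minimal kretprobe sketch (the probe target and names are
just examples, the real tests are in the following selftest patch):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  __u64 kret_ip = 0;

  SEC("kretprobe/do_unlinkat")
  int BPF_KRETPROBE(on_exit, long ret)
  {
          /* entry address of the probed function, from kprobe::addr */
          kret_ip = bpf_get_func_ip(ctx);
          return 0;
  }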

[removed duplicate include]
Reported-by: kernel test robot <lkp@intel.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 include/uapi/linux/bpf.h       |  2 +-
 kernel/bpf/verifier.c          |  2 ++
 kernel/trace/bpf_trace.c       | 16 ++++++++++++++++
 tools/include/uapi/linux/bpf.h |  2 +-
 4 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 31dd386b64ec..3ea5874f603b 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4780,7 +4780,7 @@ union bpf_attr {
  *
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
- * 		Get address of the traced function (for tracing programs).
+ * 		Get address of the traced function (for tracing and kprobe programs).
  * 	Return
  * 		Address of the traced function.
  */
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index d27aa23fb572..9998ffc00bbd 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -5969,6 +5969,8 @@ static int check_get_func_ip(struct bpf_verifier_env *env)
 			return -ENOTSUPP;
 		}
 		return 0;
+	} else if (type == BPF_PROG_TYPE_KPROBE) {
+		return 0;
 	}
 
 	verbose(env, "func %s#%d not supported for program type %d\n",
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 022cbe42ac57..8af385fd5419 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -961,6 +961,20 @@ static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
 	.arg1_type	= ARG_PTR_TO_CTX,
 };
 
+BPF_CALL_1(bpf_get_func_ip_kprobe, struct pt_regs *, regs)
+{
+	struct kprobe *kp = kprobe_running();
+
+	return kp ? (u64) kp->addr : 0;
+}
+
+static const struct bpf_func_proto bpf_get_func_ip_proto_kprobe = {
+	.func		= bpf_get_func_ip_kprobe,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1092,6 +1106,8 @@ kprobe_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 	case BPF_FUNC_override_return:
 		return &bpf_override_return_proto;
 #endif
+	case BPF_FUNC_get_func_ip:
+		return &bpf_get_func_ip_proto_kprobe;
 	default:
 		return bpf_tracing_func_proto(func_id, prog);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 83e87ffdbb6e..4894f99a1993 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4783,7 +4783,7 @@ union bpf_attr {
  *
  * u64 bpf_get_func_ip(void *ctx)
  * 	Description
- * 		Get address of the traced function (for tracing programs).
+ * 		Get address of the traced function (for tracing and kprobe programs).
  * 	Return
  * 		Address of the traced function.
  */
-- 
2.31.1


* [PATCHv4 bpf-next 5/8] selftests/bpf: Add test for bpf_get_func_ip helper
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (3 preceding siblings ...)
  2021-07-14  9:43 ` [PATCHv4 bpf-next 4/8] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
@ 2021-07-14  9:43 ` Jiri Olsa
  2021-07-14  9:43 ` [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function Jiri Olsa
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Adding test for bpf_get_func_ip helper for fentry, fexit,
kprobe, kretprobe and fmod_ret programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_ip_test.c         | 39 ++++++++++++
 .../selftests/bpf/progs/get_func_ip_test.c    | 62 +++++++++++++++++++
 2 files changed, 101 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_ip_test.c

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
new file mode 100644
index 000000000000..8bb18a8d31a0
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "get_func_ip_test.skel.h"
+
+void test_get_func_ip_test(void)
+{
+	struct get_func_ip_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd;
+
+	skel = get_func_ip_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open_and_load"))
+		return;
+
+	err = get_func_ip_test__attach(skel);
+	if (!ASSERT_OK(err, "get_func_ip_test__attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	prog_fd = bpf_program__fd(skel->progs.test5);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+
+	ASSERT_OK(err, "test_run");
+
+	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
+	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+	ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
+	ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
+	ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
+
+cleanup:
+	get_func_ip_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
new file mode 100644
index 000000000000..ba3e107b52dd
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+extern const void bpf_fentry_test1 __ksym;
+extern const void bpf_fentry_test2 __ksym;
+extern const void bpf_fentry_test3 __ksym;
+extern const void bpf_fentry_test4 __ksym;
+extern const void bpf_modify_return_test __ksym;
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1, int a)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test1_result = (const void *) addr == &bpf_fentry_test1;
+	return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fexit/bpf_fentry_test2")
+int BPF_PROG(test2, int a)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test2_result = (const void *) addr == &bpf_fentry_test2;
+	return 0;
+}
+
+__u64 test3_result = 0;
+SEC("kprobe/bpf_fentry_test3")
+int test3(struct pt_regs *ctx)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test3_result = (const void *) addr == &bpf_fentry_test3;
+	return 0;
+}
+
+__u64 test4_result = 0;
+SEC("kretprobe/bpf_fentry_test4")
+int BPF_KRETPROBE(test4)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test4_result = (const void *) addr == &bpf_fentry_test4;
+	return 0;
+}
+
+__u64 test5_result = 0;
+SEC("fmod_ret/bpf_modify_return_test")
+int BPF_PROG(test5, int a, int *b, int ret)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test5_result = (const void *) addr == &bpf_modify_return_test;
+	return ret;
+}
-- 
2.31.1


* [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (4 preceding siblings ...)
  2021-07-14  9:43 ` [PATCHv4 bpf-next 5/8] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
@ 2021-07-14  9:43 ` Jiri Olsa
  2021-07-17  1:41   ` Andrii Nakryiko
  2021-07-14  9:43 ` [PATCHv4 bpf-next 7/8] libbpf: Allow specification of "kprobe/function+offset" Jiri Olsa
  2021-07-14  9:44 ` [PATCHv4 bpf-next 8/8] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe Jiri Olsa
  7 siblings, 1 reply; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Adding bpf_program__attach_kprobe_opts that does the same
as bpf_program__attach_kprobe, but takes opts argument.

Currently opts struct holds just retprobe bool, but we will
add new field in following patch.

The function is not exported, so there's no need to add
size to the struct bpf_program_attach_kprobe_opts for now.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/libbpf.c | 34 +++++++++++++++++++++++++---------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 88b99401040c..d93a6f9408d1 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -10346,19 +10346,24 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
 	return pfd;
 }
 
-struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
-					    bool retprobe,
-					    const char *func_name)
+struct bpf_program_attach_kprobe_opts {
+	bool retprobe;
+};
+
+static struct bpf_link*
+bpf_program__attach_kprobe_opts(struct bpf_program *prog,
+				const char *func_name,
+				struct bpf_program_attach_kprobe_opts *opts)
 {
 	char errmsg[STRERR_BUFSIZE];
 	struct bpf_link *link;
 	int pfd, err;
 
-	pfd = perf_event_open_probe(false /* uprobe */, retprobe, func_name,
+	pfd = perf_event_open_probe(false /* uprobe */, opts->retprobe, func_name,
 				    0 /* offset */, -1 /* pid */);
 	if (pfd < 0) {
 		pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
-			prog->name, retprobe ? "kretprobe" : "kprobe", func_name,
+			prog->name, opts->retprobe ? "kretprobe" : "kprobe", func_name,
 			libbpf_strerror_r(pfd, errmsg, sizeof(errmsg)));
 		return libbpf_err_ptr(pfd);
 	}
@@ -10367,23 +10372,34 @@ struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
 	if (err) {
 		close(pfd);
 		pr_warn("prog '%s': failed to attach to %s '%s': %s\n",
-			prog->name, retprobe ? "kretprobe" : "kprobe", func_name,
+			prog->name, opts->retprobe ? "kretprobe" : "kprobe", func_name,
 			libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
 		return libbpf_err_ptr(err);
 	}
 	return link;
 }
 
+struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
+					    bool retprobe,
+					    const char *func_name)
+{
+	struct bpf_program_attach_kprobe_opts opts = {
+		.retprobe = retprobe,
+	};
+
+	return bpf_program__attach_kprobe_opts(prog, func_name, &opts);
+}
+
 static struct bpf_link *attach_kprobe(const struct bpf_sec_def *sec,
 				      struct bpf_program *prog)
 {
+	struct bpf_program_attach_kprobe_opts opts;
 	const char *func_name;
-	bool retprobe;
 
 	func_name = prog->sec_name + sec->len;
-	retprobe = strcmp(sec->sec, "kretprobe/") == 0;
+	opts.retprobe = strcmp(sec->sec, "kretprobe/") == 0;
 
-	return bpf_program__attach_kprobe(prog, retprobe, func_name);
+	return bpf_program__attach_kprobe_opts(prog, func_name, &opts);
 }
 
 struct bpf_link *bpf_program__attach_uprobe(struct bpf_program *prog,
-- 
2.31.1


* [PATCHv4 bpf-next 7/8] libbpf: Allow specification of "kprobe/function+offset"
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (5 preceding siblings ...)
  2021-07-14  9:43 ` [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function Jiri Olsa
@ 2021-07-14  9:43 ` Jiri Olsa
  2021-07-17  1:42   ` Andrii Nakryiko
  2021-07-14  9:44 ` [PATCHv4 bpf-next 8/8] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe Jiri Olsa
  7 siblings, 1 reply; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:43 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

From: Alan Maguire <alan.maguire@oracle.com>

kprobes can be placed on most instructions in a function, not
just entry, and ftrace and bpftrace support the function+offset
notification for probe placement.  Adding parsing of func_name
into func+offset to bpf_program__attach_kprobe() allows the
user to specify

SEC("kprobe/bpf_fentry_test5+0x6")

...for example, and the offset can be passed to perf_event_open_probe()
to support kprobe attachment.

[jolsa: changed original code to use bpf_program__attach_kprobe_opts
and use dynamic allocation in sscanf]
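
For reference, a tiny standalone sketch of the parsing idea (plain C,
not libbpf code; the 'm' modifier makes sscanf allocate the function
name buffer, which the caller must free):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          const char *sec = "bpf_fentry_test5+0x6"; /* example input */
          unsigned long offset = 0;
          char *func = NULL;
          int n;

          /* the offset part is optional, n == 1 means a plain "function" */
          n = sscanf(sec, "%m[a-zA-Z0-9_.]+%lx", &func, &offset);
          if (n < 1)
                  return 1;

          printf("func=%s offset=0x%lx (n=%d)\n", func, offset, n);
          free(func);
          return 0;
  }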

Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 tools/lib/bpf/libbpf.c | 24 ++++++++++++++++++++++--
 1 file changed, 22 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index d93a6f9408d1..abe6d4842bb0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -10348,6 +10348,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
 
 struct bpf_program_attach_kprobe_opts {
 	bool retprobe;
+	unsigned long offset;
 };
 
 static struct bpf_link*
@@ -10360,7 +10361,7 @@ bpf_program__attach_kprobe_opts(struct bpf_program *prog,
 	int pfd, err;
 
 	pfd = perf_event_open_probe(false /* uprobe */, opts->retprobe, func_name,
-				    0 /* offset */, -1 /* pid */);
+				    opts->offset, -1 /* pid */);
 	if (pfd < 0) {
 		pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
 			prog->name, opts->retprobe ? "kretprobe" : "kprobe", func_name,
@@ -10394,12 +10395,31 @@ static struct bpf_link *attach_kprobe(const struct bpf_sec_def *sec,
 				      struct bpf_program *prog)
 {
 	struct bpf_program_attach_kprobe_opts opts;
+	unsigned long offset = 0;
+	struct bpf_link *link;
 	const char *func_name;
+	char *func;
+	int n, err;
 
 	func_name = prog->sec_name + sec->len;
 	opts.retprobe = strcmp(sec->sec, "kretprobe/") == 0;
 
-	return bpf_program__attach_kprobe_opts(prog, func_name, &opts);
+	n = sscanf(func_name, "%m[a-zA-Z0-9_.]+%lx", &func, &offset);
+	if (n < 1) {
+		err = -EINVAL;
+		pr_warn("kprobe name is invalid: %s\n", func_name);
+		return libbpf_err_ptr(err);
+	}
+	if (opts.retprobe && offset != 0) {
+		err = -EINVAL;
+		pr_warn("kretprobes do not support offset specification\n");
+		return libbpf_err_ptr(err);
+	}
+
+	opts.offset = offset;
+	link = bpf_program__attach_kprobe_opts(prog, func, &opts);
+	free(func);
+	return link;
 }
 
 struct bpf_link *bpf_program__attach_uprobe(struct bpf_program *prog,
-- 
2.31.1


* [PATCHv4 bpf-next 8/8] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe
  2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
                   ` (6 preceding siblings ...)
  2021-07-14  9:43 ` [PATCHv4 bpf-next 7/8] libbpf: Allow specification of "kprobe/function+offset" Jiri Olsa
@ 2021-07-14  9:44 ` Jiri Olsa
  7 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-14  9:44 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, Masami Hiramatsu, Alan Maguire

Adding a test for bpf_get_func_ip in a kprobe+offset probe.
Because the offset value is arch specific, the new test is enabled
only for the x86_64 architecture.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_ip_test.c          | 18 ++++++++++++++++--
 .../selftests/bpf/progs/get_func_ip_test.c     | 11 +++++++++++
 2 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
index 8bb18a8d31a0..088b3653610d 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_ip_test.c
@@ -8,10 +8,21 @@ void test_get_func_ip_test(void)
 	__u32 duration = 0, retval;
 	int err, prog_fd;
 
-	skel = get_func_ip_test__open_and_load();
-	if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open_and_load"))
+	skel = get_func_ip_test__open();
+	if (!ASSERT_OK_PTR(skel, "get_func_ip_test__open"))
 		return;
 
+	/* test6 is x86_64 specific because of the instruction
+	 * offset, disabling it for all other archs
+	 */
+#ifndef __x86_64__
+	bpf_program__set_autoload(skel->progs.test6, false);
+#endif
+
+	err = get_func_ip_test__load(skel);
+	if (!ASSERT_OK(err, "get_func_ip_test__load"))
+		goto cleanup;
+
 	err = get_func_ip_test__attach(skel);
 	if (!ASSERT_OK(err, "get_func_ip_test__attach"))
 		goto cleanup;
@@ -33,6 +44,9 @@ void test_get_func_ip_test(void)
 	ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
 	ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
 	ASSERT_EQ(skel->bss->test5_result, 1, "test5_result");
+#ifdef __x86_64__
+	ASSERT_EQ(skel->bss->test6_result, 1, "test6_result");
+#endif
 
 cleanup:
 	get_func_ip_test__destroy(skel);
diff --git a/tools/testing/selftests/bpf/progs/get_func_ip_test.c b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
index ba3e107b52dd..acd587b6e859 100644
--- a/tools/testing/selftests/bpf/progs/get_func_ip_test.c
+++ b/tools/testing/selftests/bpf/progs/get_func_ip_test.c
@@ -10,6 +10,7 @@ extern const void bpf_fentry_test2 __ksym;
 extern const void bpf_fentry_test3 __ksym;
 extern const void bpf_fentry_test4 __ksym;
 extern const void bpf_modify_return_test __ksym;
+extern const void bpf_fentry_test6 __ksym;
 
 __u64 test1_result = 0;
 SEC("fentry/bpf_fentry_test1")
@@ -60,3 +61,13 @@ int BPF_PROG(test5, int a, int *b, int ret)
 	test5_result = (const void *) addr == &bpf_modify_return_test;
 	return ret;
 }
+
+__u64 test6_result = 0;
+SEC("kprobe/bpf_fentry_test6+0x5")
+int test6(struct pt_regs *ctx)
+{
+	__u64 addr = bpf_get_func_ip(ctx);
+
+	test6_result = (const void *) addr == &bpf_fentry_test6 + 5;
+	return 0;
+}
-- 
2.31.1


* Re: [PATCHv4 bpf-next 3/8] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-14  9:43 ` [PATCHv4 bpf-next 3/8] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
@ 2021-07-16  1:09   ` Alexei Starovoitov
  2021-07-18 19:30     ` Jiri Olsa
  0 siblings, 1 reply; 16+ messages in thread
From: Alexei Starovoitov @ 2021-07-16  1:09 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	kernel test robot, Dan Carpenter, Network Development, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 14, 2021 at 2:44 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
> specifically for all trampoline attach types.
>
> The trampoline's caller IP address is stored at the (ctx - 8) address,
> so there's no reason to actually call the helper, but rather fixup
> the call instruction and return [ctx - 8] value directly (suggested
> by Alexei).
>
> [fixed has_get_func_ip wrong return type]
> Reported-by: kernel test robot <lkp@intel.com>
> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>

I removed these tags, since they don't correspond to any real commit in the git.
Otherwise all patches would have been full of such things when patch series
go through iterations. Also fixed a few typos here and there,
manually rebased and applied.
Thanks!

* Re: [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function
  2021-07-14  9:43 ` [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function Jiri Olsa
@ 2021-07-17  1:41   ` Andrii Nakryiko
  2021-07-18 19:32     ` Jiri Olsa
  0 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2021-07-17  1:41 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 14, 2021 at 2:45 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> Adding bpf_program__attach_kprobe_opts that does the same
> as bpf_program__attach_kprobe, but takes opts argument.
>
> Currently opts struct holds just retprobe bool, but we will
> add new field in following patch.
>
> The function is not exported, so there's no need to add
> size to the struct bpf_program_attach_kprobe_opts for now.

Why not exported? Please use a proper _opts struct just like others
(e.g., bpf_object_open_opts) and add it as a public API, it's a useful
addition. We are going to have a similar structure for attach_uprobe,
btw. Please send a follow up patch.

>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/lib/bpf/libbpf.c | 34 +++++++++++++++++++++++++---------
>  1 file changed, 25 insertions(+), 9 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 88b99401040c..d93a6f9408d1 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -10346,19 +10346,24 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
>         return pfd;
>  }
>
> -struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
> -                                           bool retprobe,
> -                                           const char *func_name)
> +struct bpf_program_attach_kprobe_opts {

when you make it part of libbpf API, let's call it something shorter,
like bpf_kprobe_opts, maybe? And later we'll have bpf_uprobe_opts for
uprobes. Short and unambiguous.
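
Roughly what I mean, sketch only (the struct name and fields here are
just my suggestion, not an existing API), following the usual libbpf
opts conventions:

  struct bpf_kprobe_opts {
          /* size of this struct, for forward/backward compatibility */
          size_t sz;
          bool retprobe;
          unsigned long offset;
  };
  #define bpf_kprobe_opts__last_field offset

  LIBBPF_API struct bpf_link *
  bpf_program__attach_kprobe_opts(struct bpf_program *prog,
                                  const char *func_name,
                                  const struct bpf_kprobe_opts *opts);

  /* caller side: */
  struct bpf_link *link;
  DECLARE_LIBBPF_OPTS(bpf_kprobe_opts, opts,
          .retprobe = false,
          .offset = 0x6,
  );

  link = bpf_program__attach_kprobe_opts(prog, "bpf_fentry_test5", &opts);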

> +       bool retprobe;
> +};
> +
> +static struct bpf_link*
> +bpf_program__attach_kprobe_opts(struct bpf_program *prog,
> +                               const char *func_name,
> +                               struct bpf_program_attach_kprobe_opts *opts)
>  {
>         char errmsg[STRERR_BUFSIZE];
>         struct bpf_link *link;
>         int pfd, err;
>

[...]

* Re: [PATCHv4 bpf-next 7/8] libbpf: Allow specification of "kprobe/function+offset"
  2021-07-14  9:43 ` [PATCHv4 bpf-next 7/8] libbpf: Allow specification of "kprobe/function+offset" Jiri Olsa
@ 2021-07-17  1:42   ` Andrii Nakryiko
  2021-07-18 19:32     ` Jiri Olsa
  0 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2021-07-17  1:42 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Wed, Jul 14, 2021 at 2:45 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> From: Alan Maguire <alan.maguire@oracle.com>
>
> kprobes can be placed on most instructions in a function, not
> just entry, and ftrace and bpftrace support the function+offset
> notification for probe placement.  Adding parsing of func_name
> into func+offset to bpf_program__attach_kprobe() allows the
> user to specify
>
> SEC("kprobe/bpf_fentry_test5+0x6")
>
> ...for example, and the offset can be passed to perf_event_open_probe()
> to support kprobe attachment.
>
> [jolsa: changed original code to use bpf_program__attach_kprobe_opts
> and use dynamic allocation in sscanf]
>
> Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  tools/lib/bpf/libbpf.c | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index d93a6f9408d1..abe6d4842bb0 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -10348,6 +10348,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
>
>  struct bpf_program_attach_kprobe_opts {
>         bool retprobe;
> +       unsigned long offset;
>  };
>
>  static struct bpf_link*
> @@ -10360,7 +10361,7 @@ bpf_program__attach_kprobe_opts(struct bpf_program *prog,
>         int pfd, err;
>
>         pfd = perf_event_open_probe(false /* uprobe */, opts->retprobe, func_name,
> -                                   0 /* offset */, -1 /* pid */);
> +                                   opts->offset, -1 /* pid */);
>         if (pfd < 0) {
>                 pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
>                         prog->name, opts->retprobe ? "kretprobe" : "kprobe", func_name,
> @@ -10394,12 +10395,31 @@ static struct bpf_link *attach_kprobe(const struct bpf_sec_def *sec,
>                                       struct bpf_program *prog)
>  {
>         struct bpf_program_attach_kprobe_opts opts;
> +       unsigned long offset = 0;
> +       struct bpf_link *link;
>         const char *func_name;
> +       char *func;
> +       int n, err;
>
>         func_name = prog->sec_name + sec->len;
>         opts.retprobe = strcmp(sec->sec, "kretprobe/") == 0;
>
> -       return bpf_program__attach_kprobe_opts(prog, func_name, &opts);
> +       n = sscanf(func_name, "%m[a-zA-Z0-9_.]+%lx", &func, &offset);

could have used %li here to support both +0xabc and +123 forms

> +       if (n < 1) {
> +               err = -EINVAL;
> +               pr_warn("kprobe name is invalid: %s\n", func_name);
> +               return libbpf_err_ptr(err);
> +       }
> +       if (opts.retprobe && offset != 0) {
> +               err = -EINVAL;

leaking func here


> +               pr_warn("kretprobes do not support offset specification\n");
> +               return libbpf_err_ptr(err);
> +       }
> +
> +       opts.offset = offset;
> +       link = bpf_program__attach_kprobe_opts(prog, func, &opts);
> +       free(func);
> +       return link;
>  }
>
>  struct bpf_link *bpf_program__attach_uprobe(struct bpf_program *prog,
> --
> 2.31.1
>

* Re: [PATCHv4 bpf-next 3/8] bpf: Add bpf_get_func_ip helper for tracing programs
  2021-07-16  1:09   ` Alexei Starovoitov
@ 2021-07-18 19:30     ` Jiri Olsa
  0 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-18 19:30 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	kernel test robot, Dan Carpenter, Network Development, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Thu, Jul 15, 2021 at 06:09:22PM -0700, Alexei Starovoitov wrote:
> On Wed, Jul 14, 2021 at 2:44 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding bpf_get_func_ip helper for BPF_PROG_TYPE_TRACING programs,
> > specifically for all trampoline attach types.
> >
> > The trampoline's caller IP address is stored at the (ctx - 8) address,
> > so there's no reason to actually call the helper, but rather fixup
> > the call instruction and return [ctx - 8] value directly (suggested
> > by Alexei).
> >
> > [fixed has_get_func_ip wrong return type]
> > Reported-by: kernel test robot <lkp@intel.com>
> > Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
> 
> I removed these tags, since they don't correspond to any real commit in the git.
> Otherwise all patches would have been full of such things when patch series
> go through iterations. Also fixed a few typos here and there,
> manually rebased and applied.
> Thanks!
> 

thanks,
jirka


* Re: [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function
  2021-07-17  1:41   ` Andrii Nakryiko
@ 2021-07-18 19:32     ` Jiri Olsa
  2021-07-18 21:30       ` Andrii Nakryiko
  0 siblings, 1 reply; 16+ messages in thread
From: Jiri Olsa @ 2021-07-18 19:32 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Fri, Jul 16, 2021 at 06:41:59PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 14, 2021 at 2:45 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > Adding bpf_program__attach_kprobe_opts that does the same
> > as bpf_program__attach_kprobe, but takes opts argument.
> >
> > Currently opts struct holds just retprobe bool, but we will
> > add new field in following patch.
> >
> > The function is not exported, so there's no need to add
> > size to the struct bpf_program_attach_kprobe_opts for now.
> 
> Why not exported? Please use a proper _opts struct just like others
> (e.g., bpf_object_open_opts) and add it as a public API, it's a useful
> addition. We are going to have a similar structure for attach_uprobe,
> btw. Please send a follow up patch.

there's no outside user.. ok

> 
> >
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  tools/lib/bpf/libbpf.c | 34 +++++++++++++++++++++++++---------
> >  1 file changed, 25 insertions(+), 9 deletions(-)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 88b99401040c..d93a6f9408d1 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -10346,19 +10346,24 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
> >         return pfd;
> >  }
> >
> > -struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
> > -                                           bool retprobe,
> > -                                           const char *func_name)
> > +struct bpf_program_attach_kprobe_opts {
> 
> when you make it part of libbpf API, let's call it something shorter,
> like bpf_kprobe_opts, maybe? And later we'll have bpf_uprobe_opts for
> uprobes. Short and unambiguous.

ok

jirka

> 
> > +       bool retprobe;
> > +};
> > +
> > +static struct bpf_link*
> > +bpf_program__attach_kprobe_opts(struct bpf_program *prog,
> > +                               const char *func_name,
> > +                               struct bpf_program_attach_kprobe_opts *opts)
> >  {
> >         char errmsg[STRERR_BUFSIZE];
> >         struct bpf_link *link;
> >         int pfd, err;
> >
> 
> [...]
> 


* Re: [PATCHv4 bpf-next 7/8] libbpf: Allow specification of "kprobe/function+offset"
  2021-07-17  1:42   ` Andrii Nakryiko
@ 2021-07-18 19:32     ` Jiri Olsa
  0 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-07-18 19:32 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Fri, Jul 16, 2021 at 06:42:05PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 14, 2021 at 2:45 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > From: Alan Maguire <alan.maguire@oracle.com>
> >
> > kprobes can be placed on most instructions in a function, not
> > just entry, and ftrace and bpftrace support the function+offset
> > notification for probe placement.  Adding parsing of func_name
> > into func+offset to bpf_program__attach_kprobe() allows the
> > user to specify
> >
> > SEC("kprobe/bpf_fentry_test5+0x6")
> >
> > ...for example, and the offset can be passed to perf_event_open_probe()
> > to support kprobe attachment.
> >
> > [jolsa: changed original code to use bpf_program__attach_kprobe_opts
> > and use dynamic allocation in sscanf]
> >
> > Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  tools/lib/bpf/libbpf.c | 24 ++++++++++++++++++++++--
> >  1 file changed, 22 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index d93a6f9408d1..abe6d4842bb0 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -10348,6 +10348,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
> >
> >  struct bpf_program_attach_kprobe_opts {
> >         bool retprobe;
> > +       unsigned long offset;
> >  };
> >
> >  static struct bpf_link*
> > @@ -10360,7 +10361,7 @@ bpf_program__attach_kprobe_opts(struct bpf_program *prog,
> >         int pfd, err;
> >
> >         pfd = perf_event_open_probe(false /* uprobe */, opts->retprobe, func_name,
> > -                                   0 /* offset */, -1 /* pid */);
> > +                                   opts->offset, -1 /* pid */);
> >         if (pfd < 0) {
> >                 pr_warn("prog '%s': failed to create %s '%s' perf event: %s\n",
> >                         prog->name, opts->retprobe ? "kretprobe" : "kprobe", func_name,
> > @@ -10394,12 +10395,31 @@ static struct bpf_link *attach_kprobe(const struct bpf_sec_def *sec,
> >                                       struct bpf_program *prog)
> >  {
> >         struct bpf_program_attach_kprobe_opts opts;
> > +       unsigned long offset = 0;
> > +       struct bpf_link *link;
> >         const char *func_name;
> > +       char *func;
> > +       int n, err;
> >
> >         func_name = prog->sec_name + sec->len;
> >         opts.retprobe = strcmp(sec->sec, "kretprobe/") == 0;
> >
> > -       return bpf_program__attach_kprobe_opts(prog, func_name, &opts);
> > +       n = sscanf(func_name, "%m[a-zA-Z0-9_.]+%lx", &func, &offset);
> 
> could have used %li here to support both +0xabc and +123 forms

ok

> 
> > +       if (n < 1) {
> > +               err = -EINVAL;
> > +               pr_warn("kprobe name is invalid: %s\n", func_name);
> > +               return libbpf_err_ptr(err);
> > +       }
> > +       if (opts.retprobe && offset != 0) {
> > +               err = -EINVAL;
> 
> leaking func here

ugh.. thanks

jirka

> 
> 
> > +               pr_warn("kretprobes do not support offset specification\n");
> > +               return libbpf_err_ptr(err);
> > +       }
> > +
> > +       opts.offset = offset;
> > +       link = bpf_program__attach_kprobe_opts(prog, func, &opts);
> > +       free(func);
> > +       return link;
> >  }
> >
> >  struct bpf_link *bpf_program__attach_uprobe(struct bpf_program *prog,
> > --
> > 2.31.1
> >
> 


* Re: [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function
  2021-07-18 19:32     ` Jiri Olsa
@ 2021-07-18 21:30       ` Andrii Nakryiko
  0 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2021-07-18 21:30 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Masami Hiramatsu, Alan Maguire

On Sun, Jul 18, 2021 at 12:32 PM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Fri, Jul 16, 2021 at 06:41:59PM -0700, Andrii Nakryiko wrote:
> > On Wed, Jul 14, 2021 at 2:45 AM Jiri Olsa <jolsa@redhat.com> wrote:
> > >
> > > Adding bpf_program__attach_kprobe_opts that does the same
> > > as bpf_program__attach_kprobe, but takes opts argument.
> > >
> > > Currently opts struct holds just retprobe bool, but we will
> > > add new field in following patch.
> > >
> > > The function is not exported, so there's no need to add
> > > size to the struct bpf_program_attach_kprobe_opts for now.
> >
> > Why not exported? Please use a proper _opts struct just like others
> > (e.g., bpf_object_open_opts) and add it as a public API, it's a useful
> > addition. We are going to have a similar structure for attach_uprobe,
> > btw. Please send a follow up patch.
>
> there's no outside user.. ok

because there is no API :) I've seen people asking about the ability
to attach to kprobe+offset in some PRs.

>
> >
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >  tools/lib/bpf/libbpf.c | 34 +++++++++++++++++++++++++---------
> > >  1 file changed, 25 insertions(+), 9 deletions(-)
> > >
> > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > index 88b99401040c..d93a6f9408d1 100644
> > > --- a/tools/lib/bpf/libbpf.c
> > > +++ b/tools/lib/bpf/libbpf.c
> > > @@ -10346,19 +10346,24 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
> > >         return pfd;
> > >  }
> > >
> > > -struct bpf_link *bpf_program__attach_kprobe(struct bpf_program *prog,
> > > -                                           bool retprobe,
> > > -                                           const char *func_name)
> > > +struct bpf_program_attach_kprobe_opts {
> >
> > when you make it part of libbpf API, let's call it something shorter,
> > like bpf_kprobe_opts, maybe? And later we'll have bpf_uprobe_opts for
> > uprobes. Short and unambiguous.
>
> ok
>
> jirka
>
> >
> > > +       bool retprobe;
> > > +};
> > > +
> > > +static struct bpf_link*
> > > +bpf_program__attach_kprobe_opts(struct bpf_program *prog,
> > > +                               const char *func_name,
> > > +                               struct bpf_program_attach_kprobe_opts *opts)
> > >  {
> > >         char errmsg[STRERR_BUFSIZE];
> > >         struct bpf_link *link;
> > >         int pfd, err;
> > >
> >
> > [...]
> >
>

Thread overview: 16+ messages
2021-07-14  9:43 [PATCHv4 bpf-next 0/8] bpf, x86: Add bpf_get_func_ip helper Jiri Olsa
2021-07-14  9:43 ` [PATCHv4 bpf-next 1/8] bpf, x86: Store caller's ip in trampoline stack Jiri Olsa
2021-07-14  9:43 ` [PATCHv4 bpf-next 2/8] bpf: Enable BPF_TRAMP_F_IP_ARG for trampolines with call_get_func_ip Jiri Olsa
2021-07-14  9:43 ` [PATCHv4 bpf-next 3/8] bpf: Add bpf_get_func_ip helper for tracing programs Jiri Olsa
2021-07-16  1:09   ` Alexei Starovoitov
2021-07-18 19:30     ` Jiri Olsa
2021-07-14  9:43 ` [PATCHv4 bpf-next 4/8] bpf: Add bpf_get_func_ip helper for kprobe programs Jiri Olsa
2021-07-14  9:43 ` [PATCHv4 bpf-next 5/8] selftests/bpf: Add test for bpf_get_func_ip helper Jiri Olsa
2021-07-14  9:43 ` [PATCHv4 bpf-next 6/8] libbpf: Add bpf_program__attach_kprobe_opts function Jiri Olsa
2021-07-17  1:41   ` Andrii Nakryiko
2021-07-18 19:32     ` Jiri Olsa
2021-07-18 21:30       ` Andrii Nakryiko
2021-07-14  9:43 ` [PATCHv4 bpf-next 7/8] libbpf: Allow specification of "kprobe/function+offset" Jiri Olsa
2021-07-17  1:42   ` Andrii Nakryiko
2021-07-18 19:32     ` Jiri Olsa
2021-07-14  9:44 ` [PATCHv4 bpf-next 8/8] selftests/bpf: Add test for bpf_get_func_ip in kprobe+offset probe Jiri Olsa
