* [PATCH bpf-next 0/3] bpf: Add helpers to access traced function arguments
@ 2021-12-04 14:06 Jiri Olsa
  2021-12-04 14:06 ` [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables Jiri Olsa
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-12-04 14:06 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh

hi,
adding new helpers to access traced function arguments that
came out of the trampoline batch changes [1].

  Get the n-th argument of the traced function:
    long bpf_get_func_arg(void *ctx, u32 n, u64 *value)

  Get the return value of the traced function:
    long bpf_get_func_ret(void *ctx, u64 *value)

  Get the argument count of the traced function:
    long bpf_get_func_arg_cnt(void *ctx)

changes from the original post [1]:
  - change helper names to get_func_*
  - change helpers to return error values instead of
    direct values
  - replace stack_size usage with specific offset
    variables in arch_prepare_bpf_trampoline
  - add a comment on the stack layout
  - add more tests
  - allow bpf_get_func_ret in FENTRY programs
  - use BPF_LSH instead of BPF_MUL

thanks,
jirka


[1] https://lore.kernel.org/bpf/20211118112455.475349-1-jolsa@kernel.org/
---
Jiri Olsa (3):
      bpf, x64: Replace some stack_size usage with offset variables
      bpf: Add get_func_[arg|ret|arg_cnt] helpers
      selftests/bpf: Add tests for get_func_[arg|ret|arg_cnt] helpers

 arch/x86/net/bpf_jit_comp.c                                 |  55 +++++++++++++++++++++++++++++++++------------
 include/uapi/linux/bpf.h                                    |  28 +++++++++++++++++++++++
 kernel/bpf/verifier.c                                       |  73 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 kernel/trace/bpf_trace.c                                    |  58 ++++++++++++++++++++++++++++++++++++++++++++++-
 tools/include/uapi/linux/bpf.h                              |  28 +++++++++++++++++++++++
 tools/testing/selftests/bpf/prog_tests/get_func_args_test.c |  38 +++++++++++++++++++++++++++++++
 tools/testing/selftests/bpf/progs/get_func_args_test.c      | 112 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 375 insertions(+), 17 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_args_test.c


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables
  2021-12-04 14:06 [PATCH bpf-next 0/3] bpf: Add helpers to access traced function arguments Jiri Olsa
@ 2021-12-04 14:06 ` Jiri Olsa
  2021-12-06 19:19   ` John Fastabend
                     ` (2 more replies)
  2021-12-04 14:06 ` [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers Jiri Olsa
  2021-12-04 14:07 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for " Jiri Olsa
  2 siblings, 3 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-12-04 14:06 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Andrii Nakryiko, netdev, bpf, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh

As suggested by Andrii, add variables for the register and IP
address offsets, which makes the code clearer than abusing a
single stack_size variable for everything.

Also describe the stack layout in a comment.

There is no functional change.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c | 42 ++++++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 14 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 1d7b0c69b644..b106e80e8d9c 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1941,7 +1941,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 				void *orig_call)
 {
 	int ret, i, nr_args = m->nr_args;
-	int stack_size = nr_args * 8;
+	int regs_off, ip_off, stack_size = nr_args * 8;
 	struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
 	struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
 	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
@@ -1956,14 +1956,33 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	if (!is_valid_bpf_tramp_flags(flags))
 		return -EINVAL;
 
+	/* Generated trampoline stack layout:
+	 *
+	 * RBP + 8         [ return address  ]
+	 * RBP + 0         [ RBP             ]
+	 *
+	 * RBP - 8         [ return value    ]  BPF_TRAMP_F_CALL_ORIG or
+	 *                                      BPF_TRAMP_F_RET_FENTRY_RET flags
+	 *
+	 *                 [ reg_argN        ]  always
+	 *                 [ ...             ]
+	 * RBP - regs_off  [ reg_arg1        ]
+	 *
+	 * RBP - ip_off    [ traced function ]  BPF_TRAMP_F_IP_ARG flag
+	 */
+
 	/* room for return value of orig_call or fentry prog */
 	save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
 	if (save_ret)
 		stack_size += 8;
 
+	regs_off = stack_size;
+
 	if (flags & BPF_TRAMP_F_IP_ARG)
 		stack_size += 8; /* room for IP address argument */
 
+	ip_off = stack_size;
+
 	if (flags & BPF_TRAMP_F_SKIP_FRAME)
 		/* skip patched call instruction and point orig_call to actual
 		 * body of the kernel function.
@@ -1981,19 +2000,14 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		/* Store IP address of the traced function:
 		 * mov rax, QWORD PTR [rbp + 8]
 		 * sub rax, X86_PATCH_SIZE
-		 * mov QWORD PTR [rbp - stack_size], rax
+		 * mov QWORD PTR [rbp - ip_off], rax
 		 */
 		emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
 		EMIT4(0x48, 0x83, 0xe8, X86_PATCH_SIZE);
-		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -stack_size);
-
-		/* Continue with stack_size for regs storage, stack will
-		 * be correctly restored with 'leave' instruction.
-		 */
-		stack_size -= 8;
+		emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -ip_off);
 	}
 
-	save_regs(m, &prog, nr_args, stack_size);
+	save_regs(m, &prog, nr_args, regs_off);
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
 		/* arg1: mov rdi, im */
@@ -2005,7 +2019,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	}
 
 	if (fentry->nr_progs)
-		if (invoke_bpf(m, &prog, fentry, stack_size,
+		if (invoke_bpf(m, &prog, fentry, regs_off,
 			       flags & BPF_TRAMP_F_RET_FENTRY_RET))
 			return -EINVAL;
 
@@ -2015,7 +2029,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 		if (!branches)
 			return -ENOMEM;
 
-		if (invoke_bpf_mod_ret(m, &prog, fmod_ret, stack_size,
+		if (invoke_bpf_mod_ret(m, &prog, fmod_ret, regs_off,
 				       branches)) {
 			ret = -EINVAL;
 			goto cleanup;
@@ -2023,7 +2037,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	}
 
 	if (flags & BPF_TRAMP_F_CALL_ORIG) {
-		restore_regs(m, &prog, nr_args, stack_size);
+		restore_regs(m, &prog, nr_args, regs_off);
 
 		/* call original function */
 		if (emit_call(&prog, orig_call, prog)) {
@@ -2053,13 +2067,13 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	}
 
 	if (fexit->nr_progs)
-		if (invoke_bpf(m, &prog, fexit, stack_size, false)) {
+		if (invoke_bpf(m, &prog, fexit, regs_off, false)) {
 			ret = -EINVAL;
 			goto cleanup;
 		}
 
 	if (flags & BPF_TRAMP_F_RESTORE_REGS)
-		restore_regs(m, &prog, nr_args, stack_size);
+		restore_regs(m, &prog, nr_args, regs_off);
 
 	/* This needs to be done regardless. If there were fmod_ret programs,
 	 * the return value is only updated on the stack and still needs to be
-- 
2.33.1



* [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers
  2021-12-04 14:06 [PATCH bpf-next 0/3] bpf: Add helpers to access traced function arguments Jiri Olsa
  2021-12-04 14:06 ` [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables Jiri Olsa
@ 2021-12-04 14:06 ` Jiri Olsa
  2021-12-06 19:39   ` John Fastabend
  2021-12-06 21:54   ` Andrii Nakryiko
  2021-12-04 14:07 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for " Jiri Olsa
  2 siblings, 2 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-12-04 14:06 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh

Adding the following helpers for tracing programs:

Get the n-th argument of the traced function:
  long bpf_get_func_arg(void *ctx, u32 n, u64 *value)

Get the return value of the traced function:
  long bpf_get_func_ret(void *ctx, u64 *value)

Get the argument count of the traced function:
  long bpf_get_func_arg_cnt(void *ctx)

The trampoline now stores the number of arguments at the
ctx-8 address, so it's easy to verify the argument index
and find the return value's position.

The function's IP address is moved behind the number of
arguments on the trampoline stack, so it is now stored at
the ctx-16 address when needed.

All helpers above are inlined by the verifier.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 arch/x86/net/bpf_jit_comp.c    | 15 ++++++-
 include/uapi/linux/bpf.h       | 28 +++++++++++++
 kernel/bpf/verifier.c          | 73 +++++++++++++++++++++++++++++++++-
 kernel/trace/bpf_trace.c       | 58 ++++++++++++++++++++++++++-
 tools/include/uapi/linux/bpf.h | 28 +++++++++++++
 5 files changed, 198 insertions(+), 4 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index b106e80e8d9c..142e6b90fa52 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1941,7 +1941,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 				void *orig_call)
 {
 	int ret, i, nr_args = m->nr_args;
-	int regs_off, ip_off, stack_size = nr_args * 8;
+	int regs_off, ip_off, args_off, stack_size = nr_args * 8;
 	struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
 	struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
 	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
@@ -1968,6 +1968,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	 *                 [ ...             ]
 	 * RBP - regs_off  [ reg_arg1        ]
 	 *
+	 * RBP - args_off  [ args count      ]  always
+	 *
 	 * RBP - ip_off    [ traced function ]  BPF_TRAMP_F_IP_ARG flag
 	 */
 
@@ -1978,6 +1980,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 
 	regs_off = stack_size;
 
+	/* args count  */
+	stack_size += 8;
+	args_off = stack_size;
+
 	if (flags & BPF_TRAMP_F_IP_ARG)
 		stack_size += 8; /* room for IP address argument */
 
@@ -1996,6 +2002,13 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
 	EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
 	EMIT1(0x53);		 /* push rbx */
 
+	/* Store number of arguments of the traced function:
+	 *   mov rax, nr_args
+	 *   mov QWORD PTR [rbp - args_off], rax
+	 */
+	emit_mov_imm64(&prog, BPF_REG_0, 0, (u32) nr_args);
+	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -args_off);
+
 	if (flags & BPF_TRAMP_F_IP_ARG) {
 		/* Store IP address of the traced function:
 		 * mov rax, QWORD PTR [rbp + 8]
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index c26871263f1f..d5a3791071d6 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4983,6 +4983,31 @@ union bpf_attr {
  *	Return
  *		The number of loops performed, **-EINVAL** for invalid **flags**,
  *		**-E2BIG** if **nr_loops** exceeds the maximum number of loops.
+ *
+ * long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
+ *	Description
+ *		Get **n**-th argument (zero based) of the traced function (for tracing programs)
+ *		returned in **value**.
+ *
+ *	Return
+ *		0 on success.
+ *		**-EINVAL** if n >= arguments count of traced function.
+ *
+ * long bpf_get_func_ret(void *ctx, u64 *value)
+ *	Description
+ *		Get return value of the traced function (for tracing programs)
+ *		in **value**.
+ *
+ *	Return
+ *		0 on success.
+ *		**-EINVAL** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.
+ *
+ * long bpf_get_func_arg_cnt(void *ctx)
+ *	Description
+ *		Get number of arguments of the traced function (for tracing programs).
+ *
+ *	Return
+ *		The number of arguments of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5167,6 +5192,9 @@ union bpf_attr {
 	FN(kallsyms_lookup_name),	\
 	FN(find_vma),			\
 	FN(loop),			\
+	FN(get_func_arg),		\
+	FN(get_func_ret),		\
+	FN(get_func_arg_cnt),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6522ffdea487..cf6853d3a8e9 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12974,6 +12974,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
 static int do_misc_fixups(struct bpf_verifier_env *env)
 {
 	struct bpf_prog *prog = env->prog;
+	enum bpf_attach_type eatype = prog->expected_attach_type;
 	bool expect_blinding = bpf_jit_blinding_enabled(prog);
 	enum bpf_prog_type prog_type = resolve_prog_type(prog);
 	struct bpf_insn *insn = prog->insnsi;
@@ -13344,11 +13345,79 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
+		/* Implement bpf_get_func_arg inline. */
+		if (prog_type == BPF_PROG_TYPE_TRACING &&
+		    insn->imm == BPF_FUNC_get_func_arg) {
+			/* Load nr_args from ctx - 8 */
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+			insn_buf[1] = BPF_JMP32_REG(BPF_JGE, BPF_REG_2, BPF_REG_0, 6);
+			insn_buf[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 3);
+			insn_buf[3] = BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1);
+			insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0);
+			insn_buf[5] = BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
+			insn_buf[6] = BPF_MOV64_IMM(BPF_REG_0, 0);
+			insn_buf[7] = BPF_JMP_A(1);
+			insn_buf[8] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
+			cnt = 9;
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta    += cnt - 1;
+			env->prog = prog = new_prog;
+			insn      = new_prog->insnsi + i + delta;
+			continue;
+		}
+
+		/* Implement bpf_get_func_ret inline. */
+		if (prog_type == BPF_PROG_TYPE_TRACING &&
+		    insn->imm == BPF_FUNC_get_func_ret) {
+			if (eatype == BPF_TRACE_FEXIT ||
+			    eatype == BPF_MODIFY_RETURN) {
+				/* Load nr_args from ctx - 8 */
+				insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+				insn_buf[1] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 3);
+				insn_buf[2] = BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1);
+				insn_buf[3] = BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
+				insn_buf[4] = BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, 0);
+				insn_buf[5] = BPF_MOV64_IMM(BPF_REG_0, 0);
+				cnt = 6;
+			} else {
+				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
+				cnt = 1;
+			}
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+			if (!new_prog)
+				return -ENOMEM;
+
+			delta    += cnt - 1;
+			env->prog = prog = new_prog;
+			insn      = new_prog->insnsi + i + delta;
+			continue;
+		}
+
+		/* Implement get_func_arg_cnt inline. */
+		if (prog_type == BPF_PROG_TYPE_TRACING &&
+		    insn->imm == BPF_FUNC_get_func_arg_cnt) {
+			/* Load nr_args from ctx - 8 */
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+
+			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
+			if (!new_prog)
+				return -ENOMEM;
+
+			env->prog = prog = new_prog;
+			insn      = new_prog->insnsi + i + delta;
+			continue;
+		}
+
 		/* Implement bpf_get_func_ip inline. */
 		if (prog_type == BPF_PROG_TYPE_TRACING &&
 		    insn->imm == BPF_FUNC_get_func_ip) {
-			/* Load IP address from ctx - 8 */
-			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
+			/* Load IP address from ctx - 16 */
+			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -16);
 
 			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
 			if (!new_prog)
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 25ea521fb8f1..d1d4617c9fd5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1012,7 +1012,7 @@ const struct bpf_func_proto bpf_snprintf_btf_proto = {
 BPF_CALL_1(bpf_get_func_ip_tracing, void *, ctx)
 {
 	/* This helper call is inlined by verifier. */
-	return ((u64 *)ctx)[-1];
+	return ((u64 *)ctx)[-2];
 }
 
 static const struct bpf_func_proto bpf_get_func_ip_proto_tracing = {
@@ -1091,6 +1091,56 @@ static const struct bpf_func_proto bpf_get_branch_snapshot_proto = {
 	.arg2_type	= ARG_CONST_SIZE_OR_ZERO,
 };
 
+BPF_CALL_3(get_func_arg, void *, ctx, u32, n, u64 *, value)
+{
+	/* This helper call is inlined by verifier. */
+	u64 nr_args = ((u64 *)ctx)[-1];
+
+	if ((u64) n >= nr_args)
+		return -EINVAL;
+	*value = ((u64 *)ctx)[n];
+	return 0;
+}
+
+static const struct bpf_func_proto bpf_get_func_arg_proto = {
+	.func		= get_func_arg,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_ANYTHING,
+	.arg3_type	= ARG_PTR_TO_LONG,
+};
+
+BPF_CALL_2(get_func_ret, void *, ctx, u64 *, value)
+{
+	/* This helper call is inlined by verifier. */
+	u64 nr_args = ((u64 *)ctx)[-1];
+
+	*value = ((u64 *)ctx)[nr_args];
+	return 0;
+}
+
+static const struct bpf_func_proto bpf_get_func_ret_proto = {
+	.func		= get_func_ret,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+	.arg2_type	= ARG_PTR_TO_LONG,
+};
+
+BPF_CALL_1(get_func_arg_cnt, void *, ctx)
+{
+	/* This helper call is inlined by verifier. */
+	return ((u64 *)ctx)[-1];
+}
+
+static const struct bpf_func_proto bpf_get_func_arg_cnt_proto = {
+	.func		= get_func_arg_cnt,
+	.gpl_only	= true,
+	.ret_type	= RET_INTEGER,
+	.arg1_type	= ARG_PTR_TO_CTX,
+};
+
 static const struct bpf_func_proto *
 bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 {
@@ -1212,6 +1262,12 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_find_vma_proto;
 	case BPF_FUNC_trace_vprintk:
 		return bpf_get_trace_vprintk_proto();
+	case BPF_FUNC_get_func_arg:
+		return &bpf_get_func_arg_proto;
+	case BPF_FUNC_get_func_ret:
+		return &bpf_get_func_ret_proto;
+	case BPF_FUNC_get_func_arg_cnt:
+		return &bpf_get_func_arg_cnt_proto;
 	default:
 		return bpf_base_func_proto(func_id);
 	}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index c26871263f1f..d5a3791071d6 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4983,6 +4983,31 @@ union bpf_attr {
  *	Return
  *		The number of loops performed, **-EINVAL** for invalid **flags**,
  *		**-E2BIG** if **nr_loops** exceeds the maximum number of loops.
+ *
+ * long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
+ *	Description
+ *		Get **n**-th argument (zero based) of the traced function (for tracing programs)
+ *		returned in **value**.
+ *
+ *	Return
+ *		0 on success.
+ *		**-EINVAL** if n >= arguments count of traced function.
+ *
+ * long bpf_get_func_ret(void *ctx, u64 *value)
+ *	Description
+ *		Get return value of the traced function (for tracing programs)
+ *		in **value**.
+ *
+ *	Return
+ *		0 on success.
+ *		**-EINVAL** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.
+ *
+ * long bpf_get_func_arg_cnt(void *ctx)
+ *	Description
+ *		Get number of arguments of the traced function (for tracing programs).
+ *
+ *	Return
+ *		The number of arguments of the traced function.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -5167,6 +5192,9 @@ union bpf_attr {
 	FN(kallsyms_lookup_name),	\
 	FN(find_vma),			\
 	FN(loop),			\
+	FN(get_func_arg),		\
+	FN(get_func_ret),		\
+	FN(get_func_arg_cnt),		\
 	/* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
-- 
2.33.1



* [PATCH bpf-next 3/3] selftests/bpf: Add tests for get_func_[arg|ret|arg_cnt] helpers
  2021-12-04 14:06 [PATCH bpf-next 0/3] bpf: Add helpers to access traced function arguments Jiri Olsa
  2021-12-04 14:06 ` [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables Jiri Olsa
  2021-12-04 14:06 ` [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers Jiri Olsa
@ 2021-12-04 14:07 ` Jiri Olsa
  2021-12-06 22:03   ` Andrii Nakryiko
  2 siblings, 1 reply; 16+ messages in thread
From: Jiri Olsa @ 2021-12-04 14:07 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh

Adding tests for get_func_[arg|ret|arg_cnt] helpers.
Using these helpers in fentry/fexit/fmod_ret programs.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
---
 .../bpf/prog_tests/get_func_args_test.c       |  38 ++++++
 .../selftests/bpf/progs/get_func_args_test.c  | 112 ++++++++++++++++++
 2 files changed, 150 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
 create mode 100644 tools/testing/selftests/bpf/progs/get_func_args_test.c

diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
new file mode 100644
index 000000000000..c24807ae4361
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "get_func_args_test.skel.h"
+
+void test_get_func_args_test(void)
+{
+	struct get_func_args_test *skel = NULL;
+	__u32 duration = 0, retval;
+	int err, prog_fd;
+
+	skel = get_func_args_test__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "get_func_args_test__open_and_load"))
+		return;
+
+	err = get_func_args_test__attach(skel);
+	if (!ASSERT_OK(err, "get_func_args_test__attach"))
+		goto cleanup;
+
+	prog_fd = bpf_program__fd(skel->progs.test1);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 0, "test_run");
+
+	prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
+	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
+				NULL, NULL, &retval, &duration);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(retval, 1234, "test_run");
+
+	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
+	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
+	ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
+	ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
+
+cleanup:
+	get_func_args_test__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/get_func_args_test.c b/tools/testing/selftests/bpf/progs/get_func_args_test.c
new file mode 100644
index 000000000000..0d0a67c849ae
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/get_func_args_test.c
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include <errno.h>
+
+char _license[] SEC("license") = "GPL";
+
+__u64 test1_result = 0;
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test1)
+{
+	__u64 cnt = bpf_get_func_arg_cnt(ctx);
+	__u64 a = 0, z = 0, ret = 0;
+	__s64 err;
+
+	test1_result = cnt == 1;
+
+	/* valid arguments */
+	err = bpf_get_func_arg(ctx, 0, &a);
+	test1_result &= err == 0 && (int) a == 1;
+
+	/* not valid argument */
+	err = bpf_get_func_arg(ctx, 1, &z);
+	test1_result &= err == -EINVAL;
+
+	/* return value fails in fentry */
+	err = bpf_get_func_ret(ctx, &ret);
+	test1_result &= err == -EINVAL;
+	return 0;
+}
+
+__u64 test2_result = 0;
+SEC("fexit/bpf_fentry_test2")
+int BPF_PROG(test2)
+{
+	__u64 cnt = bpf_get_func_arg_cnt(ctx);
+	__u64 a = 0, b = 0, z = 0, ret = 0;
+	__s64 err;
+
+	test2_result = cnt == 2;
+
+	/* valid arguments */
+	err = bpf_get_func_arg(ctx, 0, &a);
+	test2_result &= err == 0 && (int) a == 2;
+
+	err = bpf_get_func_arg(ctx, 1, &b);
+	test2_result &= err == 0 && b == 3;
+
+	/* not valid argument */
+	err = bpf_get_func_arg(ctx, 2, &z);
+	test2_result &= err == -EINVAL;
+
+	/* return value */
+	err = bpf_get_func_ret(ctx, &ret);
+	test2_result &= err == 0 && ret == 5;
+	return 0;
+}
+
+__u64 test3_result = 0;
+SEC("fmod_ret/bpf_modify_return_test")
+int BPF_PROG(fmod_ret_test, int _a, int *_b, int _ret)
+{
+	__u64 cnt = bpf_get_func_arg_cnt(ctx);
+	__u64 a = 0, b = 0, z = 0, ret = 0;
+	__s64 err;
+
+	test3_result = cnt == 2;
+
+	/* valid arguments */
+	err = bpf_get_func_arg(ctx, 0, &a);
+	test3_result &= err == 0 && (int) a == 1;
+
+	err = bpf_get_func_arg(ctx, 1, &b);
+	test3_result &= err == 0;
+
+	/* not valid argument */
+	err = bpf_get_func_arg(ctx, 2, &z);
+	test3_result &= err == -EINVAL;
+
+	/* return value */
+	err = bpf_get_func_ret(ctx, &ret);
+	test3_result &= err == 0 && ret == 0;
+	return 1234;
+}
+
+__u64 test4_result = 0;
+SEC("fexit/bpf_modify_return_test")
+int BPF_PROG(fexit_test, int _a, __u64 _b, int _ret)
+{
+	__u64 cnt = bpf_get_func_arg_cnt(ctx);
+	__u64 a = 0, b = 0, z = 0, ret = 0;
+	__s64 err;
+
+	test4_result = cnt == 2;
+
+	/* valid arguments */
+	err = bpf_get_func_arg(ctx, 0, &a);
+	test4_result &= err == 0 && (int) a == 1;
+
+	err = bpf_get_func_arg(ctx, 1, &b);
+	test4_result &= err == 0;
+
+	/* not valid argument */
+	err = bpf_get_func_arg(ctx, 2, &z);
+	test4_result &= err == -EINVAL;
+
+	/* return value */
+	err = bpf_get_func_ret(ctx, &ret);
+	test4_result &= err == 0 && ret == 1234;
+	return 0;
+}
-- 
2.33.1



* RE: [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables
  2021-12-04 14:06 ` [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables Jiri Olsa
@ 2021-12-06 19:19   ` John Fastabend
  2021-12-06 21:26   ` Andrii Nakryiko
  2021-12-06 21:41   ` Andrii Nakryiko
  2 siblings, 0 replies; 16+ messages in thread
From: John Fastabend @ 2021-12-06 19:19 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Andrii Nakryiko, netdev, bpf, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh

Jiri Olsa wrote:
> As suggested by Andrii, add variables for the register and IP
> address offsets, which makes the code clearer than abusing a
> single stack_size variable for everything.
> 
> Also describe the stack layout in a comment.
> 
> There is no functional change.
> 
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---

LGTM.

Acked-by: John Fastabend <john.fastabend@gmail.com>


* RE: [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers
  2021-12-04 14:06 ` [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers Jiri Olsa
@ 2021-12-06 19:39   ` John Fastabend
  2021-12-06 20:17     ` Jiri Olsa
  2021-12-06 21:54   ` Andrii Nakryiko
  1 sibling, 1 reply; 16+ messages in thread
From: John Fastabend @ 2021-12-06 19:39 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh

Jiri Olsa wrote:
> Adding the following helpers for tracing programs:
> 
> Get the n-th argument of the traced function:
>   long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> 
> Get the return value of the traced function:
>   long bpf_get_func_ret(void *ctx, u64 *value)
> 
> Get the argument count of the traced function:
>   long bpf_get_func_arg_cnt(void *ctx)
> 
> The trampoline now stores the number of arguments at the
> ctx-8 address, so it's easy to verify the argument index
> and find the return value's position.
> 
> The function's IP address is moved behind the number of
> arguments on the trampoline stack, so it is now stored at
> the ctx-16 address when needed.
> 
> All helpers above are inlined by the verifier.
> 
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index c26871263f1f..d5a3791071d6 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -4983,6 +4983,31 @@ union bpf_attr {
>   *	Return
>   *		The number of loops performed, **-EINVAL** for invalid **flags**,
>   *		**-E2BIG** if **nr_loops** exceeds the maximum number of loops.
> + *
> + * long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> + *	Description
> + *		Get **n**-th argument (zero based) of the traced function (for tracing programs)
> + *		returned in **value**.
> + *
> + *	Return
> + *		0 on success.
> + *		**-EINVAL** if n >= arguments count of traced function.
> + *
> + * long bpf_get_func_ret(void *ctx, u64 *value)
> + *	Description
> + *		Get return value of the traced function (for tracing programs)
> + *		in **value**.
> + *
> + *	Return
> + *		0 on success.
> + *		**-EINVAL** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.


Can we just throw a verifier error if the program type doesn't support
this? Then we get a void and there is no error case.

> + *
> + * long bpf_get_func_arg_cnt(void *ctx)
> + *	Description
> + *		Get number of arguments of the traced function (for tracing programs).
> + *
> + *	Return
> + *		The number of arguments of the traced function.
>   */
>  #define __BPF_FUNC_MAPPER(FN)		\
>  	FN(unspec),			\
> @@ -5167,6 +5192,9 @@ union bpf_attr {
>  	FN(kallsyms_lookup_name),	\
>  	FN(find_vma),			\
>  	FN(loop),			\
> +	FN(get_func_arg),		\
> +	FN(get_func_ret),		\
> +	FN(get_func_arg_cnt),		\
>  	/* */
>  
>  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6522ffdea487..cf6853d3a8e9 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12974,6 +12974,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
>  static int do_misc_fixups(struct bpf_verifier_env *env)
>  {
>  	struct bpf_prog *prog = env->prog;
> +	enum bpf_attach_type eatype = prog->expected_attach_type;
>  	bool expect_blinding = bpf_jit_blinding_enabled(prog);
>  	enum bpf_prog_type prog_type = resolve_prog_type(prog);
>  	struct bpf_insn *insn = prog->insnsi;
> @@ -13344,11 +13345,79 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>  			continue;
>  		}
>  

[...]

> +		/* Implement get_func_arg_cnt inline. */
> +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> +		    insn->imm == BPF_FUNC_get_func_arg_cnt) {
> +			/* Load nr_args from ctx - 8 */
> +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> +
> +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
> +			if (!new_prog)
> +				return -ENOMEM;

How does this handle the !x86 case? The code above only touches the x86
JIT. Perhaps it's obvious with some code digging, but it's not clear to me
from the patch description and the code here.

> +
> +			env->prog = prog = new_prog;
> +			insn      = new_prog->insnsi + i + delta;
> +			continue;
> +		}
> +


* Re: [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers
  2021-12-06 19:39   ` John Fastabend
@ 2021-12-06 20:17     ` Jiri Olsa
  0 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-12-06 20:17 UTC (permalink / raw)
  To: John Fastabend
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, netdev,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh

On Mon, Dec 06, 2021 at 11:39:00AM -0800, John Fastabend wrote:
> Jiri Olsa wrote:
> > Adding following helpers for tracing programs:
> > 
> > Get n-th argument of the traced function:
> >   long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> > 
> > Get return value of the traced function:
> >   long bpf_get_func_ret(void *ctx, u64 *value)
> > 
> > Get arguments count of the traced function:
> >   long bpf_get_func_arg_cnt(void *ctx)
> > 
> > The trampoline now stores number of arguments on ctx-8
> > address, so it's easy to verify argument index and find
> > return value argument's position.
> > 
> > Moving function ip address on the trampoline stack behind
> > the number of functions arguments, so it's now stored on
> > ctx-16 address if it's needed.
> > 
> > All helpers above are inlined by verifier.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index c26871263f1f..d5a3791071d6 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -4983,6 +4983,31 @@ union bpf_attr {
> >   *	Return
> >   *		The number of loops performed, **-EINVAL** for invalid **flags**,
> >   *		**-E2BIG** if **nr_loops** exceeds the maximum number of loops.
> > + *
> > + * long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> > + *	Description
> > + *		Get **n**-th argument (zero based) of the traced function (for tracing programs)
> > + *		returned in **value**.
> > + *
> > + *	Return
> > + *		0 on success.
> > + *		**-EINVAL** if n >= arguments count of traced function.
> > + *
> > + * long bpf_get_func_ret(void *ctx, u64 *value)
> > + *	Description
> > + *		Get return value of the traced function (for tracing programs)
> > + *		in **value**.
> > + *
> > + *	Return
> > + *		0 on success.
> > + *		**-EINVAL** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.
> 
> 
> Can we just throw a verifier error if the program type doesn't support
> this? Then we get a void and there is no error case.

we discussed this with Andrii in previous version:
  https://lore.kernel.org/bpf/CAEf4BzbauHaDDJvGpx4oCRddd4KWpb4PkxUiUJvx-CXqEN2sdQ@mail.gmail.com/

having it this way will allow us, for example, to use one program
for both fentry and fexit hooks

> 
> > + *
> > + * long bpf_get_func_arg_cnt(void *ctx)
> > + *	Description
> > + *		Get number of arguments of the traced function (for tracing programs).
> > + *
> > + *	Return
> > + *		The number of arguments of the traced function.
> >   */
> >  #define __BPF_FUNC_MAPPER(FN)		\
> >  	FN(unspec),			\
> > @@ -5167,6 +5192,9 @@ union bpf_attr {
> >  	FN(kallsyms_lookup_name),	\
> >  	FN(find_vma),			\
> >  	FN(loop),			\
> > +	FN(get_func_arg),		\
> > +	FN(get_func_ret),		\
> > +	FN(get_func_arg_cnt),		\
> >  	/* */
> >  
> >  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6522ffdea487..cf6853d3a8e9 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -12974,6 +12974,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
> >  static int do_misc_fixups(struct bpf_verifier_env *env)
> >  {
> >  	struct bpf_prog *prog = env->prog;
> > +	enum bpf_attach_type eatype = prog->expected_attach_type;
> >  	bool expect_blinding = bpf_jit_blinding_enabled(prog);
> >  	enum bpf_prog_type prog_type = resolve_prog_type(prog);
> >  	struct bpf_insn *insn = prog->insnsi;
> > @@ -13344,11 +13345,79 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> >  			continue;
> >  		}
> >  
> 
> [...]
> 
> > +		/* Implement get_func_arg_cnt inline. */
> > +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> > +		    insn->imm == BPF_FUNC_get_func_arg_cnt) {
> > +			/* Load nr_args from ctx - 8 */
> > +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > +
> > +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
> > +			if (!new_prog)
> > +				return -ENOMEM;
> 
How does this handle the !x86 case? The code above only touches the x86
jit? Perhaps it's obvious with some code digging, but it's not to me from
the patch description and code here.

right, it assumes all 3 helpers are called from the trampoline only

so I think I misplaced the verifier's bpf_func_proto 'get' for all 3 helpers;
it should be in tracing_prog_func_proto rather than in bpf_tracing_func_proto,
plus perhaps a check for the proper attach type

thanks
jirka

> 
> > +
> > +			env->prog = prog = new_prog;
> > +			insn      = new_prog->insnsi + i + delta;
> > +			continue;
> > +		}
> > +
> 



* Re: [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables
  2021-12-04 14:06 ` [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables Jiri Olsa
  2021-12-06 19:19   ` John Fastabend
@ 2021-12-06 21:26   ` Andrii Nakryiko
  2021-12-06 21:41   ` Andrii Nakryiko
  2 siblings, 0 replies; 16+ messages in thread
From: Andrii Nakryiko @ 2021-12-06 21:26 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Andrii Nakryiko, Networking, bpf, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh

On Sat, Dec 4, 2021 at 6:07 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> As suggested by Andrii, adding variables for registers and ip
> address offsets, which makes the code more clear, rather than
> abusing single stack_size variable for everything.
>
> Also describing the stack layout in the comment.
>
> There is no function change.
>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---

Thanks for the stack layout diagram!

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  arch/x86/net/bpf_jit_comp.c | 42 ++++++++++++++++++++++++-------------
>  1 file changed, 28 insertions(+), 14 deletions(-)
>

[...]


* Re: [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables
  2021-12-04 14:06 ` [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables Jiri Olsa
  2021-12-06 19:19   ` John Fastabend
  2021-12-06 21:26   ` Andrii Nakryiko
@ 2021-12-06 21:41   ` Andrii Nakryiko
  2021-12-07 14:25     ` Jiri Olsa
  2 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2021-12-06 21:41 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Andrii Nakryiko, Networking, bpf, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh

On Sat, Dec 4, 2021 at 6:07 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> As suggested by Andrii, adding variables for registers and ip
> address offsets, which makes the code more clear, rather than
> abusing single stack_size variable for everything.
>
> Also describing the stack layout in the comment.
>
> There is no function change.
>
> Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>  arch/x86/net/bpf_jit_comp.c | 42 ++++++++++++++++++++++++-------------
>  1 file changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index 1d7b0c69b644..b106e80e8d9c 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1941,7 +1941,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>                                 void *orig_call)
>  {
>         int ret, i, nr_args = m->nr_args;
> -       int stack_size = nr_args * 8;
> +       int regs_off, ip_off, stack_size = nr_args * 8;
>         struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
>         struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
>         struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
> @@ -1956,14 +1956,33 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>         if (!is_valid_bpf_tramp_flags(flags))
>                 return -EINVAL;
>
> +       /* Generated trampoline stack layout:
> +        *
> +        * RBP + 8         [ return address  ]
> +        * RBP + 0         [ RBP             ]
> +        *
> +        * RBP - 8         [ return value    ]  BPF_TRAMP_F_CALL_ORIG or
> +        *                                      BPF_TRAMP_F_RET_FENTRY_RET flags
> +        *
> +        *                 [ reg_argN        ]  always
> +        *                 [ ...             ]
> +        * RBP - regs_off  [ reg_arg1        ]
> +        *

I think it's also worth mentioning that the context passed into
fentry/fexit programs points here (makes it a bit easier to
track those ctx[-1] and ctx[-2] in the next patch).


> +        * RBP - ip_off    [ traced function ]  BPF_TRAMP_F_IP_ARG flag
> +        */
> +
>         /* room for return value of orig_call or fentry prog */
>         save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
>         if (save_ret)
>                 stack_size += 8;
>
> +       regs_off = stack_size;
> +
>         if (flags & BPF_TRAMP_F_IP_ARG)
>                 stack_size += 8; /* room for IP address argument */
>
> +       ip_off = stack_size;
> +

[...]


* Re: [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers
  2021-12-04 14:06 ` [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers Jiri Olsa
  2021-12-06 19:39   ` John Fastabend
@ 2021-12-06 21:54   ` Andrii Nakryiko
  2021-12-07 17:23     ` Jiri Olsa
  1 sibling, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2021-12-06 21:54 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh


On 12/4/21 6:06 AM, Jiri Olsa wrote:
> Adding following helpers for tracing programs:
>
> Get n-th argument of the traced function:
>    long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
>
> Get return value of the traced function:
>    long bpf_get_func_ret(void *ctx, u64 *value)
>
> Get arguments count of the traced function:
>    long bpf_get_func_arg_cnt(void *ctx)
>
> The trampoline now stores number of arguments on ctx-8
> address, so it's easy to verify argument index and find
> return value argument's position.
>
> Moving function ip address on the trampoline stack behind
> the number of functions arguments, so it's now stored on
> ctx-16 address if it's needed.
>
> All helpers above are inlined by verifier.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---


Please cc me at andrii@kernel.org email for future emails, you'll save a 
lot of trouble with replying to your emails :) Thanks!


>   arch/x86/net/bpf_jit_comp.c    | 15 ++++++-
>   include/uapi/linux/bpf.h       | 28 +++++++++++++
>   kernel/bpf/verifier.c          | 73 +++++++++++++++++++++++++++++++++-
>   kernel/trace/bpf_trace.c       | 58 ++++++++++++++++++++++++++-
>   tools/include/uapi/linux/bpf.h | 28 +++++++++++++
>   5 files changed, 198 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> index b106e80e8d9c..142e6b90fa52 100644
> --- a/arch/x86/net/bpf_jit_comp.c
> +++ b/arch/x86/net/bpf_jit_comp.c
> @@ -1941,7 +1941,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>   				void *orig_call)
>   {
>   	int ret, i, nr_args = m->nr_args;
> -	int regs_off, ip_off, stack_size = nr_args * 8;
> +	int regs_off, ip_off, args_off, stack_size = nr_args * 8;
>   	struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
>   	struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
>   	struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
> @@ -1968,6 +1968,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>   	 *                 [ ...             ]
>   	 * RBP - regs_off  [ reg_arg1        ]
>   	 *
> +	 * RBP - args_off  [ args count      ]  always
> +	 *
>   	 * RBP - ip_off    [ traced function ]  BPF_TRAMP_F_IP_ARG flag
>   	 */
>   
> @@ -1978,6 +1980,10 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>   
>   	regs_off = stack_size;
>   
> +	/* args count  */
> +	stack_size += 8;
> +	args_off = stack_size;
> +
>   	if (flags & BPF_TRAMP_F_IP_ARG)
>   		stack_size += 8; /* room for IP address argument */
>   
> @@ -1996,6 +2002,13 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
>   	EMIT4(0x48, 0x83, 0xEC, stack_size); /* sub rsp, stack_size */
>   	EMIT1(0x53);		 /* push rbx */
>   
> +	/* Store number of arguments of the traced function:
> +	 *   mov rax, nr_args
> +	 *   mov QWORD PTR [rbp - args_off], rax
> +	 */
> +	emit_mov_imm64(&prog, BPF_REG_0, 0, (u32) nr_args);
> +	emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_0, -args_off);
> +
>   	if (flags & BPF_TRAMP_F_IP_ARG) {
>   		/* Store IP address of the traced function:
>   		 * mov rax, QWORD PTR [rbp + 8]
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index c26871263f1f..d5a3791071d6 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -4983,6 +4983,31 @@ union bpf_attr {
>    *	Return
>    *		The number of loops performed, **-EINVAL** for invalid **flags**,
>    *		**-E2BIG** if **nr_loops** exceeds the maximum number of loops.
> + *
> + * long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> + *	Description
> + *		Get **n**-th argument (zero based) of the traced function (for tracing programs)
> + *		returned in **value**.
> + *
> + *	Return
> + *		0 on success.
> + *		**-EINVAL** if n >= arguments count of traced function.
> + *
> + * long bpf_get_func_ret(void *ctx, u64 *value)
> + *	Description
> + *		Get return value of the traced function (for tracing programs)
> + *		in **value**.
> + *
> + *	Return
> + *		0 on success.
> + *		**-EINVAL** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.


-EOPNOTSUPP maybe?


> + *
> + * long bpf_get_func_arg_cnt(void *ctx)
> + *	Description
> + *		Get number of arguments of the traced function (for tracing programs).
> + *
> + *	Return
> + *		The number of arguments of the traced function.
>    */
>   #define __BPF_FUNC_MAPPER(FN)		\
>   	FN(unspec),			\
> @@ -5167,6 +5192,9 @@ union bpf_attr {
>   	FN(kallsyms_lookup_name),	\
>   	FN(find_vma),			\
>   	FN(loop),			\
> +	FN(get_func_arg),		\
> +	FN(get_func_ret),		\
> +	FN(get_func_arg_cnt),		\
>   	/* */
>   
>   /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6522ffdea487..cf6853d3a8e9 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12974,6 +12974,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
>   static int do_misc_fixups(struct bpf_verifier_env *env)
>   {
>   	struct bpf_prog *prog = env->prog;
> +	enum bpf_attach_type eatype = prog->expected_attach_type;
>   	bool expect_blinding = bpf_jit_blinding_enabled(prog);
>   	enum bpf_prog_type prog_type = resolve_prog_type(prog);
>   	struct bpf_insn *insn = prog->insnsi;
> @@ -13344,11 +13345,79 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>   			continue;
>   		}
>   
> +		/* Implement bpf_get_func_arg inline. */
> +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> +		    insn->imm == BPF_FUNC_get_func_arg) {
> +			/* Load nr_args from ctx - 8 */
> +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> +			insn_buf[1] = BPF_JMP32_REG(BPF_JGE, BPF_REG_2, BPF_REG_0, 6);
> +			insn_buf[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 3);
> +			insn_buf[3] = BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1);
> +			insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0);
> +			insn_buf[5] = BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
> +			insn_buf[6] = BPF_MOV64_IMM(BPF_REG_0, 0);
> +			insn_buf[7] = BPF_JMP_A(1);
> +			insn_buf[8] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
> +			cnt = 9;
> +
> +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> +			if (!new_prog)
> +				return -ENOMEM;
> +
> +			delta    += cnt - 1;
> +			env->prog = prog = new_prog;
> +			insn      = new_prog->insnsi + i + delta;
> +			continue;
> +		}
> +
> +		/* Implement bpf_get_func_ret inline. */
> +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> +		    insn->imm == BPF_FUNC_get_func_ret) {
> +			if (eatype == BPF_TRACE_FEXIT ||
> +			    eatype == BPF_MODIFY_RETURN) {
> +				/* Load nr_args from ctx - 8 */
> +				insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> +				insn_buf[1] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 3);
> +				insn_buf[2] = BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1);
> +				insn_buf[3] = BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
> +				insn_buf[4] = BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, 0);
> +				insn_buf[5] = BPF_MOV64_IMM(BPF_REG_0, 0);
> +				cnt = 6;
> +			} else {
> +				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
> +				cnt = 1;
> +			}
> +
> +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> +			if (!new_prog)
> +				return -ENOMEM;
> +
> +			delta    += cnt - 1;
> +			env->prog = prog = new_prog;
> +			insn      = new_prog->insnsi + i + delta;
> +			continue;
> +		}
> +
> +		/* Implement get_func_arg_cnt inline. */
> +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> +		    insn->imm == BPF_FUNC_get_func_arg_cnt) {
> +			/* Load nr_args from ctx - 8 */
> +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> +
> +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
> +			if (!new_prog)
> +				return -ENOMEM;
> +
> +			env->prog = prog = new_prog;
> +			insn      = new_prog->insnsi + i + delta;
> +			continue;
> +		}


To be entirely honest, I'm not even sure we need to inline them. In 
programs that care about performance they will be called at most once. 
In others it doesn't matter. But even if they weren't, is the function 
call really such a big overhead for tracing cases? I don't mind it 
either, it's just hard to follow.


> +
>   		/* Implement bpf_get_func_ip inline. */
>   		if (prog_type == BPF_PROG_TYPE_TRACING &&
>   		    insn->imm == BPF_FUNC_get_func_ip) {
> -			/* Load IP address from ctx - 8 */
> -			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> +			/* Load IP address from ctx - 16 */
> +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -16);
>   
>   			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
>   			if (!new_prog)


[...]



* Re: [PATCH bpf-next 3/3] selftests/bpf: Add tests for get_func_[arg|ret|arg_cnt] helpers
  2021-12-04 14:07 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for " Jiri Olsa
@ 2021-12-06 22:03   ` Andrii Nakryiko
  2021-12-07 18:14     ` Jiri Olsa
  0 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2021-12-06 22:03 UTC (permalink / raw)
  To: Jiri Olsa, Alexei Starovoitov, Daniel Borkmann
  Cc: netdev, bpf, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, andrii


On 12/4/21 6:07 AM, Jiri Olsa wrote:
> Adding tests for get_func_[arg|ret|arg_cnt] helpers.
> Using these helpers in fentry/fexit/fmod_ret programs.
>
> Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> ---
>   .../bpf/prog_tests/get_func_args_test.c       |  38 ++++++
>   .../selftests/bpf/progs/get_func_args_test.c  | 112 ++++++++++++++++++
>   2 files changed, 150 insertions(+)
>   create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
>   create mode 100644 tools/testing/selftests/bpf/progs/get_func_args_test.c
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> new file mode 100644
> index 000000000000..c24807ae4361
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> @@ -0,0 +1,38 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <test_progs.h>
> +#include "get_func_args_test.skel.h"
> +
> +void test_get_func_args_test(void)
> +{
> +	struct get_func_args_test *skel = NULL;
> +	__u32 duration = 0, retval;
> +	int err, prog_fd;
> +
> +	skel = get_func_args_test__open_and_load();
> +	if (!ASSERT_OK_PTR(skel, "get_func_args_test__open_and_load"))
> +		return;
> +
> +	err = get_func_args_test__attach(skel);
> +	if (!ASSERT_OK(err, "get_func_args_test__attach"))
> +		goto cleanup;
> +
> +	prog_fd = bpf_program__fd(skel->progs.test1);
> +	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> +				NULL, NULL, &retval, &duration);
> +	ASSERT_OK(err, "test_run");
> +	ASSERT_EQ(retval, 0, "test_run");
> +
> +	prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
> +	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> +				NULL, NULL, &retval, &duration);
> +	ASSERT_OK(err, "test_run");
> +	ASSERT_EQ(retval, 1234, "test_run");


are the other two programs executed implicitly during one of those test 
runs? Can you please leave a small comment somewhere here if that's true?


> +
> +	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
> +	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
> +	ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
> +	ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
> +
> +cleanup:
> +	get_func_args_test__destroy(skel);
> +}
> diff --git a/tools/testing/selftests/bpf/progs/get_func_args_test.c b/tools/testing/selftests/bpf/progs/get_func_args_test.c
> new file mode 100644
> index 000000000000..0d0a67c849ae
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/get_func_args_test.c
> @@ -0,0 +1,112 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/bpf.h>
> +#include <bpf/bpf_helpers.h>
> +#include <bpf/bpf_tracing.h>
> +#include <errno.h>
> +
> +char _license[] SEC("license") = "GPL";
> +
> +__u64 test1_result = 0;
> +SEC("fentry/bpf_fentry_test1")
> +int BPF_PROG(test1)
> +{
> +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> +	__u64 a = 0, z = 0, ret = 0;
> +	__s64 err;
> +
> +	test1_result = cnt == 1;
> +
> +	/* valid arguments */
> +	err = bpf_get_func_arg(ctx, 0, &a);
> +	test1_result &= err == 0 && (int) a == 1;


int cast unnecessary? but some ()'s wouldn't hurt...


> +
> +	/* not valid argument */
> +	err = bpf_get_func_arg(ctx, 1, &z);
> +	test1_result &= err == -EINVAL;
> +
> +	/* return value fails in fentry */
> +	err = bpf_get_func_ret(ctx, &ret);
> +	test1_result &= err == -EINVAL;
> +	return 0;
> +}
> +
> +__u64 test2_result = 0;
> +SEC("fexit/bpf_fentry_test2")
> +int BPF_PROG(test2)
> +{
> +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> +	__u64 a = 0, b = 0, z = 0, ret = 0;
> +	__s64 err;
> +
> +	test2_result = cnt == 2;
> +
> +	/* valid arguments */
> +	err = bpf_get_func_arg(ctx, 0, &a);
> +	test2_result &= err == 0 && (int) a == 2;
> +
> +	err = bpf_get_func_arg(ctx, 1, &b);
> +	test2_result &= err == 0 && b == 3;
> +
> +	/* not valid argument */
> +	err = bpf_get_func_arg(ctx, 2, &z);
> +	test2_result &= err == -EINVAL;
> +
> +	/* return value */
> +	err = bpf_get_func_ret(ctx, &ret);
> +	test2_result &= err == 0 && ret == 5;
> +	return 0;
> +}
> +
> +__u64 test3_result = 0;
> +SEC("fmod_ret/bpf_modify_return_test")
> +int BPF_PROG(fmod_ret_test, int _a, int *_b, int _ret)
> +{
> +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> +	__u64 a = 0, b = 0, z = 0, ret = 0;
> +	__s64 err;
> +
> +	test3_result = cnt == 2;
> +
> +	/* valid arguments */
> +	err = bpf_get_func_arg(ctx, 0, &a);
> +	test3_result &= err == 0 && (int) a == 1;
> +
> +	err = bpf_get_func_arg(ctx, 1, &b);
> +	test3_result &= err == 0;


why no checking of b value here?


> +
> +	/* not valid argument */
> +	err = bpf_get_func_arg(ctx, 2, &z);
> +	test3_result &= err == -EINVAL;
> +
> +	/* return value */
> +	err = bpf_get_func_ret(ctx, &ret);
> +	test3_result &= err == 0 && ret == 0;
> +	return 1234;
> +}
> +
> +__u64 test4_result = 0;
> +SEC("fexit/bpf_modify_return_test")
> +int BPF_PROG(fexit_test, int _a, __u64 _b, int _ret)
> +{
> +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> +	__u64 a = 0, b = 0, z = 0, ret = 0;
> +	__s64 err;
> +
> +	test4_result = cnt == 2;
> +
> +	/* valid arguments */
> +	err = bpf_get_func_arg(ctx, 0, &a);
> +	test4_result &= err == 0 && (int) a == 1;
> +
> +	err = bpf_get_func_arg(ctx, 1, &b);
> +	test4_result &= err == 0;


same, for consistency, b should have been checked, no?


> +
> +	/* not valid argument */
> +	err = bpf_get_func_arg(ctx, 2, &z);
> +	test4_result &= err == -EINVAL;
> +
> +	/* return value */
> +	err = bpf_get_func_ret(ctx, &ret);
> +	test4_result &= err == 0 && ret == 1234;
> +	return 0;
> +}


* Re: [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables
  2021-12-06 21:41   ` Andrii Nakryiko
@ 2021-12-07 14:25     ` Jiri Olsa
  0 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-12-07 14:25 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Andrii Nakryiko, Networking, bpf, Martin KaFai Lau, Song Liu,
	Yonghong Song, John Fastabend, KP Singh

On Mon, Dec 06, 2021 at 01:41:15PM -0800, Andrii Nakryiko wrote:
> On Sat, Dec 4, 2021 at 6:07 AM Jiri Olsa <jolsa@redhat.com> wrote:
> >
> > As suggested by Andrii, adding variables for registers and ip
> > address offsets, which makes the code more clear, rather than
> > abusing single stack_size variable for everything.
> >
> > Also describing the stack layout in the comment.
> >
> > There is no function change.
> >
> > Suggested-by: Andrii Nakryiko <andrii@kernel.org>
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >  arch/x86/net/bpf_jit_comp.c | 42 ++++++++++++++++++++++++-------------
> >  1 file changed, 28 insertions(+), 14 deletions(-)
> >
> > diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
> > index 1d7b0c69b644..b106e80e8d9c 100644
> > --- a/arch/x86/net/bpf_jit_comp.c
> > +++ b/arch/x86/net/bpf_jit_comp.c
> > @@ -1941,7 +1941,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >                                 void *orig_call)
> >  {
> >         int ret, i, nr_args = m->nr_args;
> > -       int stack_size = nr_args * 8;
> > +       int regs_off, ip_off, stack_size = nr_args * 8;
> >         struct bpf_tramp_progs *fentry = &tprogs[BPF_TRAMP_FENTRY];
> >         struct bpf_tramp_progs *fexit = &tprogs[BPF_TRAMP_FEXIT];
> >         struct bpf_tramp_progs *fmod_ret = &tprogs[BPF_TRAMP_MODIFY_RETURN];
> > @@ -1956,14 +1956,33 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
> >         if (!is_valid_bpf_tramp_flags(flags))
> >                 return -EINVAL;
> >
> > +       /* Generated trampoline stack layout:
> > +        *
> > +        * RBP + 8         [ return address  ]
> > +        * RBP + 0         [ RBP             ]
> > +        *
> > +        * RBP - 8         [ return value    ]  BPF_TRAMP_F_CALL_ORIG or
> > +        *                                      BPF_TRAMP_F_RET_FENTRY_RET flags
> > +        *
> > +        *                 [ reg_argN        ]  always
> > +        *                 [ ...             ]
> > +        * RBP - regs_off  [ reg_arg1        ]
> > +        *
> 
> I think it's also worth mentioning that the context passed into
> fentry/fexit programs points here (makes it a bit easier to
> track those ctx[-1] and ctx[-2] in the next patch).

ok, jirka

> 
> 
> > +        * RBP - ip_off    [ traced function ]  BPF_TRAMP_F_IP_ARG flag
> > +        */
> > +
> >         /* room for return value of orig_call or fentry prog */
> >         save_ret = flags & (BPF_TRAMP_F_CALL_ORIG | BPF_TRAMP_F_RET_FENTRY_RET);
> >         if (save_ret)
> >                 stack_size += 8;
> >
> > +       regs_off = stack_size;
> > +
> >         if (flags & BPF_TRAMP_F_IP_ARG)
> >                 stack_size += 8; /* room for IP address argument */
> >
> > +       ip_off = stack_size;
> > +
> 
> [...]
> 



* Re: [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers
  2021-12-06 21:54   ` Andrii Nakryiko
@ 2021-12-07 17:23     ` Jiri Olsa
  0 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-12-07 17:23 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, netdev, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh

On Mon, Dec 06, 2021 at 01:54:53PM -0800, Andrii Nakryiko wrote:
> 
> On 12/4/21 6:06 AM, Jiri Olsa wrote:
> > Adding following helpers for tracing programs:
> > 
> > Get n-th argument of the traced function:
> >    long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> > 
> > Get return value of the traced function:
> >    long bpf_get_func_ret(void *ctx, u64 *value)
> > 
> > Get arguments count of the traced function:
> >    long bpf_get_func_arg_cnt(void *ctx)
> > 
> > The trampoline now stores number of arguments on ctx-8
> > address, so it's easy to verify argument index and find
> > return value argument's position.
> > 
> > Moving function ip address on the trampoline stack behind
> > the number of functions arguments, so it's now stored on
> > ctx-16 address if it's needed.
> > 
> > All helpers above are inlined by verifier.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> 
> 
> Please cc me at andrii@kernel.org email for future emails, you'll save a lot
> of trouble with replying to your emails :) Thanks!

ugh, updated


SNIP
 
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index c26871263f1f..d5a3791071d6 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -4983,6 +4983,31 @@ union bpf_attr {
> >    *	Return
> >    *		The number of loops performed, **-EINVAL** for invalid **flags**,
> >    *		**-E2BIG** if **nr_loops** exceeds the maximum number of loops.
> > + *
> > + * long bpf_get_func_arg(void *ctx, u32 n, u64 *value)
> > + *	Description
> > + *		Get **n**-th argument (zero based) of the traced function (for tracing programs)
> > + *		returned in **value**.
> > + *
> > + *	Return
> > + *		0 on success.
> > + *		**-EINVAL** if n >= arguments count of traced function.
> > + *
> > + * long bpf_get_func_ret(void *ctx, u64 *value)
> > + *	Description
> > + *		Get return value of the traced function (for tracing programs)
> > + *		in **value**.
> > + *
> > + *	Return
> > + *		0 on success.
> > + *		**-EINVAL** for tracing programs other than BPF_TRACE_FEXIT or BPF_MODIFY_RETURN.
> 
> 
> -EOPNOTSUPP maybe?

ok

> 
> 
> > + *
> > + * long bpf_get_func_arg_cnt(void *ctx)
> > + *	Description
> > + *		Get number of arguments of the traced function (for tracing programs).
> > + *
> > + *	Return
> > + *		The number of arguments of the traced function.
> >    */
> >   #define __BPF_FUNC_MAPPER(FN)		\
> >   	FN(unspec),			\
> > @@ -5167,6 +5192,9 @@ union bpf_attr {
> >   	FN(kallsyms_lookup_name),	\
> >   	FN(find_vma),			\
> >   	FN(loop),			\
> > +	FN(get_func_arg),		\
> > +	FN(get_func_ret),		\
> > +	FN(get_func_arg_cnt),		\
> >   	/* */
> >   /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> > diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> > index 6522ffdea487..cf6853d3a8e9 100644
> > --- a/kernel/bpf/verifier.c
> > +++ b/kernel/bpf/verifier.c
> > @@ -12974,6 +12974,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
> >   static int do_misc_fixups(struct bpf_verifier_env *env)
> >   {
> >   	struct bpf_prog *prog = env->prog;
> > +	enum bpf_attach_type eatype = prog->expected_attach_type;
> >   	bool expect_blinding = bpf_jit_blinding_enabled(prog);
> >   	enum bpf_prog_type prog_type = resolve_prog_type(prog);
> >   	struct bpf_insn *insn = prog->insnsi;
> > @@ -13344,11 +13345,79 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
> >   			continue;
> >   		}
> > +		/* Implement bpf_get_func_arg inline. */
> > +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> > +		    insn->imm == BPF_FUNC_get_func_arg) {
> > +			/* Load nr_args from ctx - 8 */
> > +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > +			insn_buf[1] = BPF_JMP32_REG(BPF_JGE, BPF_REG_2, BPF_REG_0, 6);
> > +			insn_buf[2] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 3);
> > +			insn_buf[3] = BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1);
> > +			insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0);
> > +			insn_buf[5] = BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
> > +			insn_buf[6] = BPF_MOV64_IMM(BPF_REG_0, 0);
> > +			insn_buf[7] = BPF_JMP_A(1);
> > +			insn_buf[8] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
> > +			cnt = 9;
> > +
> > +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> > +			if (!new_prog)
> > +				return -ENOMEM;
> > +
> > +			delta    += cnt - 1;
> > +			env->prog = prog = new_prog;
> > +			insn      = new_prog->insnsi + i + delta;
> > +			continue;
> > +		}
> > +
> > +		/* Implement bpf_get_func_ret inline. */
> > +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> > +		    insn->imm == BPF_FUNC_get_func_ret) {
> > +			if (eatype == BPF_TRACE_FEXIT ||
> > +			    eatype == BPF_MODIFY_RETURN) {
> > +				/* Load nr_args from ctx - 8 */
> > +				insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > +				insn_buf[1] = BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 3);
> > +				insn_buf[2] = BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1);
> > +				insn_buf[3] = BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
> > +				insn_buf[4] = BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, 0);
> > +				insn_buf[5] = BPF_MOV64_IMM(BPF_REG_0, 0);
> > +				cnt = 6;
> > +			} else {
> > +				insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
> > +				cnt = 1;
> > +			}
> > +
> > +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
> > +			if (!new_prog)
> > +				return -ENOMEM;
> > +
> > +			delta    += cnt - 1;
> > +			env->prog = prog = new_prog;
> > +			insn      = new_prog->insnsi + i + delta;
> > +			continue;
> > +		}
> > +
> > +		/* Implement get_func_arg_cnt inline. */
> > +		if (prog_type == BPF_PROG_TYPE_TRACING &&
> > +		    insn->imm == BPF_FUNC_get_func_arg_cnt) {
> > +			/* Load nr_args from ctx - 8 */
> > +			insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8);
> > +
> > +			new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, 1);
> > +			if (!new_prog)
> > +				return -ENOMEM;
> > +
> > +			env->prog = prog = new_prog;
> > +			insn      = new_prog->insnsi + i + delta;
> > +			continue;
> > +		}
> 
> 
> To be entirely honest, I'm not even sure we need to inline them. In programs
> that care about performance they will be called at most once. In others it
> doesn't matter. But even if they weren't, is the function call really such a
> big overhead for tracing cases? I don't mind it either, I just can hardly
> follow it.

maybe just inline get_func_arg_cnt, because it's just one instruction,
for the other 2 I don't mind skipping the inline

jirka


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH bpf-next 3/3] selftests/bpf: Add tests for get_func_[arg|ret|arg_cnt] helpers
  2021-12-06 22:03   ` Andrii Nakryiko
@ 2021-12-07 18:14     ` Jiri Olsa
  2021-12-07 22:54       ` Andrii Nakryiko
  0 siblings, 1 reply; 16+ messages in thread
From: Jiri Olsa @ 2021-12-07 18:14 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Alexei Starovoitov, Daniel Borkmann, netdev, bpf,
	Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, andrii

On Mon, Dec 06, 2021 at 02:03:54PM -0800, Andrii Nakryiko wrote:
> 
> On 12/4/21 6:07 AM, Jiri Olsa wrote:
> > Adding tests for get_func_[arg|ret|arg_cnt] helpers.
> > Using these helpers in fentry/fexit/fmod_ret programs.
> > 
> > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > ---
> >   .../bpf/prog_tests/get_func_args_test.c       |  38 ++++++
> >   .../selftests/bpf/progs/get_func_args_test.c  | 112 ++++++++++++++++++
> >   2 files changed, 150 insertions(+)
> >   create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> >   create mode 100644 tools/testing/selftests/bpf/progs/get_func_args_test.c
> > 
> > diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> > new file mode 100644
> > index 000000000000..c24807ae4361
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> > @@ -0,0 +1,38 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include <test_progs.h>
> > +#include "get_func_args_test.skel.h"
> > +
> > +void test_get_func_args_test(void)
> > +{
> > +	struct get_func_args_test *skel = NULL;
> > +	__u32 duration = 0, retval;
> > +	int err, prog_fd;
> > +
> > +	skel = get_func_args_test__open_and_load();
> > +	if (!ASSERT_OK_PTR(skel, "get_func_args_test__open_and_load"))
> > +		return;
> > +
> > +	err = get_func_args_test__attach(skel);
> > +	if (!ASSERT_OK(err, "get_func_args_test__attach"))
> > +		goto cleanup;
> > +
> > +	prog_fd = bpf_program__fd(skel->progs.test1);
> > +	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> > +				NULL, NULL, &retval, &duration);
> > +	ASSERT_OK(err, "test_run");
> > +	ASSERT_EQ(retval, 0, "test_run");
> > +
> > +	prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
> > +	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> > +				NULL, NULL, &retval, &duration);
> > +	ASSERT_OK(err, "test_run");
> > +	ASSERT_EQ(retval, 1234, "test_run");
> 
> 
> are the other two programs executed implicitly during one of those test
> runs? Can you please leave a small comment somewhere here if that's true?

test1 triggers all the bpf_fentry_test* fentry/fexit programs,
fmod_ret_test triggers the rest; I'll put that in a comment

> 
> 
> > +
> > +	ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
> > +	ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
> > +	ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
> > +	ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
> > +
> > +cleanup:
> > +	get_func_args_test__destroy(skel);
> > +}
> > diff --git a/tools/testing/selftests/bpf/progs/get_func_args_test.c b/tools/testing/selftests/bpf/progs/get_func_args_test.c
> > new file mode 100644
> > index 000000000000..0d0a67c849ae
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/get_func_args_test.c
> > @@ -0,0 +1,112 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include <linux/bpf.h>
> > +#include <bpf/bpf_helpers.h>
> > +#include <bpf/bpf_tracing.h>
> > +#include <errno.h>
> > +
> > +char _license[] SEC("license") = "GPL";
> > +
> > +__u64 test1_result = 0;
> > +SEC("fentry/bpf_fentry_test1")
> > +int BPF_PROG(test1)
> > +{
> > +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> > +	__u64 a = 0, z = 0, ret = 0;
> > +	__s64 err;
> > +
> > +	test1_result = cnt == 1;
> > +
> > +	/* valid arguments */
> > +	err = bpf_get_func_arg(ctx, 0, &a);
> > +	test1_result &= err == 0 && (int) a == 1;
> 
> 
> int cast unnecessary? but some ()'s wouldn't hurt...

it is necessary, 'a' is an int and the trampoline saves it with a 32-bit
register, like:

  mov    %edi,-0x8(%rbp)

so the upper 4 bytes are not zeroed

> 
> 
> > +
> > +	/* not valid argument */
> > +	err = bpf_get_func_arg(ctx, 1, &z);
> > +	test1_result &= err == -EINVAL;
> > +
> > +	/* return value fails in fentry */
> > +	err = bpf_get_func_ret(ctx, &ret);
> > +	test1_result &= err == -EINVAL;
> > +	return 0;
> > +}
> > +
> > +__u64 test2_result = 0;
> > +SEC("fexit/bpf_fentry_test2")
> > +int BPF_PROG(test2)
> > +{
> > +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> > +	__u64 a = 0, b = 0, z = 0, ret = 0;
> > +	__s64 err;
> > +
> > +	test2_result = cnt == 2;
> > +
> > +	/* valid arguments */
> > +	err = bpf_get_func_arg(ctx, 0, &a);
> > +	test2_result &= err == 0 && (int) a == 2;
> > +
> > +	err = bpf_get_func_arg(ctx, 1, &b);
> > +	test2_result &= err == 0 && b == 3;
> > +
> > +	/* not valid argument */
> > +	err = bpf_get_func_arg(ctx, 2, &z);
> > +	test2_result &= err == -EINVAL;
> > +
> > +	/* return value */
> > +	err = bpf_get_func_ret(ctx, &ret);
> > +	test2_result &= err == 0 && ret == 5;
> > +	return 0;
> > +}
> > +
> > +__u64 test3_result = 0;
> > +SEC("fmod_ret/bpf_modify_return_test")
> > +int BPF_PROG(fmod_ret_test, int _a, int *_b, int _ret)
> > +{
> > +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> > +	__u64 a = 0, b = 0, z = 0, ret = 0;
> > +	__s64 err;
> > +
> > +	test3_result = cnt == 2;
> > +
> > +	/* valid arguments */
> > +	err = bpf_get_func_arg(ctx, 0, &a);
> > +	test3_result &= err == 0 && (int) a == 1;
> > +
> > +	err = bpf_get_func_arg(ctx, 1, &b);
> > +	test3_result &= err == 0;
> 
> 
> why no checking of b value here?

right, ok

> 
> > +
> > +	/* not valid argument */
> > +	err = bpf_get_func_arg(ctx, 2, &z);
> > +	test3_result &= err == -EINVAL;
> > +
> > +	/* return value */
> > +	err = bpf_get_func_ret(ctx, &ret);
> > +	test3_result &= err == 0 && ret == 0;
> > +	return 1234;
> > +}
> > +
> > +__u64 test4_result = 0;
> > +SEC("fexit/bpf_modify_return_test")
> > +int BPF_PROG(fexit_test, int _a, __u64 _b, int _ret)
> > +{
> > +	__u64 cnt = bpf_get_func_arg_cnt(ctx);
> > +	__u64 a = 0, b = 0, z = 0, ret = 0;
> > +	__s64 err;
> > +
> > +	test4_result = cnt == 2;
> > +
> > +	/* valid arguments */
> > +	err = bpf_get_func_arg(ctx, 0, &a);
> > +	test4_result &= err == 0 && (int) a == 1;
> > +
> > +	err = bpf_get_func_arg(ctx, 1, &b);
> > +	test4_result &= err == 0;
> 
> 
> same, for consistency, b should have been checked, no?

ok

thanks,
jirka



* Re: [PATCH bpf-next 3/3] selftests/bpf: Add tests for get_func_[arg|ret|arg_cnt] helpers
  2021-12-07 18:14     ` Jiri Olsa
@ 2021-12-07 22:54       ` Andrii Nakryiko
  2021-12-08 16:38         ` Jiri Olsa
  0 siblings, 1 reply; 16+ messages in thread
From: Andrii Nakryiko @ 2021-12-07 22:54 UTC (permalink / raw)
  To: Jiri Olsa
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Andrii Nakryiko

On Tue, Dec 7, 2021 at 10:14 AM Jiri Olsa <jolsa@redhat.com> wrote:
>
> On Mon, Dec 06, 2021 at 02:03:54PM -0800, Andrii Nakryiko wrote:
> >
> > On 12/4/21 6:07 AM, Jiri Olsa wrote:
> > > Adding tests for get_func_[arg|ret|arg_cnt] helpers.
> > > Using these helpers in fentry/fexit/fmod_ret programs.
> > >
> > > Signed-off-by: Jiri Olsa <jolsa@kernel.org>
> > > ---
> > >   .../bpf/prog_tests/get_func_args_test.c       |  38 ++++++
> > >   .../selftests/bpf/progs/get_func_args_test.c  | 112 ++++++++++++++++++
> > >   2 files changed, 150 insertions(+)
> > >   create mode 100644 tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> > >   create mode 100644 tools/testing/selftests/bpf/progs/get_func_args_test.c
> > >
> > > diff --git a/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> > > new file mode 100644
> > > index 000000000000..c24807ae4361
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/bpf/prog_tests/get_func_args_test.c
> > > @@ -0,0 +1,38 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +#include <test_progs.h>
> > > +#include "get_func_args_test.skel.h"
> > > +
> > > +void test_get_func_args_test(void)
> > > +{
> > > +   struct get_func_args_test *skel = NULL;
> > > +   __u32 duration = 0, retval;
> > > +   int err, prog_fd;
> > > +
> > > +   skel = get_func_args_test__open_and_load();
> > > +   if (!ASSERT_OK_PTR(skel, "get_func_args_test__open_and_load"))
> > > +           return;
> > > +
> > > +   err = get_func_args_test__attach(skel);
> > > +   if (!ASSERT_OK(err, "get_func_args_test__attach"))
> > > +           goto cleanup;
> > > +
> > > +   prog_fd = bpf_program__fd(skel->progs.test1);
> > > +   err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> > > +                           NULL, NULL, &retval, &duration);
> > > +   ASSERT_OK(err, "test_run");
> > > +   ASSERT_EQ(retval, 0, "test_run");
> > > +
> > > +   prog_fd = bpf_program__fd(skel->progs.fmod_ret_test);
> > > +   err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
> > > +                           NULL, NULL, &retval, &duration);
> > > +   ASSERT_OK(err, "test_run");
> > > +   ASSERT_EQ(retval, 1234, "test_run");
> >
> >
> > are the other two programs executed implicitly during one of those test
> > runs? Can you please leave a small comment somewhere here if that's true?
>
> test1 triggers all the bpf_fentry_test* fentry/fexits
> fmod_ret_test triggers the rest, I'll put it in comment
>
> >
> >
> > > +
> > > +   ASSERT_EQ(skel->bss->test1_result, 1, "test1_result");
> > > +   ASSERT_EQ(skel->bss->test2_result, 1, "test2_result");
> > > +   ASSERT_EQ(skel->bss->test3_result, 1, "test3_result");
> > > +   ASSERT_EQ(skel->bss->test4_result, 1, "test4_result");
> > > +
> > > +cleanup:
> > > +   get_func_args_test__destroy(skel);
> > > +}
> > > diff --git a/tools/testing/selftests/bpf/progs/get_func_args_test.c b/tools/testing/selftests/bpf/progs/get_func_args_test.c
> > > new file mode 100644
> > > index 000000000000..0d0a67c849ae
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/bpf/progs/get_func_args_test.c
> > > @@ -0,0 +1,112 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +#include <linux/bpf.h>
> > > +#include <bpf/bpf_helpers.h>
> > > +#include <bpf/bpf_tracing.h>
> > > +#include <errno.h>
> > > +
> > > +char _license[] SEC("license") = "GPL";
> > > +
> > > +__u64 test1_result = 0;
> > > +SEC("fentry/bpf_fentry_test1")
> > > +int BPF_PROG(test1)
> > > +{
> > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > +   __u64 a = 0, z = 0, ret = 0;
> > > +   __s64 err;
> > > +
> > > +   test1_result = cnt == 1;
> > > +
> > > +   /* valid arguments */
> > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > +   test1_result &= err == 0 && (int) a == 1;
> >
> >
> > int cast unnecessary? but some ()'s wouldn't hurt...
>
> it is, 'a' is int and trampoline saves it with 32-bit register like:
>
>   mov    %edi,-0x8(%rbp)
>
> so the upper 4 bytes are not zeroed

oh, this is definitely worth a comment, it's quite a big gotcha we'll
need to remember


>
> >
> >
> > > +
> > > +   /* not valid argument */
> > > +   err = bpf_get_func_arg(ctx, 1, &z);
> > > +   test1_result &= err == -EINVAL;
> > > +
> > > +   /* return value fails in fentry */
> > > +   err = bpf_get_func_ret(ctx, &ret);
> > > +   test1_result &= err == -EINVAL;
> > > +   return 0;
> > > +}
> > > +
> > > +__u64 test2_result = 0;
> > > +SEC("fexit/bpf_fentry_test2")
> > > +int BPF_PROG(test2)
> > > +{
> > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > +   __u64 a = 0, b = 0, z = 0, ret = 0;
> > > +   __s64 err;
> > > +
> > > +   test2_result = cnt == 2;
> > > +
> > > +   /* valid arguments */
> > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > +   test2_result &= err == 0 && (int) a == 2;
> > > +
> > > +   err = bpf_get_func_arg(ctx, 1, &b);
> > > +   test2_result &= err == 0 && b == 3;
> > > +
> > > +   /* not valid argument */
> > > +   err = bpf_get_func_arg(ctx, 2, &z);
> > > +   test2_result &= err == -EINVAL;
> > > +
> > > +   /* return value */
> > > +   err = bpf_get_func_ret(ctx, &ret);
> > > +   test2_result &= err == 0 && ret == 5;
> > > +   return 0;
> > > +}
> > > +
> > > +__u64 test3_result = 0;
> > > +SEC("fmod_ret/bpf_modify_return_test")
> > > +int BPF_PROG(fmod_ret_test, int _a, int *_b, int _ret)
> > > +{
> > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > +   __u64 a = 0, b = 0, z = 0, ret = 0;
> > > +   __s64 err;
> > > +
> > > +   test3_result = cnt == 2;
> > > +
> > > +   /* valid arguments */
> > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > +   test3_result &= err == 0 && (int) a == 1;
> > > +
> > > +   err = bpf_get_func_arg(ctx, 1, &b);
> > > +   test3_result &= err == 0;
> >
> >
> > why no checking of b value here?
>
> right, ok
>
> >
> > > +
> > > +   /* not valid argument */
> > > +   err = bpf_get_func_arg(ctx, 2, &z);
> > > +   test3_result &= err == -EINVAL;
> > > +
> > > +   /* return value */
> > > +   err = bpf_get_func_ret(ctx, &ret);
> > > +   test3_result &= err == 0 && ret == 0;
> > > +   return 1234;
> > > +}
> > > +
> > > +__u64 test4_result = 0;
> > > +SEC("fexit/bpf_modify_return_test")
> > > +int BPF_PROG(fexit_test, int _a, __u64 _b, int _ret)
> > > +{
> > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > +   __u64 a = 0, b = 0, z = 0, ret = 0;
> > > +   __s64 err;
> > > +
> > > +   test4_result = cnt == 2;
> > > +
> > > +   /* valid arguments */
> > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > +   test4_result &= err == 0 && (int) a == 1;
> > > +
> > > +   err = bpf_get_func_arg(ctx, 1, &b);
> > > +   test4_result &= err == 0;
> >
> >
> > same, for consistency, b should have been checked, no?
>
> ok
>
> thanks,
> jirka
>


* Re: [PATCH bpf-next 3/3] selftests/bpf: Add tests for get_func_[arg|ret|arg_cnt] helpers
  2021-12-07 22:54       ` Andrii Nakryiko
@ 2021-12-08 16:38         ` Jiri Olsa
  0 siblings, 0 replies; 16+ messages in thread
From: Jiri Olsa @ 2021-12-08 16:38 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, Alexei Starovoitov, Daniel Borkmann, Networking,
	bpf, Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, Andrii Nakryiko

On Tue, Dec 07, 2021 at 02:54:33PM -0800, Andrii Nakryiko wrote:

SNIP

> > > > +__u64 test1_result = 0;
> > > > +SEC("fentry/bpf_fentry_test1")
> > > > +int BPF_PROG(test1)
> > > > +{
> > > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > > +   __u64 a = 0, z = 0, ret = 0;
> > > > +   __s64 err;
> > > > +
> > > > +   test1_result = cnt == 1;
> > > > +
> > > > +   /* valid arguments */
> > > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > > +   test1_result &= err == 0 && (int) a == 1;
> > >
> > >
> > > int cast unnecessary? but some ()'s wouldn't hurt...
> >
> > it is, 'a' is int and trampoline saves it with 32-bit register like:
> >
> >   mov    %edi,-0x8(%rbp)
> >
> > so the upper 4 bytes are not zeroed
> 
> oh, this is definitely worth a comment, it's quite a big gotcha we'll
> need to remember


ok, I will add a comment for that

jirka

> 
> 
> >
> > >
> > >
> > > > +
> > > > +   /* not valid argument */
> > > > +   err = bpf_get_func_arg(ctx, 1, &z);
> > > > +   test1_result &= err == -EINVAL;
> > > > +
> > > > +   /* return value fails in fentry */
> > > > +   err = bpf_get_func_ret(ctx, &ret);
> > > > +   test1_result &= err == -EINVAL;
> > > > +   return 0;
> > > > +}
> > > > +
> > > > +__u64 test2_result = 0;
> > > > +SEC("fexit/bpf_fentry_test2")
> > > > +int BPF_PROG(test2)
> > > > +{
> > > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > > +   __u64 a = 0, b = 0, z = 0, ret = 0;
> > > > +   __s64 err;
> > > > +
> > > > +   test2_result = cnt == 2;
> > > > +
> > > > +   /* valid arguments */
> > > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > > +   test2_result &= err == 0 && (int) a == 2;
> > > > +
> > > > +   err = bpf_get_func_arg(ctx, 1, &b);
> > > > +   test2_result &= err == 0 && b == 3;
> > > > +
> > > > +   /* not valid argument */
> > > > +   err = bpf_get_func_arg(ctx, 2, &z);
> > > > +   test2_result &= err == -EINVAL;
> > > > +
> > > > +   /* return value */
> > > > +   err = bpf_get_func_ret(ctx, &ret);
> > > > +   test2_result &= err == 0 && ret == 5;
> > > > +   return 0;
> > > > +}
> > > > +
> > > > +__u64 test3_result = 0;
> > > > +SEC("fmod_ret/bpf_modify_return_test")
> > > > +int BPF_PROG(fmod_ret_test, int _a, int *_b, int _ret)
> > > > +{
> > > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > > +   __u64 a = 0, b = 0, z = 0, ret = 0;
> > > > +   __s64 err;
> > > > +
> > > > +   test3_result = cnt == 2;
> > > > +
> > > > +   /* valid arguments */
> > > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > > +   test3_result &= err == 0 && (int) a == 1;
> > > > +
> > > > +   err = bpf_get_func_arg(ctx, 1, &b);
> > > > +   test3_result &= err == 0;
> > >
> > >
> > > why no checking of b value here?
> >
> > right, ok
> >
> > >
> > > > +
> > > > +   /* not valid argument */
> > > > +   err = bpf_get_func_arg(ctx, 2, &z);
> > > > +   test3_result &= err == -EINVAL;
> > > > +
> > > > +   /* return value */
> > > > +   err = bpf_get_func_ret(ctx, &ret);
> > > > +   test3_result &= err == 0 && ret == 0;
> > > > +   return 1234;
> > > > +}
> > > > +
> > > > +__u64 test4_result = 0;
> > > > +SEC("fexit/bpf_modify_return_test")
> > > > +int BPF_PROG(fexit_test, int _a, __u64 _b, int _ret)
> > > > +{
> > > > +   __u64 cnt = bpf_get_func_arg_cnt(ctx);
> > > > +   __u64 a = 0, b = 0, z = 0, ret = 0;
> > > > +   __s64 err;
> > > > +
> > > > +   test4_result = cnt == 2;
> > > > +
> > > > +   /* valid arguments */
> > > > +   err = bpf_get_func_arg(ctx, 0, &a);
> > > > +   test4_result &= err == 0 && (int) a == 1;
> > > > +
> > > > +   err = bpf_get_func_arg(ctx, 1, &b);
> > > > +   test4_result &= err == 0;
> > >
> > >
> > > same, for consistency, b should have been checked, no?
> >
> > ok
> >
> > thanks,
> > jirka
> >
> 



end of thread, other threads:[~2021-12-08 16:39 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-12-04 14:06 [PATCH bpf-next 0/3] bpf: Add helpers to access traced function arguments Jiri Olsa
2021-12-04 14:06 ` [PATCH bpf-next 1/3] bpf, x64: Replace some stack_size usage with offset variables Jiri Olsa
2021-12-06 19:19   ` John Fastabend
2021-12-06 21:26   ` Andrii Nakryiko
2021-12-06 21:41   ` Andrii Nakryiko
2021-12-07 14:25     ` Jiri Olsa
2021-12-04 14:06 ` [PATCH bpf-next 2/3] bpf: Add get_func_[arg|ret|arg_cnt] helpers Jiri Olsa
2021-12-06 19:39   ` John Fastabend
2021-12-06 20:17     ` Jiri Olsa
2021-12-06 21:54   ` Andrii Nakryiko
2021-12-07 17:23     ` Jiri Olsa
2021-12-04 14:07 ` [PATCH bpf-next 3/3] selftests/bpf: Add tests for " Jiri Olsa
2021-12-06 22:03   ` Andrii Nakryiko
2021-12-07 18:14     ` Jiri Olsa
2021-12-07 22:54       ` Andrii Nakryiko
2021-12-08 16:38         ` Jiri Olsa
