* [PATCH bpf-next v2 0/5] introduce dummy BPF STRUCT_OPS
@ 2021-10-16 12:48 Hou Tao
  2021-10-16 12:48 ` [PATCH bpf-next v2 1/5] bpf: factor out a helper to prepare trampoline for struct_ops prog Hou Tao
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Hou Tao @ 2021-10-16 12:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, houtao1

Hi,

Currently the testing of BPF STRUCT_OPS depends on a specific bpf
implementation (e.g., tcp_congestion_ops), which cannot cover all of
the basic functionality (e.g., return value handling), so introduce
a dummy BPF STRUCT_OPS for testing purposes.

Instead of loading a userspace-implemented bpf_dummy_ops map into the
kernel and invoking a specific function by writing to a sysfs file
provided by bpf_testmod.ko, this series only loads the bpf_dummy_ops
related progs into the kernel and calls them through
bpf_prog_test_run(). The latter is more flexible and has no dependency
on an extra kernel module.

For now, return value handling is exercised by the test_1(...) op and
the passing of multiple arguments by the test_2(...) op. If more
coverage is needed, test_x(...) ops can be added afterwards.
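
For illustration, the return-value and pointer-argument semantics that
test_1(...) is meant to exercise can be modelled in plain C. This mirrors
the selftest prog added later in the series (the sentinel values 0xf2f3f4f5
and 0x5a are the ones used there); it is a sketch, not kernel code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct bpf_dummy_ops_state {
	int val;
};

/* Plain-C model of the test_1() prog: return a sentinel when the state
 * pointer is NULL, otherwise return the old value and overwrite it, so
 * both the return value and the written-back state can be checked. */
static int model_test_1(struct bpf_dummy_ops_state *state)
{
	int ret;

	if (!state)
		return (int)0xf2f3f4f5;

	ret = state->val;
	state->val = 0x5a;
	return ret;
}
```

The test runner then only needs to compare the returned value and the
copied-back state against these expectations.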

Comments are always welcome.
Regards,
Hou

Change Log:
v2:
 * rebase on bpf-next
 * add test_2(...) ops to test the passing of multiple arguments
 * a new patch (patch #2) is added to factor out ctx access helpers
 * address comments from Martin & Andrii

v1: https://www.spinics.net/lists/bpf/msg46787.html

RFC: https://www.spinics.net/lists/bpf/msg46117.html

Hou Tao (5):
  bpf: factor out a helper to prepare trampoline for struct_ops prog
  bpf: factor out helpers to check ctx access for BTF function
  bpf: add dummy BPF STRUCT_OPS for test purpose
  bpf: hook .test_run for struct_ops program
  selftests/bpf: add test cases for struct_ops prog

 include/linux/bpf.h                           |  45 ++++
 kernel/bpf/bpf_struct_ops.c                   |  46 +++-
 kernel/bpf/bpf_struct_ops_types.h             |   1 +
 kernel/trace/bpf_trace.c                      |  16 +-
 net/bpf/Makefile                              |   3 +
 net/bpf/bpf_dummy_struct_ops.c                | 206 ++++++++++++++++++
 net/ipv4/bpf_tcp_ca.c                         |   9 +-
 .../selftests/bpf/prog_tests/dummy_st_ops.c   | 114 ++++++++++
 .../selftests/bpf/progs/dummy_st_ops.c        |  50 +++++
 9 files changed, 458 insertions(+), 32 deletions(-)
 create mode 100644 net/bpf/bpf_dummy_struct_ops.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
 create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops.c

-- 
2.29.2



* [PATCH bpf-next v2 1/5] bpf: factor out a helper to prepare trampoline for struct_ops prog
  2021-10-16 12:48 [PATCH bpf-next v2 0/5] introduce dummy BPF STRUCT_OPS Hou Tao
@ 2021-10-16 12:48 ` Hou Tao
  2021-10-16 12:48 ` [PATCH bpf-next v2 2/5] bpf: factor out helpers to check ctx access for BTF function Hou Tao
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Hou Tao @ 2021-10-16 12:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, houtao1

Factor out a helper, bpf_struct_ops_prepare_trampoline(), to prepare
the trampoline for a BPF_PROG_TYPE_STRUCT_OPS prog. It will be used
by the .test_run callback in a following patch.

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 include/linux/bpf.h         |  4 ++++
 kernel/bpf/bpf_struct_ops.c | 29 +++++++++++++++++++----------
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index d604c8251d88..b7c1e2bc93f7 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -998,6 +998,10 @@ bool bpf_struct_ops_get(const void *kdata);
 void bpf_struct_ops_put(const void *kdata);
 int bpf_struct_ops_map_sys_lookup_elem(struct bpf_map *map, void *key,
 				       void *value);
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
+				      struct bpf_prog *prog,
+				      const struct btf_func_model *model,
+				      void *image, void *image_end);
 static inline bool bpf_try_module_get(const void *data, struct module *owner)
 {
 	if (owner == BPF_MODULE_OWNER)
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 9abcc33f02cf..44be101f2562 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -312,6 +312,20 @@ static int check_zero_holes(const struct btf_type *t, void *data)
 	return 0;
 }
 
+int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_progs *tprogs,
+				      struct bpf_prog *prog,
+				      const struct btf_func_model *model,
+				      void *image, void *image_end)
+{
+	u32 flags;
+
+	tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
+	tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
+	flags = model->ret_size > 0 ? BPF_TRAMP_F_RET_FENTRY_RET : 0;
+	return arch_prepare_bpf_trampoline(NULL, image, image_end,
+					   model, flags, tprogs, NULL);
+}
+
 static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 					  void *value, u64 flags)
 {
@@ -323,7 +337,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	struct bpf_tramp_progs *tprogs = NULL;
 	void *udata, *kdata;
 	int prog_fd, err = 0;
-	void *image;
+	void *image, *image_end;
 	u32 i;
 
 	if (flags)
@@ -363,12 +377,12 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 	udata = &uvalue->data;
 	kdata = &kvalue->data;
 	image = st_map->image;
+	image_end = st_map->image + PAGE_SIZE;
 
 	for_each_member(i, t, member) {
 		const struct btf_type *mtype, *ptype;
 		struct bpf_prog *prog;
 		u32 moff;
-		u32 flags;
 
 		moff = btf_member_bit_offset(t, member) / 8;
 		ptype = btf_type_resolve_ptr(btf_vmlinux, member->type, NULL);
@@ -430,14 +444,9 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
 			goto reset_unlock;
 		}
 
-		tprogs[BPF_TRAMP_FENTRY].progs[0] = prog;
-		tprogs[BPF_TRAMP_FENTRY].nr_progs = 1;
-		flags = st_ops->func_models[i].ret_size > 0 ?
-			BPF_TRAMP_F_RET_FENTRY_RET : 0;
-		err = arch_prepare_bpf_trampoline(NULL, image,
-						  st_map->image + PAGE_SIZE,
-						  &st_ops->func_models[i],
-						  flags, tprogs, NULL);
+		err = bpf_struct_ops_prepare_trampoline(tprogs, prog,
+							&st_ops->func_models[i],
+							image, image_end);
 		if (err < 0)
 			goto reset_unlock;
 
-- 
2.29.2



* [PATCH bpf-next v2 2/5] bpf: factor out helpers to check ctx access for BTF function
  2021-10-16 12:48 [PATCH bpf-next v2 0/5] introduce dummy BPF STRUCT_OPS Hou Tao
  2021-10-16 12:48 ` [PATCH bpf-next v2 1/5] bpf: factor out a helper to prepare trampoline for struct_ops prog Hou Tao
@ 2021-10-16 12:48 ` Hou Tao
  2021-10-20  5:00   ` Martin KaFai Lau
  2021-10-16 12:48 ` [PATCH bpf-next v2 3/5] bpf: add dummy BPF STRUCT_OPS for test purpose Hou Tao
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Hou Tao @ 2021-10-16 12:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, houtao1

Factor out two helpers to check read access to the ctx of a BTF
function. bpf_check_btf_func_arg_access() checks that a read access
to an argument is valid, and bpf_check_btf_func_ctx_access()
additionally checks that the BTF type of the accessed argument is
valid. bpf_check_btf_func_ctx_access() will be used by a following
patch.
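
As a rough illustration, the rules that bpf_check_btf_func_arg_access()
enforces (offset within the MAX_BPF_FUNC_ARGS u64 slots, read-only,
naturally aligned) can be modelled as standalone userspace C; the value
12 for MAX_BPF_FUNC_ARGS matches include/linux/bpf.h at the time of
writing, and the enum here is a stub:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MAX_BPF_FUNC_ARGS 12	/* from include/linux/bpf.h */

enum access_type { ACC_READ, ACC_WRITE };

/* Userspace model of bpf_check_btf_func_arg_access(): the ctx is an
 * array of up to MAX_BPF_FUNC_ARGS u64 slots; only aligned reads that
 * stay inside the array are allowed. */
static bool check_func_arg_access(int off, int size, enum access_type type)
{
	if (off < 0 || off >= (int)(sizeof(uint64_t) * MAX_BPF_FUNC_ARGS))
		return false;
	if (type != ACC_READ)
		return false;
	if (off % size != 0)
		return false;
	return true;
}
```

bpf_check_btf_func_ctx_access() applies the same checks and then defers
to btf_ctx_access() for the BTF-type validation.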

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 include/linux/bpf.h      | 27 +++++++++++++++++++++++++++
 kernel/trace/bpf_trace.c | 16 ++--------------
 net/ipv4/bpf_tcp_ca.c    |  9 +--------
 3 files changed, 30 insertions(+), 22 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b7c1e2bc93f7..b503306da2ab 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1648,6 +1648,33 @@ bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner);
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info);
+
+/*
+ * The maximum number of BTF function arguments is MAX_BPF_FUNC_ARGS.
+ * And only aligned read is allowed.
+ */
+static inline bool bpf_check_btf_func_arg_access(int off, int size,
+						 enum bpf_access_type type)
+{
+	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
+		return false;
+	if (type != BPF_READ)
+		return false;
+	if (off % size != 0)
+		return false;
+	return true;
+}
+
+static inline bool bpf_check_btf_func_ctx_access(int off, int size,
+						 enum bpf_access_type type,
+						 const struct bpf_prog *prog,
+						 struct bpf_insn_access_aux *info)
+{
+	if (!bpf_check_btf_func_arg_access(off, size, type))
+		return false;
+	return btf_ctx_access(off, size, type, prog, info);
+}
+
 int btf_struct_access(struct bpf_verifier_log *log, const struct btf *btf,
 		      const struct btf_type *t, int off, int size,
 		      enum bpf_access_type atype,
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 6b3153841a33..f2858ef32e16 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1644,13 +1644,7 @@ static bool raw_tp_prog_is_valid_access(int off, int size,
 					const struct bpf_prog *prog,
 					struct bpf_insn_access_aux *info)
 {
-	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
-		return false;
-	if (type != BPF_READ)
-		return false;
-	if (off % size != 0)
-		return false;
-	return true;
+	return bpf_check_btf_func_arg_access(off, size, type);
 }
 
 static bool tracing_prog_is_valid_access(int off, int size,
@@ -1658,13 +1652,7 @@ static bool tracing_prog_is_valid_access(int off, int size,
 					 const struct bpf_prog *prog,
 					 struct bpf_insn_access_aux *info)
 {
-	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
-		return false;
-	if (type != BPF_READ)
-		return false;
-	if (off % size != 0)
-		return false;
-	return btf_ctx_access(off, size, type, prog, info);
+	return bpf_check_btf_func_ctx_access(off, size, type, prog, info);
 }
 
 int __weak bpf_prog_test_run_tracing(struct bpf_prog *prog,
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 57709ac09fb2..1eccc4974382 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -81,14 +81,7 @@ static bool bpf_tcp_ca_is_valid_access(int off, int size,
 				       const struct bpf_prog *prog,
 				       struct bpf_insn_access_aux *info)
 {
-	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
-		return false;
-	if (type != BPF_READ)
-		return false;
-	if (off % size != 0)
-		return false;
-
-	if (!btf_ctx_access(off, size, type, prog, info))
+	if (!bpf_check_btf_func_ctx_access(off, size, type, prog, info))
 		return false;
 
 	if (info->reg_type == PTR_TO_BTF_ID && info->btf_id == sock_id)
-- 
2.29.2



* [PATCH bpf-next v2 3/5] bpf: add dummy BPF STRUCT_OPS for test purpose
  2021-10-16 12:48 [PATCH bpf-next v2 0/5] introduce dummy BPF STRUCT_OPS Hou Tao
  2021-10-16 12:48 ` [PATCH bpf-next v2 1/5] bpf: factor out a helper to prepare trampoline for struct_ops prog Hou Tao
  2021-10-16 12:48 ` [PATCH bpf-next v2 2/5] bpf: factor out helpers to check ctx access for BTF function Hou Tao
@ 2021-10-16 12:48 ` Hou Tao
  2021-10-20  1:22   ` Martin KaFai Lau
  2021-10-16 12:48 ` [PATCH bpf-next v2 4/5] bpf: hook .test_run for struct_ops program Hou Tao
  2021-10-16 12:48 ` [PATCH bpf-next v2 5/5] selftests/bpf: add test cases for struct_ops prog Hou Tao
  4 siblings, 1 reply; 10+ messages in thread
From: Hou Tao @ 2021-10-16 12:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, houtao1

Currently the testing of BPF STRUCT_OPS depends on the specific bpf
implementation of tcp_congestion_ops, which cannot cover all of the
basic functionality (e.g., return value handling), so introduce a
dummy BPF STRUCT_OPS for testing purposes.

Loading a bpf_dummy_ops implementation from userspace is prohibited;
its only purpose is to run a BPF_PROG_TYPE_STRUCT_OPS program through
bpf(BPF_PROG_TEST_RUN). Programs for test_1() & test_2() are supported
for now. The following three cases are exercised in
bpf_dummy_struct_ops_test_run():

(1) test and check the value returned through the state arg in test_1(state)
The content of the state is copied in from a userspace pointer and
copied back out after test_1(state) returns. The user pointer is saved
in a u64 array and the array address is passed through ctx_in.

(2) test and check the return value of test_1(NULL)
This simply simulates the case in which an invalid input argument is
passed in.

(3) test the passing of multiple arguments in test_2(state, ...)
Five arguments are passed through ctx_in in the form of a u64 array.
The first element of the array is the userspace pointer of the state
and the other 4 arguments follow.
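
The userspace side of case (3) can be sketched as follows. This is an
illustrative model of how ctx_in is packed, not kernel code, and the
helper name pack_test_2_ctx() is made up for the example:

```c
#include <assert.h>
#include <stdint.h>

struct bpf_dummy_ops_state {
	int val;
};

/* Hypothetical helper: pack the five test_2() arguments into the u64
 * array passed through kattr->test.ctx_in. ctx[0] carries the user
 * pointer to the state, ctx[1..4] carry the scalar arguments. Returns
 * the value that must be supplied as ctx_size_in. */
static uint32_t pack_test_2_ctx(uint64_t ctx[5],
				struct bpf_dummy_ops_state *state,
				int a1, unsigned short a2,
				char a3, unsigned long a4)
{
	ctx[0] = (uint64_t)(uintptr_t)state;
	ctx[1] = (uint64_t)a1;
	ctx[2] = a2;
	ctx[3] = (uint64_t)a3;
	ctx[4] = a4;
	return 5 * sizeof(uint64_t);
}
```

The kernel side then copies the array in, dereferences ctx[0] to fetch
the state, and passes the remaining slots to test_2() as-is.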

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 include/linux/bpf.h               |  14 ++
 kernel/bpf/bpf_struct_ops_types.h |   1 +
 net/bpf/Makefile                  |   3 +
 net/bpf/bpf_dummy_struct_ops.c    | 206 ++++++++++++++++++++++++++++++
 4 files changed, 224 insertions(+)
 create mode 100644 net/bpf/bpf_dummy_struct_ops.c

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b503306da2ab..edcd84077bf1 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -1016,6 +1016,20 @@ static inline void bpf_module_put(const void *data, struct module *owner)
 	else
 		module_put(owner);
 }
+
+/* Define it here to avoid the use of forward declaration */
+struct bpf_dummy_ops_state {
+	int val;
+};
+
+struct bpf_dummy_ops {
+	int (*test_1)(struct bpf_dummy_ops_state *cb);
+	int (*test_2)(struct bpf_dummy_ops_state *cb, int a1, unsigned short a2,
+		      char a3, unsigned long a4);
+};
+int bpf_dummy_struct_ops_test_run(struct bpf_prog *prog,
+				  const union bpf_attr *kattr,
+				  union bpf_attr __user *uattr);
 #else
 static inline const struct bpf_struct_ops *bpf_struct_ops_find(u32 type_id)
 {
diff --git a/kernel/bpf/bpf_struct_ops_types.h b/kernel/bpf/bpf_struct_ops_types.h
index 066d83ea1c99..74733ee012e4 100644
--- a/kernel/bpf/bpf_struct_ops_types.h
+++ b/kernel/bpf/bpf_struct_ops_types.h
@@ -2,6 +2,7 @@
 /* internal file - do not include directly */
 
 #ifdef CONFIG_BPF_JIT
+BPF_STRUCT_OPS_TYPE(bpf_dummy_ops)
 #ifdef CONFIG_INET
 #include <net/tcp.h>
 BPF_STRUCT_OPS_TYPE(tcp_congestion_ops)
diff --git a/net/bpf/Makefile b/net/bpf/Makefile
index 1c0a98d8c28f..1ebe270bde23 100644
--- a/net/bpf/Makefile
+++ b/net/bpf/Makefile
@@ -1,2 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_BPF_SYSCALL)	:= test_run.o
+ifeq ($(CONFIG_BPF_JIT),y)
+obj-$(CONFIG_BPF_SYSCALL)	+= bpf_dummy_struct_ops.o
+endif
diff --git a/net/bpf/bpf_dummy_struct_ops.c b/net/bpf/bpf_dummy_struct_ops.c
new file mode 100644
index 000000000000..478d7b656ab3
--- /dev/null
+++ b/net/bpf/bpf_dummy_struct_ops.c
@@ -0,0 +1,206 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021. Huawei Technologies Co., Ltd
+ */
+#include <linux/kernel.h>
+#include <linux/bpf_verifier.h>
+#include <linux/bpf.h>
+#include <linux/btf.h>
+
+extern struct bpf_struct_ops bpf_bpf_dummy_ops;
+
+/* A common type for test_N with return value in bpf_dummy_ops */
+typedef int (*dummy_ops_test_ret_fn)(struct bpf_dummy_ops_state *state, ...);
+
+struct bpf_dummy_ops_test_args {
+	u64 args[MAX_BPF_FUNC_ARGS];
+	struct bpf_dummy_ops_state state;
+};
+
+static const struct btf_type *dummy_ops_state;
+
+static struct bpf_dummy_ops_test_args *
+dummy_ops_init_args(const union bpf_attr *kattr, unsigned int nr)
+{
+	__u32 size_in;
+	struct bpf_dummy_ops_test_args *args;
+	void __user *ctx_in;
+	void __user *u_state;
+
+	if (!nr || nr > MAX_BPF_FUNC_ARGS)
+		return ERR_PTR(-EINVAL);
+
+	size_in = kattr->test.ctx_size_in;
+	if (size_in != sizeof(u64) * nr)
+		return ERR_PTR(-EINVAL);
+
+	args = kzalloc(sizeof(*args), GFP_KERNEL);
+	if (!args)
+		return ERR_PTR(-ENOMEM);
+
+	ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
+	if (copy_from_user(args->args, ctx_in, size_in))
+		return ERR_PTR(-EFAULT);
+
+	u_state = u64_to_user_ptr(args->args[0]);
+	if (!u_state)
+		return args;
+
+	if (copy_from_user(&args->state, u_state, sizeof(args->state))) {
+		kfree(args);
+		return ERR_PTR(-EFAULT);
+	}
+
+	return args;
+}
+
+static int dummy_ops_copy_args(struct bpf_dummy_ops_test_args *args)
+{
+	void __user *u_state;
+
+	u_state = u64_to_user_ptr(args->args[0]);
+	if (!u_state)
+		return 0;
+
+	if (copy_to_user(u_state, &args->state, sizeof(args->state)))
+		return -EFAULT;
+
+	return 0;
+}
+
+static int dummy_ops_call_op(void *image, struct bpf_dummy_ops_test_args *args)
+{
+	dummy_ops_test_ret_fn test = (void *)image;
+	struct bpf_dummy_ops_state *state = NULL;
+
+	/* state needs to be NULL if args[0] is 0 */
+	if (args->args[0])
+		state = &args->state;
+	return test(state, args->args[1], args->args[2],
+		    args->args[3], args->args[4]);
+}
+
+int bpf_dummy_struct_ops_test_run(struct bpf_prog *prog,
+				  const union bpf_attr *kattr,
+				  union bpf_attr __user *uattr)
+{
+	const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
+	const struct btf_type *func_proto = prog->aux->attach_func_proto;
+	struct bpf_dummy_ops_test_args *args = NULL;
+	struct bpf_tramp_progs *tprogs = NULL;
+	void *image = NULL;
+	unsigned int op_idx;
+	int err;
+	int prog_ret;
+
+	args = dummy_ops_init_args(kattr, btf_type_vlen(func_proto));
+	if (IS_ERR(args))
+		return PTR_ERR(args);
+
+	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
+	if (!tprogs) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	image = bpf_jit_alloc_exec(PAGE_SIZE);
+	if (!image) {
+		err = -ENOMEM;
+		goto out;
+	}
+	set_vm_flush_reset_perms(image);
+
+	op_idx = prog->expected_attach_type;
+	err = bpf_struct_ops_prepare_trampoline(tprogs, prog,
+						&st_ops->func_models[op_idx],
+						image, image + PAGE_SIZE);
+	if (err < 0)
+		goto out;
+
+	set_memory_ro((long)image, 1);
+	set_memory_x((long)image, 1);
+	prog_ret = dummy_ops_call_op(image, args);
+
+	err = dummy_ops_copy_args(args);
+	if (err)
+		goto out;
+	if (put_user(prog_ret, &uattr->test.retval))
+		err = -EFAULT;
+out:
+	kfree(args);
+	bpf_jit_free_exec(image);
+	kfree(tprogs);
+	return err;
+}
+
+static int bpf_dummy_init(struct btf *btf)
+{
+	s32 type_id;
+
+	type_id = btf_find_by_name_kind(btf, "bpf_dummy_ops_state",
+					BTF_KIND_STRUCT);
+	if (type_id < 0)
+		return -EINVAL;
+
+	dummy_ops_state = btf_type_by_id(btf, type_id);
+
+	return 0;
+}
+
+static bool bpf_dummy_ops_is_valid_access(int off, int size,
+					  enum bpf_access_type type,
+					  const struct bpf_prog *prog,
+					  struct bpf_insn_access_aux *info)
+{
+	return bpf_check_btf_func_ctx_access(off, size, type, prog, info);
+}
+
+static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
+					   const struct btf *btf,
+					   const struct btf_type *t, int off,
+					   int size, enum bpf_access_type atype,
+					   u32 *next_btf_id)
+{
+	int err;
+
+	if (atype != BPF_READ && t != dummy_ops_state) {
+		bpf_log(log, "only write to bpf_dummy_ops_state is supported\n");
+		return -EACCES;
+	}
+
+	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id);
+	if (err < 0)
+		return err;
+
+	return atype == BPF_READ ? err : NOT_INIT;
+}
+
+static const struct bpf_verifier_ops bpf_dummy_verifier_ops = {
+	.is_valid_access = bpf_dummy_ops_is_valid_access,
+	.btf_struct_access = bpf_dummy_ops_btf_struct_access,
+};
+
+static int bpf_dummy_init_member(const struct btf_type *t,
+				 const struct btf_member *member,
+				 void *kdata, const void *udata)
+{
+	return -EOPNOTSUPP;
+}
+
+static int bpf_dummy_reg(void *kdata)
+{
+	return -EOPNOTSUPP;
+}
+
+static void bpf_dummy_unreg(void *kdata)
+{
+}
+
+struct bpf_struct_ops bpf_bpf_dummy_ops = {
+	.verifier_ops = &bpf_dummy_verifier_ops,
+	.init = bpf_dummy_init,
+	.init_member = bpf_dummy_init_member,
+	.reg = bpf_dummy_reg,
+	.unreg = bpf_dummy_unreg,
+	.name = "bpf_dummy_ops",
+};
-- 
2.29.2



* [PATCH bpf-next v2 4/5] bpf: hook .test_run for struct_ops program
  2021-10-16 12:48 [PATCH bpf-next v2 0/5] introduce dummy BPF STRUCT_OPS Hou Tao
                   ` (2 preceding siblings ...)
  2021-10-16 12:48 ` [PATCH bpf-next v2 3/5] bpf: add dummy BPF STRUCT_OPS for test purpose Hou Tao
@ 2021-10-16 12:48 ` Hou Tao
  2021-10-20  1:25   ` Martin KaFai Lau
  2021-10-20  5:24   ` Martin KaFai Lau
  2021-10-16 12:48 ` [PATCH bpf-next v2 5/5] selftests/bpf: add test cases for struct_ops prog Hou Tao
  4 siblings, 2 replies; 10+ messages in thread
From: Hou Tao @ 2021-10-16 12:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, houtao1

bpf_struct_ops_test_run() will be used to run a struct_ops program
from bpf_dummy_ops; for now its main purpose is to test the handling
of the return value and of multiple arguments.
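
The dispatch in the hunk below amounts to the following sketch, with the
struct_ops types stubbed out for illustration: every struct_ops type
other than the dummy one rejects test_run with -EOPNOTSUPP.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stubbed types, for illustration only. */
struct bpf_struct_ops {
	const char *name;
};

static const struct bpf_struct_ops bpf_bpf_dummy_ops = {
	.name = "bpf_dummy_ops",
};
static const struct bpf_struct_ops bpf_tcp_congestion_ops = {
	.name = "tcp_congestion_ops",
};

/* Model of bpf_struct_ops_test_run(): only the dummy ops may be run
 * through BPF_PROG_TEST_RUN; any other struct_ops type is rejected. */
static int model_test_run(const struct bpf_struct_ops *st_ops)
{
	if (st_ops != &bpf_bpf_dummy_ops)
		return -EOPNOTSUPP;
	return 0;	/* would delegate to bpf_dummy_struct_ops_test_run() */
}
```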

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 kernel/bpf/bpf_struct_ops.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index 44be101f2562..ceedc9f0f786 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -11,6 +11,9 @@
 #include <linux/refcount.h>
 #include <linux/mutex.h>
 
+static int bpf_struct_ops_test_run(struct bpf_prog *prog,
+				   const union bpf_attr *kattr,
+				   union bpf_attr __user *uattr);
 enum bpf_struct_ops_state {
 	BPF_STRUCT_OPS_STATE_INIT,
 	BPF_STRUCT_OPS_STATE_INUSE,
@@ -93,6 +96,7 @@ const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = {
 };
 
 const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
+	.test_run = bpf_struct_ops_test_run,
 };
 
 static const struct btf_type *module_type;
@@ -667,3 +671,16 @@ void bpf_struct_ops_put(const void *kdata)
 		call_rcu(&st_map->rcu, bpf_struct_ops_put_rcu);
 	}
 }
+
+static int bpf_struct_ops_test_run(struct bpf_prog *prog,
+				   const union bpf_attr *kattr,
+				   union bpf_attr __user *uattr)
+{
+	const struct bpf_struct_ops *st_ops;
+
+	st_ops = bpf_struct_ops_find(prog->aux->attach_btf_id);
+	if (st_ops != &bpf_bpf_dummy_ops)
+		return -EOPNOTSUPP;
+
+	return bpf_dummy_struct_ops_test_run(prog, kattr, uattr);
+}
-- 
2.29.2



* [PATCH bpf-next v2 5/5] selftests/bpf: add test cases for struct_ops prog
  2021-10-16 12:48 [PATCH bpf-next v2 0/5] introduce dummy BPF STRUCT_OPS Hou Tao
                   ` (3 preceding siblings ...)
  2021-10-16 12:48 ` [PATCH bpf-next v2 4/5] bpf: hook .test_run for struct_ops program Hou Tao
@ 2021-10-16 12:48 ` Hou Tao
  4 siblings, 0 replies; 10+ messages in thread
From: Hou Tao @ 2021-10-16 12:48 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Martin KaFai Lau, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf, houtao1

Run a BPF_PROG_TYPE_STRUCT_OPS prog for dummy_st_ops::test_N()
through bpf_prog_test_run(). Four test cases are added:
(1) attaching dummy_st_ops should fail
(2) the return value of bpf_dummy_ops::test_1() is as expected
(3) the pointer argument of bpf_dummy_ops::test_1() works as expected
(4) multiple arguments passed to bpf_dummy_ops::test_2() are correct

Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 .../selftests/bpf/prog_tests/dummy_st_ops.c   | 114 ++++++++++++++++++
 .../selftests/bpf/progs/dummy_st_ops.c        |  50 ++++++++
 2 files changed, 164 insertions(+)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
 create mode 100644 tools/testing/selftests/bpf/progs/dummy_st_ops.c

diff --git a/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
new file mode 100644
index 000000000000..e59a674b5052
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/dummy_st_ops.c
@@ -0,0 +1,114 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021. Huawei Technologies Co., Ltd */
+#include <test_progs.h>
+#include "dummy_st_ops.skel.h"
+
+/* Need to keep consistent with definition in include/linux/bpf.h */
+struct bpf_dummy_ops_state {
+	int val;
+};
+
+static void test_dummy_st_ops_attach(void)
+{
+	struct dummy_st_ops *skel;
+	struct bpf_link *link;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	link = bpf_map__attach_struct_ops(skel->maps.dummy_1);
+	ASSERT_EQ(libbpf_get_error(link), -EOPNOTSUPP, "dummy_st_ops_attach");
+
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_init_ret_value(void)
+{
+	__u64 args[1] = {0};
+	struct bpf_prog_test_run_attr attr = {
+		.ctx_size_in = sizeof(args),
+		.ctx_in = args,
+	};
+	struct dummy_st_ops *skel;
+	int fd, err;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	fd = bpf_program__fd(skel->progs.test_1);
+	attr.prog_fd = fd;
+	err = bpf_prog_test_run_xattr(&attr);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(attr.retval, 0xf2f3f4f5, "test_ret");
+
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_init_ptr_arg(void)
+{
+	int exp_retval = 0xbeef;
+	struct bpf_dummy_ops_state in_state = {
+		.val = exp_retval,
+	};
+	__u64 args[1] = {(unsigned long)&in_state};
+	struct bpf_prog_test_run_attr attr = {
+		.ctx_size_in = sizeof(args),
+		.ctx_in = args,
+	};
+	struct dummy_st_ops *skel;
+	int fd, err;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	fd = bpf_program__fd(skel->progs.test_1);
+	attr.prog_fd = fd;
+	err = bpf_prog_test_run_xattr(&attr);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(in_state.val, 0x5a, "test_ptr_ret");
+	ASSERT_EQ(attr.retval, exp_retval, "test_ret");
+
+	dummy_st_ops__destroy(skel);
+}
+
+static void test_dummy_multiple_args(void)
+{
+	__u64 args[5] = {0, -100, 0x8a5f, 'c', 0x1234567887654321ULL};
+	struct bpf_prog_test_run_attr attr = {
+		.ctx_size_in = sizeof(args),
+		.ctx_in = args,
+	};
+	struct dummy_st_ops *skel;
+	int fd, err;
+
+	skel = dummy_st_ops__open_and_load();
+	if (!ASSERT_OK_PTR(skel, "dummy_st_ops_load"))
+		return;
+
+	fd = bpf_program__fd(skel->progs.test_2);
+	attr.prog_fd = fd;
+	err = bpf_prog_test_run_xattr(&attr);
+	ASSERT_OK(err, "test_run");
+	ASSERT_EQ(skel->bss->test_2_args[0], args[0], "arg 0");
+	ASSERT_EQ(skel->bss->test_2_args[1], args[1], "arg 1");
+	ASSERT_EQ(skel->bss->test_2_args[2], args[2], "arg 2");
+	ASSERT_EQ(skel->bss->test_2_args[3], args[3], "arg 3");
+	ASSERT_EQ(skel->bss->test_2_args[4], args[4], "arg 4");
+
+	dummy_st_ops__destroy(skel);
+}
+
+void test_dummy_st_ops(void)
+{
+	if (test__start_subtest("dummy_st_ops_attach"))
+		test_dummy_st_ops_attach();
+	if (test__start_subtest("dummy_init_ret_value"))
+		test_dummy_init_ret_value();
+	if (test__start_subtest("dummy_init_ptr_arg"))
+		test_dummy_init_ptr_arg();
+	if (test__start_subtest("dummy_multiple_args"))
+		test_dummy_multiple_args();
+}
diff --git a/tools/testing/selftests/bpf/progs/dummy_st_ops.c b/tools/testing/selftests/bpf/progs/dummy_st_ops.c
new file mode 100644
index 000000000000..ead87edb75e2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/dummy_st_ops.c
@@ -0,0 +1,50 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2021. Huawei Technologies Co., Ltd */
+#include <linux/bpf.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+struct bpf_dummy_ops_state {
+	int val;
+} __attribute__((preserve_access_index));
+
+struct bpf_dummy_ops {
+	int (*test_1)(struct bpf_dummy_ops_state *state);
+	int (*test_2)(struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
+		      char a3, unsigned long a4);
+};
+
+char _license[] SEC("license") = "GPL";
+
+SEC("struct_ops/test_1")
+int BPF_PROG(test_1, struct bpf_dummy_ops_state *state)
+{
+	int ret;
+
+	if (!state)
+		return 0xf2f3f4f5;
+
+	ret = state->val;
+	state->val = 0x5a;
+	return ret;
+}
+
+__u64 test_2_args[5];
+
+SEC("struct_ops/test_2")
+int BPF_PROG(test_2, struct bpf_dummy_ops_state *state, int a1, unsigned short a2,
+	     char a3, unsigned long a4)
+{
+	test_2_args[0] = (unsigned long)state;
+	test_2_args[1] = a1;
+	test_2_args[2] = a2;
+	test_2_args[3] = a3;
+	test_2_args[4] = a4;
+	return 0;
+}
+
+SEC(".struct_ops")
+struct bpf_dummy_ops dummy_1 = {
+	.test_1 = (void *)test_1,
+	.test_2 = (void *)test_2,
+};
-- 
2.29.2



* Re: [PATCH bpf-next v2 3/5] bpf: add dummy BPF STRUCT_OPS for test purpose
  2021-10-16 12:48 ` [PATCH bpf-next v2 3/5] bpf: add dummy BPF STRUCT_OPS for test purpose Hou Tao
@ 2021-10-20  1:22   ` Martin KaFai Lau
  0 siblings, 0 replies; 10+ messages in thread
From: Martin KaFai Lau @ 2021-10-20  1:22 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Sat, Oct 16, 2021 at 08:48:04PM +0800, Hou Tao wrote:
> +static struct bpf_dummy_ops_test_args *
> +dummy_ops_init_args(const union bpf_attr *kattr, unsigned int nr)
> +{
> +	__u32 size_in;
> +	struct bpf_dummy_ops_test_args *args;
> +	void __user *ctx_in;
> +	void __user *u_state;
> +
> +	if (!nr || nr > MAX_BPF_FUNC_ARGS)
> +		return ERR_PTR(-EINVAL);
> +
> +	size_in = kattr->test.ctx_size_in;
> +	if (size_in != sizeof(u64) * nr)
> +		return ERR_PTR(-EINVAL);
> +
> +	args = kzalloc(sizeof(*args), GFP_KERNEL);
> +	if (!args)
> +		return ERR_PTR(-ENOMEM);
> +
> +	ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
> +	if (copy_from_user(args->args, ctx_in, size_in))
args is leaked.

> +		return ERR_PTR(-EFAULT);
> +
> +	u_state = u64_to_user_ptr(args->args[0]);
> +	if (!u_state)
> +		return args;
> +
> +	if (copy_from_user(&args->state, u_state, sizeof(args->state))) {
> +		kfree(args);
> +		return ERR_PTR(-EFAULT);
> +	}
> +
> +	return args;
> +}

[ ... ]

> +int bpf_dummy_struct_ops_test_run(struct bpf_prog *prog,
> +				  const union bpf_attr *kattr,
> +				  union bpf_attr __user *uattr)
> +{
> +	const struct bpf_struct_ops *st_ops = &bpf_bpf_dummy_ops;
> +	const struct btf_type *func_proto = prog->aux->attach_func_proto;
> +	struct bpf_dummy_ops_test_args *args = NULL;
> +	struct bpf_tramp_progs *tprogs = NULL;
args = NULL and tprogs = NULL are not needed.

> +	void *image = NULL;
> +	unsigned int op_idx;
> +	int err;
> +	int prog_ret;
> +
> +	args = dummy_ops_init_args(kattr, btf_type_vlen(func_proto));
> +	if (IS_ERR(args))
> +		return PTR_ERR(args);
> +
> +	tprogs = kcalloc(BPF_TRAMP_MAX, sizeof(*tprogs), GFP_KERNEL);
> +	if (!tprogs) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +
> +	image = bpf_jit_alloc_exec(PAGE_SIZE);
> +	if (!image) {
> +		err = -ENOMEM;
> +		goto out;
> +	}
> +	set_vm_flush_reset_perms(image);
> +
> +	op_idx = prog->expected_attach_type;
> +	err = bpf_struct_ops_prepare_trampoline(tprogs, prog,
> +						&st_ops->func_models[op_idx],
> +						image, image + PAGE_SIZE);
> +	if (err < 0)
> +		goto out;
> +
> +	set_memory_ro((long)image, 1);
> +	set_memory_x((long)image, 1);
> +	prog_ret = dummy_ops_call_op(image, args);
> +
> +	err = dummy_ops_copy_args(args);
> +	if (err)
> +		goto out;
> +	if (put_user(prog_ret, &uattr->test.retval))
> +		err = -EFAULT;
> +out:
> +	kfree(args);
> +	bpf_jit_free_exec(image);
> +	kfree(tprogs);
> +	return err;
> +}
> +
> +static int bpf_dummy_init(struct btf *btf)
> +{
> +	s32 type_id;
> +
> +	type_id = btf_find_by_name_kind(btf, "bpf_dummy_ops_state",
> +					BTF_KIND_STRUCT);
> +	if (type_id < 0)
> +		return -EINVAL;
> +
> +	dummy_ops_state = btf_type_by_id(btf, type_id);
Probably just do btf_find_by_name_kind("bpf_dummy_ops_state")
in bpf_dummy_ops_btf_struct_access() during each test.  There is
no need to optimize by caching it when it is only for testing purposes.
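Untested, but something like this directly in
bpf_dummy_ops_btf_struct_access() should do, reusing the lookup that
bpf_dummy_init() does today (bpf_dummy_init() and the cached
dummy_ops_state can then be dropped):

	s32 type_id;

	type_id = btf_find_by_name_kind(btf, "bpf_dummy_ops_state",
					BTF_KIND_STRUCT);
	if (type_id < 0)
		return -EINVAL;
	if (t != btf_type_by_id(btf, type_id)) {
		/* reject access to anything other than the state struct */
		...
	}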

> +
> +	return 0;
> +}
> +
> +static bool bpf_dummy_ops_is_valid_access(int off, int size,
> +					  enum bpf_access_type type,
> +					  const struct bpf_prog *prog,
> +					  struct bpf_insn_access_aux *info)
> +{
> +	return bpf_check_btf_func_ctx_access(off, size, type, prog, info);
> +}
> +
> +static int bpf_dummy_ops_btf_struct_access(struct bpf_verifier_log *log,
> +					   const struct btf *btf,
> +					   const struct btf_type *t, int off,
> +					   int size, enum bpf_access_type atype,
> +					   u32 *next_btf_id)
> +{
> +	int err;
> +
> +	if (atype != BPF_READ && t != dummy_ops_state) {
I think this can be further simplified.  The only struct that
the bpf_prog can access is bpf_dummy_ops_state, so the
"atype != BPF_READ" test can be removed.  The log message
then needs to be adjusted.
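i.e. (untested) the guard then becomes just:

	if (t != dummy_ops_state) {
		bpf_log(log, "only access to bpf_dummy_ops_state is supported\n");
		return -EACCES;
	}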

Others lgtm.

> +		bpf_log(log, "only write to bpf_dummy_ops_state is supported\n");
> +		return -EACCES;
> +	}
> +
> +	err = btf_struct_access(log, btf, t, off, size, atype, next_btf_id);
> +	if (err < 0)
> +		return err;
> +
> +	return atype == BPF_READ ? err : NOT_INIT;
> +}

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH bpf-next v2 4/5] bpf: hook .test_run for struct_ops program
  2021-10-16 12:48 ` [PATCH bpf-next v2 4/5] bpf: hook .test_run for struct_ops program Hou Tao
@ 2021-10-20  1:25   ` Martin KaFai Lau
  2021-10-20  5:24   ` Martin KaFai Lau
  1 sibling, 0 replies; 10+ messages in thread
From: Martin KaFai Lau @ 2021-10-20  1:25 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Sat, Oct 16, 2021 at 08:48:05PM +0800, Hou Tao wrote:
> bpf_struct_ops_test_run() will be used to run a struct_ops program
> from bpf_dummy_ops. For now its main purpose is to test the handling
> of return values and multiple arguments.
lgtm.  Please merge it with patch 3.


* Re: [PATCH bpf-next v2 2/5] bpf: factor out helpers to check ctx access for BTF function
  2021-10-16 12:48 ` [PATCH bpf-next v2 2/5] bpf: factor out helpers to check ctx access for BTF function Hou Tao
@ 2021-10-20  5:00   ` Martin KaFai Lau
  0 siblings, 0 replies; 10+ messages in thread
From: Martin KaFai Lau @ 2021-10-20  5:00 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Sat, Oct 16, 2021 at 08:48:03PM +0800, Hou Tao wrote:
> Factor out two helpers to check the read access of ctx for BTF
> functions. bpf_check_btf_func_arg_access() checks that the read
> access to an argument is valid, and bpf_check_btf_func_ctx_access()
> additionally checks whether the BTF type of the argument is valid.
> bpf_check_btf_func_ctx_access() will be used by the following patch.
> 
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
>  include/linux/bpf.h      | 27 +++++++++++++++++++++++++++
>  kernel/trace/bpf_trace.c | 16 ++--------------
>  net/ipv4/bpf_tcp_ca.c    |  9 +--------
>  3 files changed, 30 insertions(+), 22 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index b7c1e2bc93f7..b503306da2ab 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -1648,6 +1648,33 @@ bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner);
>  bool btf_ctx_access(int off, int size, enum bpf_access_type type,
>  		    const struct bpf_prog *prog,
>  		    struct bpf_insn_access_aux *info);
> +
> +/*
> + * The maximum number of BTF function arguments is MAX_BPF_FUNC_ARGS.
> + * And only aligned read is allowed.
> + */
> +static inline bool bpf_check_btf_func_arg_access(int off, int size,
> +						 enum bpf_access_type type)
The refactoring makes sense, but the naming is hard to follow here.

The "_btf_func_arg" part is easily confused with the other
btf_check_func_arg* functions in btf.c: is it checking an arg of a bpf
subprog, or checking a bpf_prog's ctx?  The name suggests the former,
but it is actually the latter (i.e. checking ctx).  The ctx here is an
array of __u64 for tracing.  How about bpf_tracing_ctx_access()?

> +{
> +	if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
> +		return false;
> +	if (type != BPF_READ)
> +		return false;
> +	if (off % size != 0)
> +		return false;
> +	return true;
> +}
> +
> +static inline bool bpf_check_btf_func_ctx_access(int off, int size,
> +						 enum bpf_access_type type,
> +						 const struct bpf_prog *prog,
> +						 struct bpf_insn_access_aux *info)
and maybe bpf_tracing_btf_ctx_access() here?

> +{
> +	if (!bpf_check_btf_func_arg_access(off, size, type))
> +		return false;
> +	return btf_ctx_access(off, size, type, prog, info);
> +}
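To be concrete, the renamed helpers would look like this (untested,
bodies unchanged from the patch):

	static inline bool bpf_tracing_ctx_access(int off, int size,
						  enum bpf_access_type type)
	{
		if (off < 0 || off >= sizeof(__u64) * MAX_BPF_FUNC_ARGS)
			return false;
		if (type != BPF_READ)
			return false;
		if (off % size != 0)
			return false;
		return true;
	}

	static inline bool bpf_tracing_btf_ctx_access(int off, int size,
						      enum bpf_access_type type,
						      const struct bpf_prog *prog,
						      struct bpf_insn_access_aux *info)
	{
		if (!bpf_tracing_ctx_access(off, size, type))
			return false;
		return btf_ctx_access(off, size, type, prog, info);
	}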


* Re: [PATCH bpf-next v2 4/5] bpf: hook .test_run for struct_ops program
  2021-10-16 12:48 ` [PATCH bpf-next v2 4/5] bpf: hook .test_run for struct_ops program Hou Tao
  2021-10-20  1:25   ` Martin KaFai Lau
@ 2021-10-20  5:24   ` Martin KaFai Lau
  1 sibling, 0 replies; 10+ messages in thread
From: Martin KaFai Lau @ 2021-10-20  5:24 UTC (permalink / raw)
  To: Hou Tao
  Cc: Alexei Starovoitov, Yonghong Song, Daniel Borkmann,
	Andrii Nakryiko, netdev, bpf

On Sat, Oct 16, 2021 at 08:48:05PM +0800, Hou Tao wrote:
> bpf_struct_ops_test_run() will be used to run a struct_ops program
> from bpf_dummy_ops. For now its main purpose is to test the handling
> of return values and multiple arguments.
> 
> Signed-off-by: Hou Tao <houtao1@huawei.com>
> ---
>  kernel/bpf/bpf_struct_ops.c | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
> index 44be101f2562..ceedc9f0f786 100644
> --- a/kernel/bpf/bpf_struct_ops.c
> +++ b/kernel/bpf/bpf_struct_ops.c
> @@ -11,6 +11,9 @@
>  #include <linux/refcount.h>
>  #include <linux/mutex.h>
>  
> +static int bpf_struct_ops_test_run(struct bpf_prog *prog,
> +				   const union bpf_attr *kattr,
> +				   union bpf_attr __user *uattr);
>  enum bpf_struct_ops_state {
>  	BPF_STRUCT_OPS_STATE_INIT,
>  	BPF_STRUCT_OPS_STATE_INUSE,
> @@ -93,6 +96,7 @@ const struct bpf_verifier_ops bpf_struct_ops_verifier_ops = {
>  };
>  
>  const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
> +	.test_run = bpf_struct_ops_test_run,
>  };
>  
>  static const struct btf_type *module_type;
> @@ -667,3 +671,16 @@ void bpf_struct_ops_put(const void *kdata)
>  		call_rcu(&st_map->rcu, bpf_struct_ops_put_rcu);
>  	}
>  }
> +
> +static int bpf_struct_ops_test_run(struct bpf_prog *prog,
> +				   const union bpf_attr *kattr,
> +				   union bpf_attr __user *uattr)
> +{
> +	const struct bpf_struct_ops *st_ops;
> +
> +	st_ops = bpf_struct_ops_find(prog->aux->attach_btf_id);
Checking bpf_bpf_dummy_ops.type_id == prog->aux->attach_btf_id would be
just as good; then bpf_struct_ops_find() is not needed.

> +	if (st_ops != &bpf_bpf_dummy_ops)
> +		return -EOPNOTSUPP;
> +
> +	return bpf_dummy_struct_ops_test_run(prog, kattr, uattr);
bpf_dummy_struct_ops_test_run() is only available under CONFIG_NET.

How about checking the attach_btf_id in bpf_dummy_struct_ops_test_run()
and then renaming
s/bpf_dummy_struct_ops_test_run/bpf_struct_ops_test_run/?

and do this in bpf_struct_ops_prog_ops:

const struct bpf_prog_ops bpf_struct_ops_prog_ops = {
#ifdef CONFIG_NET
	.test_run = bpf_struct_ops_test_run,
#endif
};

Take a look at some test_run in bpf_trace.c as examples.
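i.e. something like this (untested) for the renamed
bpf_struct_ops_test_run():

	int bpf_struct_ops_test_run(struct bpf_prog *prog,
				    const union bpf_attr *kattr,
				    union bpf_attr __user *uattr)
	{
		if (prog->aux->attach_btf_id != bpf_bpf_dummy_ops.type_id)
			return -EOPNOTSUPP;
		...
	}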

