* [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF
@ 2021-08-30 17:34 Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 1/8] bpf: Introduce BPF support for kernel module function calls Kumar Kartikeya Dwivedi
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This set enables calls to kernel module functions from BPF programs, and also
modifies verifier logic to permit invalid kernel function calls as long as they
are pruned as part of dead code elimination. This provides better runtime
portability for BPF objects, which can conditionally disable parts of code that
the verifier later prunes (e.g. based on const volatile vars or kconfig
options). libbpf modifications accompany the kernel changes to support module
function calls.

It also converts the TCP congestion control modules (bbr, cubic, dctcp) to use
the module kfunc support instead of relying on IS_BUILTIN ifdefs.
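
To make the portability pattern concrete, the sketch below (modelled on the
selftest in patch 8/8, with illustrative guard and kfunc names) shows a BPF
program that guards a possibly-missing module kfunc behind a const volatile
rodata variable:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* Module kfunc that may be absent on the running kernel. */
extern void bpf_testmod_test_mod_kfunc(int i) __ksym;

/* Userspace sets this in the rodata map before load if the kfunc exists. */
const volatile int use_mod_kfunc = 0;

SEC("raw_tp/sys_enter")
int handler(const void *ctx)
{
	/* If the kfunc cannot be resolved, libbpf encodes the call with
	 * imm = 0, off = 0; with use_mod_kfunc left at 0 the call is dead
	 * code, the verifier prunes it, and the program still loads.
	 */
	if (use_mod_kfunc)
		bpf_testmod_test_mod_kfunc(42);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";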

Kumar Kartikeya Dwivedi (8):
  bpf: Introduce BPF support for kernel module function calls
  bpf: Be conservative during verification for invalid kfunc calls
  libbpf: Support kernel module function calls
  libbpf: Resolve invalid kfunc calls with imm = 0, off = 0
  tools: Allow specifying base BTF file in resolve_btfids
  bpf: btf: Introduce helpers for dynamic BTF set registration
  bpf: enable TCP congestion control kfunc from modules
  bpf, selftests: Add basic test for module kfunc call

 include/linux/bpf.h                           |   1 +
 include/linux/bpfptr.h                        |   1 +
 include/linux/btf.h                           |  18 +++
 include/linux/filter.h                        |   9 ++
 include/uapi/linux/bpf.h                      |   3 +-
 kernel/bpf/btf.c                              |  37 +++++++
 kernel/bpf/core.c                             |  14 +++
 kernel/bpf/syscall.c                          |  55 +++++++++-
 kernel/bpf/verifier.c                         | 103 ++++++++++++++----
 kernel/trace/bpf_trace.c                      |   1 +
 net/ipv4/bpf_tcp_ca.c                         |  34 +-----
 net/ipv4/tcp_bbr.c                            |  28 ++++-
 net/ipv4/tcp_cubic.c                          |  26 ++++-
 net/ipv4/tcp_dctcp.c                          |  26 ++++-
 scripts/Makefile.modfinal                     |   1 +
 tools/bpf/resolve_btfids/main.c               |  19 +++-
 tools/include/uapi/linux/bpf.h                |   3 +-
 tools/lib/bpf/bpf.c                           |   3 +
 tools/lib/bpf/libbpf.c                        |  91 ++++++++++++++--
 tools/lib/bpf/libbpf_internal.h               |   2 +
 tools/testing/selftests/bpf/Makefile          |   3 +-
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  23 +++-
 .../selftests/bpf/prog_tests/ksyms_module.c   |  10 +-
 .../selftests/bpf/progs/test_ksyms_module.c   |   9 ++
 24 files changed, 446 insertions(+), 74 deletions(-)

-- 
2.33.0



* [PATCH bpf-next RFC v1 1/8] bpf: Introduce BPF support for kernel module function calls
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-08-30 20:01   ` Alexei Starovoitov
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 2/8] bpf: Be conservative during verification for invalid kfunc calls Kumar Kartikeya Dwivedi
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This change adds kernel-side support for BPF programs to call kernel module
functions. Userspace prepares an array of module BTF fds that is passed in
during BPF_PROG_LOAD. In the kernel, this module BTF array is placed in the
auxiliary struct for bpf_prog.

The verifier then uses insn->off to index into this table: userspace stores
the array index incremented by 1 in insn->off, and the kernel subtracts one
before indexing. This lets insn->off == 0 denote vmlinux BTF, while
prog->aux->kfunc_btf_tab[insn->off - 1] is used for module BTFs.
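
As a hedged illustration of the new UAPI (not itself part of this patch), a
loader would fill the two new bpf_attr fields roughly as below; the fd
variable name is made up and the updated uapi header from this patch is
assumed:

#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int prog_load_with_module_btf(union bpf_attr *attr, int mod_btf_fd)
{
	int btf_fds[1] = { mod_btf_fd };	/* e.g. fd of bpf_testmod's BTF */

	attr->kfunc_btf_fds = (__u64)(unsigned long)btf_fds;
	attr->kfunc_btf_fds_cnt = 1;
	/* A BPF_PSEUDO_KFUNC_CALL insn with off == 1 now resolves against
	 * btf_fds[0]; off == 0 still means vmlinux BTF.
	 */
	return syscall(__NR_bpf, BPF_PROG_LOAD, attr, sizeof(*attr));
}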

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h            |  1 +
 include/linux/filter.h         |  9 ++++
 include/uapi/linux/bpf.h       |  3 +-
 kernel/bpf/core.c              | 14 ++++++
 kernel/bpf/syscall.c           | 55 +++++++++++++++++++++-
 kernel/bpf/verifier.c          | 85 ++++++++++++++++++++++++++--------
 tools/include/uapi/linux/bpf.h |  3 +-
 7 files changed, 147 insertions(+), 23 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index f4c16f19f83e..39f59e5f3a26 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -874,6 +874,7 @@ struct bpf_prog_aux {
 	void *jit_data; /* JIT specific data. arch dependent */
 	struct bpf_jit_poke_descriptor *poke_tab;
 	struct bpf_kfunc_desc_tab *kfunc_tab;
+	struct bpf_kfunc_btf_tab *kfunc_btf_tab;
 	u32 size_poke_tab;
 	struct bpf_ksym ksym;
 	const struct bpf_prog_ops *ops;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 7d248941ecea..46451891633d 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -592,6 +592,15 @@ struct bpf_prog {
 	struct bpf_insn		insnsi[];
 };
 
+#define MAX_KFUNC_DESCS 256
+/* There can only be at most MAX_KFUNC_DESCS module BTFs for kernel module
+ * function calls.
+ */
+struct bpf_kfunc_btf_tab {
+	u32 nr_btfs;
+	struct btf_mod_pair btfs[];
+};
+
 struct sk_filter {
 	refcount_t	refcnt;
 	struct rcu_head	rcu;
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 791f31dd0abe..4cbb2082a553 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -1334,8 +1334,9 @@ union bpf_attr {
 			/* or valid module BTF object fd or 0 to attach to vmlinux */
 			__u32		attach_btf_obj_fd;
 		};
-		__u32		:32;		/* pad */
+		__u32		kfunc_btf_fds_cnt; /* reuse hole for count of BTF fds below */
 		__aligned_u64	fd_array;	/* array of FDs */
+		__aligned_u64   kfunc_btf_fds;  /* array of BTF FDs for module kfunc support */
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 9f4636d021b1..73ba6d862df3 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2249,12 +2249,26 @@ static void bpf_free_used_btfs(struct bpf_prog_aux *aux)
 	kfree(aux->used_btfs);
 }
 
+static void bpf_free_kfunc_btf_tab(struct bpf_prog_aux *aux)
+{
+	struct bpf_kfunc_btf_tab *tab = aux->kfunc_btf_tab;
+
+	if (tab) {
+		while (tab->nr_btfs--) {
+			module_put(tab->btfs[tab->nr_btfs].module);
+			btf_put(tab->btfs[tab->nr_btfs].btf);
+		}
+		kfree(tab);
+	}
+}
+
 static void bpf_prog_free_deferred(struct work_struct *work)
 {
 	struct bpf_prog_aux *aux;
 	int i;
 
 	aux = container_of(work, struct bpf_prog_aux, work);
+	bpf_free_kfunc_btf_tab(aux);
 	bpf_free_used_maps(aux);
 	bpf_free_used_btfs(aux);
 	if (bpf_prog_is_dev_bound(aux))
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 4e50c0bfdb7d..bbbd664b2872 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2156,16 +2156,16 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
 }
 
 /* last field in 'union bpf_attr' used by this command */
-#define	BPF_PROG_LOAD_LAST_FIELD fd_array
+#define	BPF_PROG_LOAD_LAST_FIELD kfunc_btf_fds
 
 static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 {
 	enum bpf_prog_type type = attr->prog_type;
 	struct bpf_prog *prog, *dst_prog = NULL;
 	struct btf *attach_btf = NULL;
-	int err;
 	char license[128];
 	bool is_gpl;
+	int err;
 
 	if (CHECK_ATTR(BPF_PROG_LOAD))
 		return -EINVAL;
@@ -2204,6 +2204,8 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 		return -EPERM;
 	if (is_perfmon_prog_type(type) && !perfmon_capable())
 		return -EPERM;
+	if (attr->kfunc_btf_fds_cnt > MAX_KFUNC_DESCS)
+		return -E2BIG;
 
 	/* attach_prog_fd/attach_btf_obj_fd can specify fd of either bpf_prog
 	 * or btf, we need to check which one it is
@@ -2254,6 +2256,55 @@ static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
 		return -ENOMEM;
 	}
 
+	if (attr->kfunc_btf_fds_cnt) {
+		struct bpf_kfunc_btf_tab *tab;
+		int fds[MAX_KFUNC_DESCS], i;
+		bpfptr_t kfunc_btf_fds;
+		u32 kfunc_btf_size, n;
+
+		kfunc_btf_size = min_t(u32, MAX_KFUNC_DESCS, attr->kfunc_btf_fds_cnt);
+		kfunc_btf_fds = make_bpfptr(attr->kfunc_btf_fds, uattr.is_kernel);
+
+		err = -EFAULT;
+		if (copy_from_bpfptr(fds, kfunc_btf_fds, kfunc_btf_size * sizeof(int)))
+			goto free_prog;
+
+		err = -ENOMEM;
+
+		n = kfunc_btf_size;
+		kfunc_btf_size *= sizeof(prog->aux->kfunc_btf_tab->btfs[0]);
+		kfunc_btf_size += sizeof(*prog->aux->kfunc_btf_tab);
+		prog->aux->kfunc_btf_tab = kzalloc(kfunc_btf_size, GFP_KERNEL);
+		if (!prog->aux->kfunc_btf_tab)
+			goto free_prog;
+
+		tab = prog->aux->kfunc_btf_tab;
+		for (i = 0; i < n; i++) {
+			struct btf_mod_pair *p;
+			struct btf *mod_btf;
+
+			mod_btf = btf_get_by_fd(fds[i]);
+			if (IS_ERR(mod_btf)) {
+				err = PTR_ERR(mod_btf);
+				goto free_prog;
+			}
+			if (!btf_is_module(mod_btf)) {
+				err = -EINVAL;
+				btf_put(mod_btf);
+				goto free_prog;
+			}
+
+			p = &tab->btfs[tab->nr_btfs];
+			p->module = btf_try_get_module(mod_btf);
+			if (!p->module) {
+				btf_put(mod_btf);
+				goto free_prog;
+			}
+			p->btf = mod_btf;
+			tab->nr_btfs++;
+		}
+	}
+
 	prog->expected_attach_type = attr->expected_attach_type;
 	prog->aux->attach_btf = attach_btf;
 	prog->aux->attach_btf_id = attr->attach_btf_id;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 206c221453cf..de0670a8b1df 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1632,7 +1632,6 @@ struct bpf_kfunc_desc {
 	s32 imm;
 };
 
-#define MAX_KFUNC_DESCS 256
 struct bpf_kfunc_desc_tab {
 	struct bpf_kfunc_desc descs[MAX_KFUNC_DESCS];
 	u32 nr_descs;
@@ -1660,13 +1659,45 @@ find_kfunc_desc(const struct bpf_prog *prog, u32 func_id)
 		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id);
 }
 
-static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
+static struct btf *find_kfunc_desc_btf(struct bpf_verifier_env *env,
+				       u32 func_id, s16 offset)
+{
+	struct bpf_kfunc_btf_tab *btf_tab;
+
+	btf_tab = env->prog->aux->kfunc_btf_tab;
+	/* offset can be MAX_KFUNC_DESCS, since index into the array is offset - 1,
+	 * as we reserve offset == 0 for btf_vmlinux
+	 */
+	if (offset < 0 || offset > MAX_KFUNC_DESCS) {
+		verbose(env, "offset %d is incorrect for kernel function call\n", (int)offset);
+		return ERR_PTR(-EINVAL);
+	}
+
+	if (offset) {
+		if (!btf_tab) {
+			verbose(env,
+				"offset %d for kfunc call but no kernel module BTFs passed\n",
+				(int)offset);
+			return ERR_PTR(-EINVAL);
+		} else if (offset > btf_tab->nr_btfs) {
+			verbose(env,
+				"offset %d incorrect for module BTF array with %u descriptors\n",
+				(int)offset, btf_tab->nr_btfs);
+			return ERR_PTR(-EINVAL);
+		}
+		return btf_tab->btfs[offset - 1].btf;
+	}
+	return btf_vmlinux ?: ERR_PTR(-ENOENT);
+}
+
+static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 {
 	const struct btf_type *func, *func_proto;
 	struct bpf_kfunc_desc_tab *tab;
 	struct bpf_prog_aux *prog_aux;
 	struct bpf_kfunc_desc *desc;
 	const char *func_name;
+	struct btf *desc_btf;
 	unsigned long addr;
 	int err;
 
@@ -1699,6 +1730,12 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
 		prog_aux->kfunc_tab = tab;
 	}
 
+	desc_btf = find_kfunc_desc_btf(env, func_id, offset);
+	if (IS_ERR(desc_btf)) {
+		verbose(env, "failed to find BTF for kernel function\n");
+		return PTR_ERR(desc_btf);
+	}
+
 	if (find_kfunc_desc(env->prog, func_id))
 		return 0;
 
@@ -1707,20 +1744,20 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
 		return -E2BIG;
 	}
 
-	func = btf_type_by_id(btf_vmlinux, func_id);
+	func = btf_type_by_id(desc_btf, func_id);
 	if (!func || !btf_type_is_func(func)) {
 		verbose(env, "kernel btf_id %u is not a function\n",
 			func_id);
 		return -EINVAL;
 	}
-	func_proto = btf_type_by_id(btf_vmlinux, func->type);
+	func_proto = btf_type_by_id(desc_btf, func->type);
 	if (!func_proto || !btf_type_is_func_proto(func_proto)) {
 		verbose(env, "kernel function btf_id %u does not have a valid func_proto\n",
 			func_id);
 		return -EINVAL;
 	}
 
-	func_name = btf_name_by_offset(btf_vmlinux, func->name_off);
+	func_name = btf_name_by_offset(desc_btf, func->name_off);
 	addr = kallsyms_lookup_name(func_name);
 	if (!addr) {
 		verbose(env, "cannot find address for kernel function %s\n",
@@ -1731,7 +1768,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
 	desc->imm = BPF_CAST_CALL(addr) - __bpf_call_base;
-	err = btf_distill_func_proto(&env->log, btf_vmlinux,
+	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
 				     &desc->func_model);
 	if (!err)
@@ -1815,7 +1852,7 @@ static int add_subprog_and_kfunc(struct bpf_verifier_env *env)
 		} else if (bpf_pseudo_call(insn)) {
 			ret = add_subprog(env, i + insn->imm + 1);
 		} else {
-			ret = add_kfunc_call(env, insn->imm);
+			ret = add_kfunc_call(env, insn->imm, insn->off);
 		}
 
 		if (ret < 0)
@@ -2152,12 +2189,17 @@ static int get_prev_insn_idx(struct bpf_verifier_state *st, int i,
 static const char *disasm_kfunc_name(void *data, const struct bpf_insn *insn)
 {
 	const struct btf_type *func;
+	struct btf *desc_btf;
 
 	if (insn->src_reg != BPF_PSEUDO_KFUNC_CALL)
 		return NULL;
 
-	func = btf_type_by_id(btf_vmlinux, insn->imm);
-	return btf_name_by_offset(btf_vmlinux, func->name_off);
+	desc_btf = find_kfunc_desc_btf(data, insn->imm, insn->off);
+	if (IS_ERR(desc_btf))
+		return "<error>";
+
+	func = btf_type_by_id(desc_btf, insn->imm);
+	return btf_name_by_offset(desc_btf, func->name_off);
 }
 
 /* For given verifier state backtrack_insn() is called from the last insn to
@@ -6482,12 +6524,17 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	const char *func_name, *ptr_type_name;
 	u32 i, nargs, func_id, ptr_type_id;
 	const struct btf_param *args;
+	struct btf *desc_btf;
 	int err;
 
+	desc_btf = find_kfunc_desc_btf(env, insn->imm, insn->off);
+	if (IS_ERR(desc_btf))
+		return PTR_ERR(desc_btf);
+
 	func_id = insn->imm;
-	func = btf_type_by_id(btf_vmlinux, func_id);
-	func_name = btf_name_by_offset(btf_vmlinux, func->name_off);
-	func_proto = btf_type_by_id(btf_vmlinux, func->type);
+	func = btf_type_by_id(desc_btf, func_id);
+	func_name = btf_name_by_offset(desc_btf, func->name_off);
+	func_proto = btf_type_by_id(desc_btf, func->type);
 
 	if (!env->ops->check_kfunc_call ||
 	    !env->ops->check_kfunc_call(func_id)) {
@@ -6497,7 +6544,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	}
 
 	/* Check the arguments */
-	err = btf_check_kfunc_arg_match(env, btf_vmlinux, func_id, regs);
+	err = btf_check_kfunc_arg_match(env, desc_btf, func_id, regs);
 	if (err)
 		return err;
 
@@ -6505,15 +6552,15 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		mark_reg_not_init(env, regs, caller_saved[i]);
 
 	/* Check return type */
-	t = btf_type_skip_modifiers(btf_vmlinux, func_proto->type, NULL);
+	t = btf_type_skip_modifiers(desc_btf, func_proto->type, NULL);
 	if (btf_type_is_scalar(t)) {
 		mark_reg_unknown(env, regs, BPF_REG_0);
 		mark_btf_func_reg_size(env, BPF_REG_0, t->size);
 	} else if (btf_type_is_ptr(t)) {
-		ptr_type = btf_type_skip_modifiers(btf_vmlinux, t->type,
+		ptr_type = btf_type_skip_modifiers(desc_btf, t->type,
 						   &ptr_type_id);
 		if (!btf_type_is_struct(ptr_type)) {
-			ptr_type_name = btf_name_by_offset(btf_vmlinux,
+			ptr_type_name = btf_name_by_offset(desc_btf,
 							   ptr_type->name_off);
 			verbose(env, "kernel function %s returns pointer type %s %s is not supported\n",
 				func_name, btf_type_str(ptr_type),
@@ -6521,7 +6568,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			return -EINVAL;
 		}
 		mark_reg_known_zero(env, regs, BPF_REG_0);
-		regs[BPF_REG_0].btf = btf_vmlinux;
+		regs[BPF_REG_0].btf = desc_btf;
 		regs[BPF_REG_0].type = PTR_TO_BTF_ID;
 		regs[BPF_REG_0].btf_id = ptr_type_id;
 		mark_btf_func_reg_size(env, BPF_REG_0, sizeof(void *));
@@ -6532,7 +6579,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	for (i = 0; i < nargs; i++) {
 		u32 regno = i + 1;
 
-		t = btf_type_skip_modifiers(btf_vmlinux, args[i].type, NULL);
+		t = btf_type_skip_modifiers(desc_btf, args[i].type, NULL);
 		if (btf_type_is_ptr(t))
 			mark_btf_func_reg_size(env, regno, sizeof(void *));
 		else
@@ -11070,7 +11117,6 @@ static int do_check(struct bpf_verifier_env *env)
 			env->jmps_processed++;
 			if (opcode == BPF_CALL) {
 				if (BPF_SRC(insn->code) != BPF_K ||
-				    insn->off != 0 ||
 				    (insn->src_reg != BPF_REG_0 &&
 				     insn->src_reg != BPF_PSEUDO_CALL &&
 				     insn->src_reg != BPF_PSEUDO_KFUNC_CALL) ||
@@ -12425,6 +12471,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
 		func[i]->jit_requested = 1;
 		func[i]->aux->kfunc_tab = prog->aux->kfunc_tab;
+		func[i]->aux->kfunc_btf_tab = prog->aux->kfunc_btf_tab;
 		func[i]->aux->linfo = prog->aux->linfo;
 		func[i]->aux->nr_linfo = prog->aux->nr_linfo;
 		func[i]->aux->jited_linfo = prog->aux->jited_linfo;
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 791f31dd0abe..4cbb2082a553 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -1334,8 +1334,9 @@ union bpf_attr {
 			/* or valid module BTF object fd or 0 to attach to vmlinux */
 			__u32		attach_btf_obj_fd;
 		};
-		__u32		:32;		/* pad */
+		__u32		kfunc_btf_fds_cnt; /* reuse hole for count of BTF fds below */
 		__aligned_u64	fd_array;	/* array of FDs */
+		__aligned_u64   kfunc_btf_fds;  /* array of BTF FDs for module kfunc support */
 	};
 
 	struct { /* anonymous struct used by BPF_OBJ_* commands */
-- 
2.33.0



* [PATCH bpf-next RFC v1 2/8] bpf: Be conservative during verification for invalid kfunc calls
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 1/8] bpf: Introduce BPF support for kernel module function calls Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls Kumar Kartikeya Dwivedi
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This change modifies the BPF verifier to return an error for invalid kfunc
calls specially marked by userspace (with insn->imm == 0) only after dead
instructions have been eliminated. The error is raised in the fixup stage,
while processing is skipped during the add and check stages.

If such an invalid call is dropped by dead code elimination, the fixup stage
never encounters insn->imm == 0; otherwise it bails out and returns an error.

Userspace can use this to branch between old and new kfunc helpers across
kernel versions by setting a rodata map value before loading the BPF program,
enhancing runtime portability. A subsequent patch introduces libbpf support
for this.
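
A hedged sketch of the intended userspace flow (the skeleton, kfunc and
variable names are hypothetical, mirroring the selftest added later in this
series):

#include <stdbool.h>
#include "prog.skel.h"	/* hypothetical skeleton generated by bpftool */

/* Hypothetical feature probe, e.g. a BTF lookup for new_kfunc(). */
extern bool kernel_has_new_kfunc(void);

static int load_prog(void)
{
	struct prog *skel = prog__open();
	int err;

	if (!skel)
		return -1;
	/* The BPF program guards its call to new_kfunc() with
	 * 'if (use_new_kfunc)'. Leaving the guard at 0 on older kernels makes
	 * the call dead code: libbpf emits it with imm = 0, off = 0, the
	 * verifier prunes it, and the load succeeds anyway.
	 */
	skel->rodata->use_new_kfunc = kernel_has_new_kfunc();
	err = prog__load(skel);
	if (err)
		prog__destroy(skel);
	return err;
}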

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index de0670a8b1df..9904b9a96b04 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1730,6 +1730,15 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		prog_aux->kfunc_tab = tab;
 	}
 
+	/* btf_idr allocates IDs from 1, so func_id == 0 is always invalid, but
+	 * instead of returning an error, be conservative and wait until the
+	 * code elimination pass before returning error, so that invalid calls
+	 * that get pruned out can be in BPF programs loaded from userspace.
+	 * It is also required that offset be 0.
+	 */
+	if (!func_id && !offset)
+		return 0;
+
 	desc_btf = find_kfunc_desc_btf(env, func_id, offset);
 	if (IS_ERR(desc_btf)) {
 		verbose(env, "failed to find BTF for kernel function\n");
@@ -6527,6 +6536,10 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	struct btf *desc_btf;
 	int err;
 
+	/* skip for now, but return error when we find this in fixup_kfunc_call */
+	if (!insn->imm)
+		return 0;
+
 	desc_btf = find_kfunc_desc_btf(env, insn->imm, insn->off);
 	if (IS_ERR(desc_btf))
 		return PTR_ERR(desc_btf);
@@ -12658,6 +12671,11 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
 {
 	const struct bpf_kfunc_desc *desc;
 
+	if (!insn->imm) {
+		verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
+		return -EINVAL;
+	}
+
 	/* insn->imm has the btf func_id. Replace it with
 	 * an address (relative to __bpf_base_call).
 	 */
-- 
2.33.0



* [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 1/8] bpf: Introduce BPF support for kernel module function calls Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 2/8] bpf: Be conservative during verification for invalid kfunc calls Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-09-01  0:55   ` Andrii Nakryiko
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 4/8] libbpf: Resolve invalid kfunc calls with imm = 0, off = 0 Kumar Kartikeya Dwivedi
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/lib/bpf/bpf.c             |  3 ++
 tools/lib/bpf/libbpf.c          | 71 +++++++++++++++++++++++++++++++--
 tools/lib/bpf/libbpf_internal.h |  2 +
 3 files changed, 73 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 2401fad090c5..df2d1ceba146 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -265,6 +265,9 @@ int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr)
 	attr.line_info_cnt = load_attr->line_info_cnt;
 	attr.line_info = ptr_to_u64(load_attr->line_info);
 
+	attr.kfunc_btf_fds = ptr_to_u64(load_attr->kfunc_btf_fds);
+	attr.kfunc_btf_fds_cnt = load_attr->kfunc_btf_fds_cnt;
+
 	if (load_attr->name)
 		memcpy(attr.prog_name, load_attr->name,
 		       min(strlen(load_attr->name), (size_t)BPF_OBJ_NAME_LEN - 1));
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 88d8825fc6f6..c4677ef97caa 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -419,6 +419,12 @@ struct extern_desc {
 
 			/* local btf_id of the ksym extern's type. */
 			__u32 type_id;
+			/* offset to be patched in for insn->off,
+			 * this is 0 for btf_vmlinux, and index + 1
+			 * for module BTF, where index is BTF index in
+			 * obj->kfunc_btf_fds.fds array
+			 */
+			__u32 offset;
 		} ksym;
 	};
 };
@@ -515,6 +521,13 @@ struct bpf_object {
 	void *priv;
 	bpf_object_clear_priv_t clear_priv;
 
+	struct {
+		struct hashmap *map;
+		int *fds;
+		size_t cap_cnt;
+		__u32 n_fds;
+	} kfunc_btf_fds;
+
 	char path[];
 };
 #define obj_elf_valid(o)	((o)->efile.elf)
@@ -5327,6 +5340,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
 			ext = &obj->externs[relo->sym_off];
 			insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
 			insn[0].imm = ext->ksym.kernel_btf_id;
+			insn[0].off = ext->ksym.offset;
 			break;
 		case RELO_SUBPROG_ADDR:
 			if (insn[0].src_reg != BPF_PSEUDO_FUNC) {
@@ -6122,6 +6136,11 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	load_attr.log_level = prog->log_level;
 	load_attr.prog_flags = prog->prog_flags;
 
+	if (prog->obj->kfunc_btf_fds.n_fds) {
+		load_attr.kfunc_btf_fds = prog->obj->kfunc_btf_fds.fds;
+		load_attr.kfunc_btf_fds_cnt = prog->obj->kfunc_btf_fds.n_fds;
+	}
+
 	if (prog->obj->gen_loader) {
 		bpf_gen__prog_load(prog->obj->gen_loader, &load_attr,
 				   prog - prog->obj->programs);
@@ -6723,9 +6742,49 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
 	}
 
 	if (kern_btf != obj->btf_vmlinux) {
-		pr_warn("extern (func ksym) '%s': function in kernel module is not supported\n",
-			ext->name);
-		return -ENOTSUP;
+		size_t index;
+		void *value;
+
+		/* Lazy initialize btf->fd index map */
+		if (!obj->kfunc_btf_fds.map) {
+			obj->kfunc_btf_fds.map = hashmap__new(bpf_core_hash_fn, bpf_core_equal_fn,
+							      NULL);
+			if (!obj->kfunc_btf_fds.map)
+				return -ENOMEM;
+
+			obj->kfunc_btf_fds.fds = calloc(8, sizeof(*obj->kfunc_btf_fds.fds));
+			if (!obj->kfunc_btf_fds.fds) {
+				hashmap__free(obj->kfunc_btf_fds.map);
+				return -ENOMEM;
+			}
+			obj->kfunc_btf_fds.cap_cnt = 8;
+		}
+
+		if (!hashmap__find(obj->kfunc_btf_fds.map, kern_btf, &value)) {
+			size_t *cap_cnt = &obj->kfunc_btf_fds.cap_cnt;
+			/* Not found, insert BTF fd into slot, and grab next
+			 * index from the fd array.
+			 */
+			ret = libbpf_ensure_mem((void **)&obj->kfunc_btf_fds.fds,
+						cap_cnt, sizeof(int), obj->kfunc_btf_fds.n_fds + 1);
+			if (ret)
+				return ret;
+			index = obj->kfunc_btf_fds.n_fds++;
+			obj->kfunc_btf_fds.fds[index] = kern_btf_fd;
+			value = (void *)index;
+			ret = hashmap__add(obj->kfunc_btf_fds.map, kern_btf, &value);
+			if (ret)
+				return ret;
+
+		} else {
+			index = (size_t)value;
+		}
+		/* index starts from 0, so shift offset by 1 as offset == 0 is reserved
+		 * for btf_vmlinux in the kernel
+		 */
+		ext->ksym.offset = index + 1;
+	} else {
+		ext->ksym.offset = 0;
 	}
 
 	kern_func = btf__type_by_id(kern_btf, kfunc_id);
@@ -6901,6 +6960,12 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
 			err = bpf_gen__finish(obj->gen_loader);
 	}
 
+	/* clean up kfunc_btf */
+	hashmap__free(obj->kfunc_btf_fds.map);
+	obj->kfunc_btf_fds.map = NULL;
+	zfree(&obj->kfunc_btf_fds.fds);
+	obj->kfunc_btf_fds.cap_cnt = obj->kfunc_btf_fds.n_fds = 0;
+
 	/* clean up module BTFs */
 	for (i = 0; i < obj->btf_module_cnt; i++) {
 		close(obj->btf_modules[i].fd);
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index 533b0211f40a..701719d9caaf 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -276,6 +276,8 @@ struct bpf_prog_load_params {
 	__u32 log_level;
 	char *log_buf;
 	size_t log_buf_sz;
+	int *kfunc_btf_fds;
+	__u32 kfunc_btf_fds_cnt;
 };
 
 int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr);
-- 
2.33.0



* [PATCH bpf-next RFC v1 4/8] libbpf: Resolve invalid kfunc calls with imm = 0, off = 0
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (2 preceding siblings ...)
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-09-01  0:35   ` Andrii Nakryiko
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 5/8] tools: Allow specifying base BTF file in resolve_btfids Kumar Kartikeya Dwivedi
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

Preserve these calls so that the verifier can still load the program if they
turn out to be unreachable and are removed by dead code elimination during
program load. If such a call remains reachable, the verifier rejects the
program at load time.
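
For reference, the instruction libbpf leaves in place for an unresolved kfunc
extern is sketched below (field values follow the relocation logic in patch 3
and this patch; purely illustrative):

#include <linux/bpf.h>

/* Call insn emitted for an unresolved kfunc extern. */
static const struct bpf_insn unresolved_kfunc_call = {
	.code    = BPF_JMP | BPF_CALL,
	.src_reg = BPF_PSEUDO_KFUNC_CALL,
	.dst_reg = 0,
	.off     = 0,	/* 0 is reserved for vmlinux BTF, no module index */
	.imm     = 0,	/* btf_id 0: only valid if eliminated as dead code */
};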

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/lib/bpf/libbpf.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index c4677ef97caa..9df90098f111 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -6736,9 +6736,14 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
 	kfunc_id = find_ksym_btf_id(obj, ext->name, BTF_KIND_FUNC,
 				    &kern_btf, &kern_btf_fd);
 	if (kfunc_id < 0) {
-		pr_warn("extern (func ksym) '%s': not found in kernel BTF\n",
+		pr_warn("extern (func ksym) '%s': not found in kernel BTF, encoding btf_id as 0\n",
 			ext->name);
-		return kfunc_id;
+		/* keep invalid kfuncs, so that verifier can load the program if
+		 * they get removed during DCE pass in the verifier.
+		 * The encoding must be insn->imm = 0, insn->off = 0.
+		 */
+		kfunc_id = kern_btf_fd = 0;
+		goto resolve;
 	}
 
 	if (kern_btf != obj->btf_vmlinux) {
@@ -6798,11 +6803,18 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
 		return -EINVAL;
 	}
 
+resolve:
 	ext->is_set = true;
 	ext->ksym.kernel_btf_obj_fd = kern_btf_fd;
 	ext->ksym.kernel_btf_id = kfunc_id;
-	pr_debug("extern (func ksym) '%s': resolved to kernel [%d]\n",
-		 ext->name, kfunc_id);
+	if (kfunc_id) {
+		pr_debug("extern (func ksym) '%s': resolved to kernel [%d]\n",
+			 ext->name, kfunc_id);
+	} else {
+		ext->ksym.offset = 0;
+		pr_debug("extern (func ksym) '%s': added special invalid kfunc with imm = 0\n",
+			 ext->name);
+	}
 
 	return 0;
 }
-- 
2.33.0



* [PATCH bpf-next RFC v1 5/8] tools: Allow specifying base BTF file in resolve_btfids
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (3 preceding siblings ...)
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 4/8] libbpf: Resolve invalid kfunc calls with imm = 0, off = 0 Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 6/8] bpf: btf: Introduce helpers for dynamic BTF set registration Kumar Kartikeya Dwivedi
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This commit allows specifying the base BTF used for resolving btf id
lists/sets at link time in the resolve_btfids tool. The base BTF is set to
NULL if no path is passed. This allows resolving BTF ids for kernel module
objects.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/bpf/resolve_btfids/main.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index de6365b53c9c..206e1120082f 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -89,6 +89,7 @@ struct btf_id {
 struct object {
 	const char *path;
 	const char *btf;
+	const char *base_btf_path;
 
 	struct {
 		int		 fd;
@@ -477,16 +478,27 @@ static int symbols_resolve(struct object *obj)
 	int nr_structs  = obj->nr_structs;
 	int nr_unions   = obj->nr_unions;
 	int nr_funcs    = obj->nr_funcs;
+	struct btf *base_btf = NULL;
 	int err, type_id;
 	struct btf *btf;
 	__u32 nr_types;
 
-	btf = btf__parse(obj->btf ?: obj->path, NULL);
+	if (obj->base_btf_path) {
+		base_btf = btf__parse(obj->base_btf_path, NULL);
+		err = libbpf_get_error(base_btf);
+		if (err) {
+			pr_err("FAILED: load base BTF from %s: %s\n",
+			       obj->base_btf_path, strerror(-err));
+			return -1;
+		}
+	}
+
+	btf = btf__parse_split(obj->btf ?: obj->path, base_btf);
 	err = libbpf_get_error(btf);
 	if (err) {
 		pr_err("FAILED: load BTF from %s: %s\n",
 			obj->btf ?: obj->path, strerror(-err));
-		return -1;
+		goto out;
 	}
 
 	err = -1;
@@ -545,6 +557,7 @@ static int symbols_resolve(struct object *obj)
 
 	err = 0;
 out:
+	btf__free(base_btf);
 	btf__free(btf);
 	return err;
 }
@@ -697,6 +710,8 @@ int main(int argc, const char **argv)
 			   "BTF data"),
 		OPT_BOOLEAN(0, "no-fail", &no_fail,
 			   "do not fail if " BTF_IDS_SECTION " section is not found"),
+		OPT_STRING('s', "base-btf", &obj.base_btf_path, "file",
+			   "path of file providing base BTF data"),
 		OPT_END()
 	};
 	int err = -1;
-- 
2.33.0



* [PATCH bpf-next RFC v1 6/8] bpf: btf: Introduce helpers for dynamic BTF set registration
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (4 preceding siblings ...)
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 5/8] tools: Allow specifying base BTF file in resolve_btfids Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 7/8] bpf: enable TCP congestion control kfunc from modules Kumar Kartikeya Dwivedi
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 8/8] bpf, selftests: Add basic test for module kfunc call Kumar Kartikeya Dwivedi
  7 siblings, 0 replies; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This adds macros that generate BTF set registration APIs and a
check_kfunc_call callback. These take a type name, which namespaces each BTF
set. This is in preparation for allowing nf_conntrack to register unstable
helpers it wants to expose to XDP and SCHED_CLS programs in subsequent
patches.

For in-kernel sets, the intended flow is that the in-kernel callback first
looks up the BTF id in the in-kernel kfunc whitelist, and then defers to the
dynamic BTF set lookup if it doesn't find it there. If there is no in-kernel
BTF id set, the generated callback can be used directly.

Also fix includes for btf.h and bpfptr.h so that they can be included in
isolation. This is in preparation for their use in the tcp_bbr, tcp_cubic and
tcp_dctcp modules in the next patch.
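
To make the intended usage concrete, a module exposing a kfunc to the
bpf_tcp_ca hook would use the generated helpers roughly as below (this mirrors
the tcp_bbr/tcp_cubic/tcp_dctcp conversions in the next patch; the function
and set names here are placeholders):

#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/module.h>

/* Placeholder kfunc this module wants to expose to BPF programs. */
noinline void my_ca_kfunc(void)
{
}

BTF_SET_START(my_ca_kfunc_ids)
BTF_ID(func, my_ca_kfunc)
BTF_SET_END(my_ca_kfunc_ids)

static DEFINE_KFUNC_BTF_SET(&my_ca_kfunc_ids, my_ca_kfunc_btf_set);

static int __init my_ca_init(void)
{
	/* Pairs with DEFINE_KFUNC_BTF_SET_REG(bpf_tcp_ca), added for the
	 * bpf_tcp_ca hook by a later patch in this series.
	 */
	register_bpf_tcp_ca_kfunc_btf_set(&my_ca_kfunc_btf_set);
	return 0;
}

static void __exit my_ca_exit(void)
{
	unregister_bpf_tcp_ca_kfunc_btf_set(&my_ca_kfunc_btf_set);
}

module_init(my_ca_init);
module_exit(my_ca_exit);
MODULE_LICENSE("GPL");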

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpfptr.h |  1 +
 include/linux/btf.h    | 15 +++++++++++++++
 kernel/bpf/btf.c       | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 50 insertions(+)

diff --git a/include/linux/bpfptr.h b/include/linux/bpfptr.h
index 546e27fc6d46..46e1757d06a3 100644
--- a/include/linux/bpfptr.h
+++ b/include/linux/bpfptr.h
@@ -3,6 +3,7 @@
 #ifndef _LINUX_BPFPTR_H
 #define _LINUX_BPFPTR_H
 
+#include <linux/mm.h>
 #include <linux/sockptr.h>
 
 typedef sockptr_t bpfptr_t;
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 214fde93214b..d024b0eb43f9 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -5,6 +5,7 @@
 #define _LINUX_BTF_H 1
 
 #include <linux/types.h>
+#include <linux/bpfptr.h>
 #include <uapi/linux/btf.h>
 #include <uapi/linux/bpf.h>
 
@@ -238,4 +239,18 @@ static inline const char *btf_name_by_offset(const struct btf *btf,
 }
 #endif
 
+struct kfunc_btf_set {
+	struct list_head list;
+	struct btf_id_set *set;
+};
+
+/* Register set of BTF ids */
+#define DECLARE_KFUNC_BTF_SET_REG(type)                                        \
+	void register_##type##_kfunc_btf_set(struct kfunc_btf_set *s);         \
+	bool __bpf_check_##type##_kfunc_call(u32 kfunc_id);                    \
+	void unregister_##type##_kfunc_btf_set(struct kfunc_btf_set *s)
+
+#define DEFINE_KFUNC_BTF_SET(set, name)                                        \
+	struct kfunc_btf_set name = { LIST_HEAD_INIT(name.list), (set) }
+
 #endif
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index dfe61df4f974..35873495761d 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6215,3 +6215,37 @@ const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = {
 };
 
 BTF_ID_LIST_GLOBAL_SINGLE(btf_task_struct_ids, struct, task_struct)
+
+/* Typesafe helpers to register BTF ID sets for modules */
+#define DEFINE_KFUNC_BTF_SET_REG(type)                                         \
+	static DEFINE_MUTEX(type##_kfunc_btf_set_mutex);                       \
+	static LIST_HEAD(type##_kfunc_btf_set_list);                           \
+	void register_##type##_kfunc_btf_set(struct kfunc_btf_set *s)          \
+	{                                                                      \
+		mutex_lock(&type##_kfunc_btf_set_mutex);                       \
+		list_add(&s->list, &type##_kfunc_btf_set_list);                \
+		mutex_unlock(&type##_kfunc_btf_set_mutex);                     \
+	}                                                                      \
+	EXPORT_SYMBOL_GPL(register_##type##_kfunc_btf_set);                    \
+	bool __bpf_check_##type##_kfunc_call(u32 kfunc_id)                     \
+	{                                                                      \
+		struct kfunc_btf_set *s;                                       \
+		mutex_lock(&type##_kfunc_btf_set_mutex);                       \
+		list_for_each_entry(s, &type##_kfunc_btf_set_list, list) {     \
+			if (btf_id_set_contains(s->set, kfunc_id)) {           \
+				mutex_unlock(&type##_kfunc_btf_set_mutex);     \
+				return true;                                   \
+			}                                                      \
+		}                                                              \
+		mutex_unlock(&type##_kfunc_btf_set_mutex);                     \
+		return false;                                                  \
+	}                                                                      \
+	void unregister_##type##_kfunc_btf_set(struct kfunc_btf_set *s)        \
+	{                                                                      \
+		if (!s)                                                        \
+			return;                                                \
+		mutex_lock(&type##_kfunc_btf_set_mutex);                       \
+		list_del_init(&s->list);                                       \
+		mutex_unlock(&type##_kfunc_btf_set_mutex);                     \
+	}                                                                      \
+	EXPORT_SYMBOL_GPL(unregister_##type##_kfunc_btf_set)
-- 
2.33.0



* [PATCH bpf-next RFC v1 7/8] bpf: enable TCP congestion control kfunc from modules
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (5 preceding siblings ...)
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 6/8] bpf: btf: Introduce helpers for dynamic BTF set registration Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-09-01  0:48   ` Andrii Nakryiko
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 8/8] bpf, selftests: Add basic test for module kfunc call Kumar Kartikeya Dwivedi
  7 siblings, 1 reply; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This commit moves the BTF ID lookups into the newly added registration
helpers: the bbr, cubic, and dctcp implementations register their ID sets on
the bpf_tcp_ca kfunc_btf_set list, while the IDs that do not depend on modules
are still looked up from the wrapper function.

This lifts the restriction that these congestion control algorithms be
compiled as built-in objects; they can now be loaded as modules if required.
Also modify Makefile.modfinal to run resolve_btfids on module objects, using
the base BTF support added earlier in this series.

See following commits for background on use of:

 CONFIG_X86 ifdef:
 569c484f9995 (bpf: Limit static tcp-cc functions in the .BTF_ids list to x86)

 CONFIG_DYNAMIC_FTRACE ifdef:
 7aae231ac93b (bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE)

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/btf.h       |  2 ++
 kernel/bpf/btf.c          |  2 ++
 net/ipv4/bpf_tcp_ca.c     | 34 +++-------------------------------
 net/ipv4/tcp_bbr.c        | 28 +++++++++++++++++++++++++++-
 net/ipv4/tcp_cubic.c      | 26 +++++++++++++++++++++++++-
 net/ipv4/tcp_dctcp.c      | 26 +++++++++++++++++++++++++-
 scripts/Makefile.modfinal |  1 +
 7 files changed, 85 insertions(+), 34 deletions(-)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index d024b0eb43f9..8c0f29ed2af9 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -253,4 +253,6 @@ struct kfunc_btf_set {
 #define DEFINE_KFUNC_BTF_SET(set, name)                                        \
 	struct kfunc_btf_set name = { LIST_HEAD_INIT(name.list), (set) }
 
+DECLARE_KFUNC_BTF_SET_REG(bpf_tcp_ca);
+
 #endif
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 35873495761d..cc12470a55f9 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6249,3 +6249,5 @@ BTF_ID_LIST_GLOBAL_SINGLE(btf_task_struct_ids, struct, task_struct)
 		mutex_unlock(&type##_kfunc_btf_set_mutex);                     \
 	}                                                                      \
 	EXPORT_SYMBOL_GPL(unregister_##type##_kfunc_btf_set)
+
+DEFINE_KFUNC_BTF_SET_REG(bpf_tcp_ca);
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 0dcee9df1326..804f2f912fe9 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -223,41 +223,13 @@ BTF_ID(func, tcp_reno_cong_avoid)
 BTF_ID(func, tcp_reno_undo_cwnd)
 BTF_ID(func, tcp_slow_start)
 BTF_ID(func, tcp_cong_avoid_ai)
-#ifdef CONFIG_X86
-#ifdef CONFIG_DYNAMIC_FTRACE
-#if IS_BUILTIN(CONFIG_TCP_CONG_CUBIC)
-BTF_ID(func, cubictcp_init)
-BTF_ID(func, cubictcp_recalc_ssthresh)
-BTF_ID(func, cubictcp_cong_avoid)
-BTF_ID(func, cubictcp_state)
-BTF_ID(func, cubictcp_cwnd_event)
-BTF_ID(func, cubictcp_acked)
-#endif
-#if IS_BUILTIN(CONFIG_TCP_CONG_DCTCP)
-BTF_ID(func, dctcp_init)
-BTF_ID(func, dctcp_update_alpha)
-BTF_ID(func, dctcp_cwnd_event)
-BTF_ID(func, dctcp_ssthresh)
-BTF_ID(func, dctcp_cwnd_undo)
-BTF_ID(func, dctcp_state)
-#endif
-#if IS_BUILTIN(CONFIG_TCP_CONG_BBR)
-BTF_ID(func, bbr_init)
-BTF_ID(func, bbr_main)
-BTF_ID(func, bbr_sndbuf_expand)
-BTF_ID(func, bbr_undo_cwnd)
-BTF_ID(func, bbr_cwnd_event)
-BTF_ID(func, bbr_ssthresh)
-BTF_ID(func, bbr_min_tso_segs)
-BTF_ID(func, bbr_set_state)
-#endif
-#endif  /* CONFIG_DYNAMIC_FTRACE */
-#endif	/* CONFIG_X86 */
 BTF_SET_END(bpf_tcp_ca_kfunc_ids)
 
 static bool bpf_tcp_ca_check_kfunc_call(u32 kfunc_btf_id)
 {
-	return btf_id_set_contains(&bpf_tcp_ca_kfunc_ids, kfunc_btf_id);
+	if (btf_id_set_contains(&bpf_tcp_ca_kfunc_ids, kfunc_btf_id))
+		return true;
+	return __bpf_check_bpf_tcp_ca_kfunc_call(kfunc_btf_id);
 }
 
 static const struct bpf_verifier_ops bpf_tcp_ca_verifier_ops = {
diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 6274462b86b4..1fea15dd0e05 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -56,6 +56,8 @@
  * otherwise TCP stack falls back to an internal pacing using one high
  * resolution timer per TCP socket and may use more resources.
  */
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/module.h>
 #include <net/tcp.h>
 #include <linux/inet_diag.h>
@@ -1152,14 +1154,38 @@ static struct tcp_congestion_ops tcp_bbr_cong_ops __read_mostly = {
 	.set_state	= bbr_set_state,
 };
 
+BTF_SET_START(tcp_bbr_kfunc_ids)
+#ifdef CONFIG_X86
+#ifdef CONFIG_DYNAMIC_FTRACE
+BTF_ID(func, bbr_init)
+BTF_ID(func, bbr_main)
+BTF_ID(func, bbr_sndbuf_expand)
+BTF_ID(func, bbr_undo_cwnd)
+BTF_ID(func, bbr_cwnd_event)
+BTF_ID(func, bbr_ssthresh)
+BTF_ID(func, bbr_min_tso_segs)
+BTF_ID(func, bbr_set_state)
+#endif
+#endif
+BTF_SET_END(tcp_bbr_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_SET(&tcp_bbr_kfunc_ids, tcp_bbr_kfunc_btf_set);
+
 static int __init bbr_register(void)
 {
+	int ret;
+
 	BUILD_BUG_ON(sizeof(struct bbr) > ICSK_CA_PRIV_SIZE);
-	return tcp_register_congestion_control(&tcp_bbr_cong_ops);
+	ret = tcp_register_congestion_control(&tcp_bbr_cong_ops);
+	if (ret)
+		return ret;
+	register_bpf_tcp_ca_kfunc_btf_set(&tcp_bbr_kfunc_btf_set);
+	return 0;
 }
 
 static void __exit bbr_unregister(void)
 {
+	unregister_bpf_tcp_ca_kfunc_btf_set(&tcp_bbr_kfunc_btf_set);
 	tcp_unregister_congestion_control(&tcp_bbr_cong_ops);
 }
 
diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 4a30deaa9a37..5b36b9442797 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -25,6 +25,8 @@
  */
 
 #include <linux/mm.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/module.h>
 #include <linux/math64.h>
 #include <net/tcp.h>
@@ -482,8 +484,25 @@ static struct tcp_congestion_ops cubictcp __read_mostly = {
 	.name		= "cubic",
 };
 
+BTF_SET_START(tcp_cubic_kfunc_ids)
+#ifdef CONFIG_X86
+#ifdef CONFIG_DYNAMIC_FTRACE
+BTF_ID(func, cubictcp_init)
+BTF_ID(func, cubictcp_recalc_ssthresh)
+BTF_ID(func, cubictcp_cong_avoid)
+BTF_ID(func, cubictcp_state)
+BTF_ID(func, cubictcp_cwnd_event)
+BTF_ID(func, cubictcp_acked)
+#endif
+#endif
+BTF_SET_END(tcp_cubic_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_SET(&tcp_cubic_kfunc_ids, tcp_cubic_kfunc_btf_set);
+
 static int __init cubictcp_register(void)
 {
+	int ret;
+
 	BUILD_BUG_ON(sizeof(struct bictcp) > ICSK_CA_PRIV_SIZE);
 
 	/* Precompute a bunch of the scaling factors that are used per-packet
@@ -514,11 +533,16 @@ static int __init cubictcp_register(void)
 	/* divide by bic_scale and by constant Srtt (100ms) */
 	do_div(cube_factor, bic_scale * 10);
 
-	return tcp_register_congestion_control(&cubictcp);
+	ret = tcp_register_congestion_control(&cubictcp);
+	if (ret)
+		return ret;
+	register_bpf_tcp_ca_kfunc_btf_set(&tcp_cubic_kfunc_btf_set);
+	return 0;
 }
 
 static void __exit cubictcp_unregister(void)
 {
+	unregister_bpf_tcp_ca_kfunc_btf_set(&tcp_cubic_kfunc_btf_set);
 	tcp_unregister_congestion_control(&cubictcp);
 }
 
diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
index 79f705450c16..efc47b4c7a11 100644
--- a/net/ipv4/tcp_dctcp.c
+++ b/net/ipv4/tcp_dctcp.c
@@ -36,6 +36,8 @@
  *	Glenn Judd <glenn.judd@morganstanley.com>
  */
 
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/module.h>
 #include <linux/mm.h>
 #include <net/tcp.h>
@@ -236,14 +238,36 @@ static struct tcp_congestion_ops dctcp_reno __read_mostly = {
 	.name		= "dctcp-reno",
 };
 
+BTF_SET_START(tcp_dctcp_kfunc_ids)
+#ifdef CONFIG_X86
+#ifdef CONFIG_DYNAMIC_FTRACE
+BTF_ID(func, dctcp_init)
+BTF_ID(func, dctcp_update_alpha)
+BTF_ID(func, dctcp_cwnd_event)
+BTF_ID(func, dctcp_ssthresh)
+BTF_ID(func, dctcp_cwnd_undo)
+BTF_ID(func, dctcp_state)
+#endif
+#endif
+BTF_SET_END(tcp_dctcp_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_SET(&tcp_dctcp_kfunc_ids, tcp_dctcp_kfunc_btf_set);
+
 static int __init dctcp_register(void)
 {
+	int ret;
+
 	BUILD_BUG_ON(sizeof(struct dctcp) > ICSK_CA_PRIV_SIZE);
-	return tcp_register_congestion_control(&dctcp);
+	ret = tcp_register_congestion_control(&dctcp);
+	if (ret)
+		return ret;
+	register_bpf_tcp_ca_kfunc_btf_set(&tcp_dctcp_kfunc_btf_set);
+	return 0;
 }
 
 static void __exit dctcp_unregister(void)
 {
+	unregister_bpf_tcp_ca_kfunc_btf_set(&tcp_dctcp_kfunc_btf_set);
 	tcp_unregister_congestion_control(&dctcp);
 }
 
diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
index 5e9b8057fb24..0755d4b8b74a 100644
--- a/scripts/Makefile.modfinal
+++ b/scripts/Makefile.modfinal
@@ -58,6 +58,7 @@ quiet_cmd_btf_ko = BTF [M] $@
       cmd_btf_ko = 							\
 	if [ -f vmlinux ]; then						\
 		LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \
+		$(RESOLVE_BTFIDS) --no-fail -s vmlinux $@; 		\
 	else								\
 		printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \
 	fi;
-- 
2.33.0



* [PATCH bpf-next RFC v1 8/8] bpf, selftests: Add basic test for module kfunc call
  2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (6 preceding siblings ...)
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 7/8] bpf: enable TCP congestion control kfunc from modules Kumar Kartikeya Dwivedi
@ 2021-08-30 17:34 ` Kumar Kartikeya Dwivedi
  2021-08-30 20:07   ` Alexei Starovoitov
  7 siblings, 1 reply; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-08-30 17:34 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This test has to drop light skeleton generation and use regular libbpf
skeleton support instead, as the loader program does not support module kfunc
calls yet. It also exercises the invalid kfunc call support added in prior
changes: the verifier accepts such an invalid call as long as it is removed by
the dead code elimination pass (before fixup_kfunc_call).

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/btf.h                           |  1 +
 kernel/bpf/btf.c                              |  1 +
 kernel/trace/bpf_trace.c                      |  1 +
 tools/testing/selftests/bpf/Makefile          |  3 ++-
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   | 23 ++++++++++++++++++-
 .../selftests/bpf/prog_tests/ksyms_module.c   | 10 ++++----
 .../selftests/bpf/progs/test_ksyms_module.c   |  9 ++++++++
 7 files changed, 40 insertions(+), 8 deletions(-)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index 8c0f29ed2af9..6e704981c475 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -254,5 +254,6 @@ struct kfunc_btf_set {
 	struct kfunc_btf_set name = { LIST_HEAD_INIT(name.list), (set) }
 
 DECLARE_KFUNC_BTF_SET_REG(bpf_tcp_ca);
+DECLARE_KFUNC_BTF_SET_REG(raw_tp);
 
 #endif
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index cc12470a55f9..85a0c2737ea1 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6251,3 +6251,4 @@ BTF_ID_LIST_GLOBAL_SINGLE(btf_task_struct_ids, struct, task_struct)
 	EXPORT_SYMBOL_GPL(unregister_##type##_kfunc_btf_set)
 
 DEFINE_KFUNC_BTF_SET_REG(bpf_tcp_ca);
+DEFINE_KFUNC_BTF_SET_REG(raw_tp);
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 8e2eb950aa82..02fe14b5d005 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1599,6 +1599,7 @@ int __weak bpf_prog_test_run_tracing(struct bpf_prog *prog,
 const struct bpf_verifier_ops raw_tracepoint_verifier_ops = {
 	.get_func_proto  = raw_tp_prog_func_proto,
 	.is_valid_access = raw_tp_prog_is_valid_access,
+	.check_kfunc_call = __bpf_check_raw_tp_kfunc_call,
 };
 
 const struct bpf_prog_ops raw_tracepoint_prog_ops = {
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 866531c08e4f..1a4aa71e88f4 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -174,6 +174,7 @@ $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_tes
 	$(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation
 	$(Q)$(MAKE) $(submake_extras) -C bpf_testmod
 	$(Q)cp bpf_testmod/bpf_testmod.ko $@
+	$(Q)$(RESOLVE_BTFIDS) -s ../../../../vmlinux bpf_testmod.ko
 
 $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
 	$(call msg,CC,,$@)
@@ -315,7 +316,7 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 		linked_vars.skel.h linked_maps.skel.h
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
-	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
+	  test_ringbuf.c atomics.c trace_printk.c
 SKEL_BLACKLIST += $$(LSKELS)
 
 test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 141d8da687d2..8242f2bb50b4 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -1,6 +1,8 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2020 Facebook */
 #include <linux/error-injection.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/percpu-defs.h>
@@ -13,6 +15,12 @@
 
 DEFINE_PER_CPU(int, bpf_testmod_ksym_percpu) = 123;
 
+noinline void
+bpf_testmod_test_mod_kfunc(int i)
+{
+	pr_info("mod kfunc i=%d\n", i);
+}
+
 noinline ssize_t
 bpf_testmod_test_read(struct file *file, struct kobject *kobj,
 		      struct bin_attribute *bin_attr,
@@ -55,13 +63,26 @@ static struct bin_attribute bin_attr_bpf_testmod_file __ro_after_init = {
 	.write = bpf_testmod_test_write,
 };
 
+BTF_SET_START(bpf_testmod_kfunc_ids)
+BTF_ID(func, bpf_testmod_test_mod_kfunc)
+BTF_SET_END(bpf_testmod_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_SET(&bpf_testmod_kfunc_ids, bpf_testmod_kfunc_btf_set);
+
 static int bpf_testmod_init(void)
 {
-	return sysfs_create_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
+	int ret;
+
+	ret = sysfs_create_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
+	if (ret)
+		return ret;
+	register_raw_tp_kfunc_btf_set(&bpf_testmod_kfunc_btf_set);
+	return 0;
 }
 
 static void bpf_testmod_exit(void)
 {
+	unregister_raw_tp_kfunc_btf_set(&bpf_testmod_kfunc_btf_set);
 	return sysfs_remove_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
 }
 
diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
index 2cd5cded543f..d3b0adc2a495 100644
--- a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
+++ b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
@@ -4,21 +4,19 @@
 #include <test_progs.h>
 #include <bpf/libbpf.h>
 #include <bpf/btf.h>
-#include "test_ksyms_module.lskel.h"
-
-static int duration;
+#include "test_ksyms_module.skel.h"
 
 void test_ksyms_module(void)
 {
-	struct test_ksyms_module* skel;
+	struct test_ksyms_module *skel;
 	int err;
 
 	skel = test_ksyms_module__open_and_load();
-	if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+	if (!ASSERT_OK_PTR(skel, "test_ksyms_module__open_and_load"))
 		return;
 
 	err = test_ksyms_module__attach(skel);
-	if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
+	if (!ASSERT_OK(err, "test_ksyms_module__attach"))
 		goto cleanup;
 
 	usleep(1);
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module.c b/tools/testing/selftests/bpf/progs/test_ksyms_module.c
index d6a0b3086b90..81f8790cb99d 100644
--- a/tools/testing/selftests/bpf/progs/test_ksyms_module.c
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_module.c
@@ -6,8 +6,11 @@
 #include <bpf/bpf_helpers.h>
 
 extern const int bpf_testmod_ksym_percpu __ksym;
+extern void bpf_testmod_test_mod_kfunc(int i) __ksym;
+extern void bpf_testmod_invalid_mod_kfunc(void) __ksym;
 
 int out_mod_ksym_global = 0;
+const volatile int x = 0;
 bool triggered = false;
 
 SEC("raw_tp/sys_enter")
@@ -16,6 +19,12 @@ int handler(const void *ctx)
 	int *val;
 	__u32 cpu;
 
+	/* This should be preserved by clang, but DCE'd by verifier, and still
+	 * allow loading the raw_tp prog
+	 */
+	if (x)
+		bpf_testmod_invalid_mod_kfunc();
+	bpf_testmod_test_mod_kfunc(42);
 	val = (int *)bpf_this_cpu_ptr(&bpf_testmod_ksym_percpu);
 	out_mod_ksym_global = *val;
 	triggered = true;
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 1/8] bpf: Introduce BPF support for kernel module function calls
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 1/8] bpf: Introduce BPF support for kernel module function calls Kumar Kartikeya Dwivedi
@ 2021-08-30 20:01   ` Alexei Starovoitov
  0 siblings, 0 replies; 17+ messages in thread
From: Alexei Starovoitov @ 2021-08-30 20:01 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

On Mon, Aug 30, 2021 at 11:04:17PM +0530, Kumar Kartikeya Dwivedi wrote:
> This change adds support on the kernel side to allow for BPF programs to
> call kernel module functions. Userspace will prepare an array of module
> BTF fds that is passed in during BPF_PROG_LOAD. In the kernel, the
> module BTF array is placed in the auxiliary struct for bpf_prog.
> 
> The verifier then uses insn->off to index into this table: userspace
> stores the array index incremented by one in insn->off, and the verifier
> subtracts one before indexing. This lets insn->off == 0 denote vmlinux
> BTF, while insn->off > 0 selects prog->aux->kfunc_btf_tab[insn->off - 1]
> for module BTFs.
> 
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  include/linux/bpf.h            |  1 +
>  include/linux/filter.h         |  9 ++++
>  include/uapi/linux/bpf.h       |  3 +-
>  kernel/bpf/core.c              | 14 ++++++
>  kernel/bpf/syscall.c           | 55 +++++++++++++++++++++-
>  kernel/bpf/verifier.c          | 85 ++++++++++++++++++++++++++--------
>  tools/include/uapi/linux/bpf.h |  3 +-
>  7 files changed, 147 insertions(+), 23 deletions(-)
> 
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index f4c16f19f83e..39f59e5f3a26 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -874,6 +874,7 @@ struct bpf_prog_aux {
>  	void *jit_data; /* JIT specific data. arch dependent */
>  	struct bpf_jit_poke_descriptor *poke_tab;
>  	struct bpf_kfunc_desc_tab *kfunc_tab;
> +	struct bpf_kfunc_btf_tab *kfunc_btf_tab;
>  	u32 size_poke_tab;
>  	struct bpf_ksym ksym;
>  	const struct bpf_prog_ops *ops;
> diff --git a/include/linux/filter.h b/include/linux/filter.h
> index 7d248941ecea..46451891633d 100644
> --- a/include/linux/filter.h
> +++ b/include/linux/filter.h
> @@ -592,6 +592,15 @@ struct bpf_prog {
>  	struct bpf_insn		insnsi[];
>  };
>  
> +#define MAX_KFUNC_DESCS 256
> +/* There can only be at most MAX_KFUNC_DESCS module BTFs for kernel module
> + * function calls.
> + */
> +struct bpf_kfunc_btf_tab {
> +	u32 nr_btfs;
> +	struct btf_mod_pair btfs[];
> +};
> +
>  struct sk_filter {
>  	refcount_t	refcnt;
>  	struct rcu_head	rcu;
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index 791f31dd0abe..4cbb2082a553 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -1334,8 +1334,9 @@ union bpf_attr {
>  			/* or valid module BTF object fd or 0 to attach to vmlinux */
>  			__u32		attach_btf_obj_fd;
>  		};
> -		__u32		:32;		/* pad */
> +		__u32		kfunc_btf_fds_cnt; /* reuse hole for count of BTF fds below */

No need for size.

>  		__aligned_u64	fd_array;	/* array of FDs */
> +		__aligned_u64   kfunc_btf_fds;  /* array of BTF FDs for module kfunc support */

Just reuse fd_array. No need for another array of FDs.

> +		tab = prog->aux->kfunc_btf_tab;
> +		for (i = 0; i < n; i++) {
> +			struct btf_mod_pair *p;
> +			struct btf *mod_btf;
> +
> +			mod_btf = btf_get_by_fd(fds[i]);
> +			if (IS_ERR(mod_btf)) {
> +				err = PTR_ERR(mod_btf);
> +				goto free_prog;
> +			}
> +			if (!btf_is_module(mod_btf)) {
> +				err = -EINVAL;
> +				btf_put(mod_btf);
> +				goto free_prog;
> +			}

just do that dynamically like access to fd_array is handled in other places.
no need to preload.
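
For illustration, a minimal sketch of that dynamic approach (not the actual
verifier code; the function name is made up and the read from the
user-supplied fd_array is simplified): the module BTF is resolved on demand
from the fd_array slot that insn->off points at, rather than preloaded into a
kfunc_btf_tab at BPF_PROG_LOAD time.

#include <linux/btf.h>
#include <linux/err.h>

static struct btf *kfunc_call_module_btf(const int *fd_array, s16 offset)
{
	struct btf *mod_btf;

	/* offset == 0 (vmlinux BTF) is assumed to be handled by the caller;
	 * in reality fd_array lives in user memory, so the fd would be read
	 * with copy_from_bpfptr_offset() rather than indexed directly.
	 */
	mod_btf = btf_get_by_fd(fd_array[offset]);
	if (IS_ERR(mod_btf))
		return mod_btf;
	if (!btf_is_module(mod_btf)) {
		btf_put(mod_btf);
		return ERR_PTR(-EINVAL);
	}
	/* Caller owns the reference and must btf_put() it when done. */
	return mod_btf;
}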

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 8/8] bpf, selftests: Add basic test for module kfunc call
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 8/8] bpf, selftests: Add basic test for module kfunc call Kumar Kartikeya Dwivedi
@ 2021-08-30 20:07   ` Alexei Starovoitov
  0 siblings, 0 replies; 17+ messages in thread
From: Alexei Starovoitov @ 2021-08-30 20:07 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

On Mon, Aug 30, 2021 at 11:04:24PM +0530, Kumar Kartikeya Dwivedi wrote:
> diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
> index 2cd5cded543f..d3b0adc2a495 100644
> --- a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
> +++ b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
> @@ -4,21 +4,19 @@
>  #include <test_progs.h>
>  #include <bpf/libbpf.h>
>  #include <bpf/btf.h>
> -#include "test_ksyms_module.lskel.h"
> -
> -static int duration;
> +#include "test_ksyms_module.skel.h"

bpf_btf_find_by_name_kind supports searching in modules,
so support for kfuncs in modules shouldn't be hard to add to lskel.
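
For illustration, a sketch of what that resolution could look like, written
as plain C for clarity (the real lskel loader is a gen_loader-emitted BPF
syscall program, and the helper function below is made up). It assumes
bpf_btf_find_by_name_kind()'s convention of returning the BTF id in the lower
32 bits and, for a match in a module, that module's BTF object fd in the
upper 32 bits:

#include <linux/types.h>
#include <bpf/bpf_helpers.h>

#define BTF_KIND_FUNC 12	/* from UAPI <linux/btf.h> */

static long resolve_mod_kfunc(char *name, int name_sz,
			      __u32 *btf_id, __u32 *btf_obj_fd)
{
	long ret;

	ret = bpf_btf_find_by_name_kind(name, name_sz, BTF_KIND_FUNC, 0);
	if (ret < 0)
		return ret;	/* not found in vmlinux or any module BTF */

	*btf_id = (__u32)ret;		/* lower 32 bits: BTF id of the func */
	*btf_obj_fd = ret >> 32;	/* upper 32 bits: module BTF fd, 0 for vmlinux */
	return 0;
}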

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 4/8] libbpf: Resolve invalid kfunc calls with imm = 0, off = 0
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 4/8] libbpf: Resolve invalid kfunc calls with imm = 0, off = 0 Kumar Kartikeya Dwivedi
@ 2021-09-01  0:35   ` Andrii Nakryiko
  0 siblings, 0 replies; 17+ messages in thread
From: Andrii Nakryiko @ 2021-09-01  0:35 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Aug 30, 2021 at 10:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Preserve these calls, as this allows the verifier to succeed in loading
> the program if they are determined to be unreachable after dead code
> elimination during program load. If not, the verifier rejects the
> program at load time.
>

This should be controlled by whether extern for func is weak or not,
just like we do for variables (see
bpf_object__resolve_ksym_var_btf_id()).
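
For example, a sketch of what the BPF-side declarations could look like if
the weak-extern convention is adopted (using the __weak wrapper for
__attribute__((weak)) from bpf_helpers.h; the described libbpf behaviour for
the weak case is the proposal under discussion, not existing code):

#include <bpf/bpf_helpers.h>

/* Must resolve at load time, otherwise libbpf errors out: */
extern void bpf_testmod_test_mod_kfunc(int i) __ksym;

/* May be missing: instead of failing, libbpf would emit the imm = 0, off = 0
 * encoding, and the call then has to be proven dead by the verifier's DCE
 * (e.g. guarded by a const volatile flag) for the program to load.
 */
extern void bpf_testmod_invalid_mod_kfunc(void) __ksym __weak;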

> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  tools/lib/bpf/libbpf.c | 20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index c4677ef97caa..9df90098f111 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -6736,9 +6736,14 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
>         kfunc_id = find_ksym_btf_id(obj, ext->name, BTF_KIND_FUNC,
>                                     &kern_btf, &kern_btf_fd);
>         if (kfunc_id < 0) {
> -               pr_warn("extern (func ksym) '%s': not found in kernel BTF\n",
> +               pr_warn("extern (func ksym) '%s': not found in kernel BTF, encoding btf_id as 0\n",
>                         ext->name);
> -               return kfunc_id;
> +               /* keep invalid kfuncs, so that verifier can load the program if
> +                * they get removed during DCE pass in the verifier.
> +                * The encoding must be insn->imm = 0, insn->off = 0.
> +                */
> +               kfunc_id = kern_btf_fd = 0;
> +               goto resolve;
>         }
>
>         if (kern_btf != obj->btf_vmlinux) {
> @@ -6798,11 +6803,18 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
>                 return -EINVAL;
>         }
>
> +resolve:
>         ext->is_set = true;
>         ext->ksym.kernel_btf_obj_fd = kern_btf_fd;
>         ext->ksym.kernel_btf_id = kfunc_id;
> -       pr_debug("extern (func ksym) '%s': resolved to kernel [%d]\n",
> -                ext->name, kfunc_id);
> +       if (kfunc_id) {
> +               pr_debug("extern (func ksym) '%s': resolved to kernel [%d]\n",
> +                        ext->name, kfunc_id);
> +       } else {
> +               ext->ksym.offset = 0;
> +               pr_debug("extern (func ksym) '%s': added special invalid kfunc with imm = 0\n",
> +                        ext->name);
> +       }
>
>         return 0;
>  }
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 7/8] bpf: enable TCP congestion control kfunc from modules
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 7/8] bpf: enable TCP congestion control kfunc from modules Kumar Kartikeya Dwivedi
@ 2021-09-01  0:48   ` Andrii Nakryiko
  0 siblings, 0 replies; 17+ messages in thread
From: Andrii Nakryiko @ 2021-09-01  0:48 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Aug 30, 2021 at 10:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> This commit moves BTF ID lookup into the newly added registration
> helper, so that the bbr, cubic, and dctcp implementations set up their
> sets in the bpf_tcp_ca kfunc_btf_set list, while the ones not dependent
> on modules are looked up from the wrapper function.
>
> This lifts the restriction that they be compiled as built-in objects,
> allowing them to be loaded as modules if required. Also modify
> Makefile.modfinal to run resolve_btfids on TCP congestion control
> modules if the config option is set, using the base BTF support added
> in the previous commit.
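
For reference, a sketch of what the per-module registration could look like
for tcp_bbr, modeled on the bpf_testmod changes in patch 8/8; the
register_tcp_ca_kfunc_btf_set()/unregister_tcp_ca_kfunc_btf_set() names are
hypothetical stand-ins for whatever bpf_tcp_ca-specific helper the series
adds, and the CONFIG_X86/CONFIG_DYNAMIC_FTRACE guards mentioned above are
omitted:

#include <linux/btf.h>
#include <linux/btf_ids.h>

BTF_SET_START(tcp_bbr_kfunc_ids)
BTF_ID(func, bbr_init)
BTF_ID(func, bbr_main)
/* ... remaining bbr congestion ops ... */
BTF_SET_END(tcp_bbr_kfunc_ids)

static DEFINE_KFUNC_BTF_SET(&tcp_bbr_kfunc_ids, tcp_bbr_kfunc_btf_set);

static int __init bbr_register(void)
{
	int ret;

	ret = tcp_register_congestion_control(&tcp_bbr_cong_ops);
	if (ret)
		return ret;
	register_tcp_ca_kfunc_btf_set(&tcp_bbr_kfunc_btf_set);
	return 0;
}

static void __exit bbr_unregister(void)
{
	unregister_tcp_ca_kfunc_btf_set(&tcp_bbr_kfunc_btf_set);
	tcp_unregister_congestion_control(&tcp_bbr_cong_ops);
}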
>
> See following commits for background on use of:
>
>  CONFIG_X86 ifdef:
>  569c484f9995 (bpf: Limit static tcp-cc functions in the .BTF_ids list to x86)
>
>  CONFIG_DYNAMIC_FTRACE ifdef:
>  7aae231ac93b (bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE)
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  include/linux/btf.h       |  2 ++
>  kernel/bpf/btf.c          |  2 ++
>  net/ipv4/bpf_tcp_ca.c     | 34 +++-------------------------------
>  net/ipv4/tcp_bbr.c        | 28 +++++++++++++++++++++++++++-
>  net/ipv4/tcp_cubic.c      | 26 +++++++++++++++++++++++++-
>  net/ipv4/tcp_dctcp.c      | 26 +++++++++++++++++++++++++-
>  scripts/Makefile.modfinal |  1 +
>  7 files changed, 85 insertions(+), 34 deletions(-)
>

[...]

> diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
> index 5e9b8057fb24..0755d4b8b74a 100644
> --- a/scripts/Makefile.modfinal
> +++ b/scripts/Makefile.modfinal
> @@ -58,6 +58,7 @@ quiet_cmd_btf_ko = BTF [M] $@
>        cmd_btf_ko =                                                     \
>         if [ -f vmlinux ]; then                                         \
>                 LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \
> +               $(RESOLVE_BTFIDS) --no-fail -s vmlinux $@;              \

why is this --no-fail?


>         else                                                            \
>                 printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \
>         fi;
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls
  2021-08-30 17:34 ` [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls Kumar Kartikeya Dwivedi
@ 2021-09-01  0:55   ` Andrii Nakryiko
  2021-09-01  2:27     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 17+ messages in thread
From: Andrii Nakryiko @ 2021-09-01  0:55 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Aug 30, 2021 at 10:34 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>

-ENOCOMMITMESSAGE?

> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  tools/lib/bpf/bpf.c             |  3 ++
>  tools/lib/bpf/libbpf.c          | 71 +++++++++++++++++++++++++++++++--
>  tools/lib/bpf/libbpf_internal.h |  2 +
>  3 files changed, 73 insertions(+), 3 deletions(-)
>

[...]

> @@ -515,6 +521,13 @@ struct bpf_object {
>         void *priv;
>         bpf_object_clear_priv_t clear_priv;
>
> +       struct {
> +               struct hashmap *map;
> +               int *fds;
> +               size_t cap_cnt;
> +               __u32 n_fds;
> +       } kfunc_btf_fds;
> +
>         char path[];
>  };
>  #define obj_elf_valid(o)       ((o)->efile.elf)
> @@ -5327,6 +5340,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
>                         ext = &obj->externs[relo->sym_off];
>                         insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
>                         insn[0].imm = ext->ksym.kernel_btf_id;
> +                       insn[0].off = ext->ksym.offset;

Just a few lines above we use insn[1].imm =
ext->ksym.kernel_btf_obj_fd; for EXT_KSYM (for variables). Why are you
inventing a new form if we already have a pretty consistent pattern?

>                         break;
>                 case RELO_SUBPROG_ADDR:
>                         if (insn[0].src_reg != BPF_PSEUDO_FUNC) {

[...]

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls
  2021-09-01  0:55   ` Andrii Nakryiko
@ 2021-09-01  2:27     ` Kumar Kartikeya Dwivedi
  2021-09-01  2:59       ` Alexei Starovoitov
  0 siblings, 1 reply; 17+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-01  2:27 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Wed, Sep 01, 2021 at 06:25:14AM IST, Andrii Nakryiko wrote:
> On Mon, Aug 30, 2021 at 10:34 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
>
> -ENOCOMMITMESSAGE?
>
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  tools/lib/bpf/bpf.c             |  3 ++
> >  tools/lib/bpf/libbpf.c          | 71 +++++++++++++++++++++++++++++++--
> >  tools/lib/bpf/libbpf_internal.h |  2 +
> >  3 files changed, 73 insertions(+), 3 deletions(-)
> >
>
> [...]
>
> > @@ -515,6 +521,13 @@ struct bpf_object {
> >         void *priv;
> >         bpf_object_clear_priv_t clear_priv;
> >
> > +       struct {
> > +               struct hashmap *map;
> > +               int *fds;
> > +               size_t cap_cnt;
> > +               __u32 n_fds;
> > +       } kfunc_btf_fds;
> > +
> >         char path[];
> >  };
> >  #define obj_elf_valid(o)       ((o)->efile.elf)
> > @@ -5327,6 +5340,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
> >                         ext = &obj->externs[relo->sym_off];
> >                         insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
> >                         insn[0].imm = ext->ksym.kernel_btf_id;
> > +                       insn[0].off = ext->ksym.offset;
>
> Just a few lines above we use insn[1].imm =
> ext->ksym.kernel_btf_obj_fd; for EXT_KSYM (for variables). Why are you
> inventing a new form if we already have a pretty consistent pattern?
>

That makes sense. This is all new to me, so I went with what was described in
e6ac2450d6de (bpf: Support bpf program calling kernel function), but I'll rework
it to encode the BTF fd like that in the next spin. It also makes everything
far simpler.

> >                         break;
> >                 case RELO_SUBPROG_ADDR:
> >                         if (insn[0].src_reg != BPF_PSEUDO_FUNC) {
>
> [...]

--
Kartikeya

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls
  2021-09-01  2:27     ` Kumar Kartikeya Dwivedi
@ 2021-09-01  2:59       ` Alexei Starovoitov
  2021-09-01  3:38         ` Andrii Nakryiko
  0 siblings, 1 reply; 17+ messages in thread
From: Alexei Starovoitov @ 2021-09-01  2:59 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Tue, Aug 31, 2021 at 7:27 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
> > > @@ -5327,6 +5340,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
> > >                         ext = &obj->externs[relo->sym_off];
> > >                         insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
> > >                         insn[0].imm = ext->ksym.kernel_btf_id;
> > > +                       insn[0].off = ext->ksym.offset;
> >
> > Just a few lines above we use insn[1].imm =
> > ext->ksym.kernel_btf_obj_fd; for EXT_KSYM (for variables). Why are you
> > inventing a new form if we already have a pretty consistent pattern?
> >
>
> That makes sense. This is all new to me, so I went with what was described in
> e6ac2450d6de (bpf: Support bpf program calling kernel function), but I'll rework
> it to encode the btf fd like that in the next spin. It also makes the everything
> far simpler.

Hmm. kfunc call is a call insn. There is no imm[1].
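
For context, an illustration of the encoding difference under discussion (the
field values are placeholders): a ksym variable reference is a two-slot
ldimm64, so it has a spare insn[1].imm for the BTF object fd, while a kfunc
call is a single instruction, leaving only imm and the 16-bit off field.

#include <linux/bpf.h>

/* ksym variable reference: 16-byte, two-slot instruction */
struct bpf_insn ksym_var_ref[2] = {
	{ .code = BPF_LD | BPF_DW | BPF_IMM, .src_reg = BPF_PSEUDO_BTF_ID,
	  .imm = 0 /* btf_id of the variable */ },
	{ .imm = 0 /* btf_obj_fd of the module, 0 for vmlinux */ },
};

/* kfunc call: single 8-byte instruction */
struct bpf_insn kfunc_call_insn = {
	.code = BPF_JMP | BPF_CALL, .src_reg = BPF_PSEUDO_KFUNC_CALL,
	.imm = 0 /* btf_id of the kfunc */,
	.off = 0 /* proposed: module BTF slot, 0 == vmlinux */,
};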

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls
  2021-09-01  2:59       ` Alexei Starovoitov
@ 2021-09-01  3:38         ` Andrii Nakryiko
  0 siblings, 0 replies; 17+ messages in thread
From: Andrii Nakryiko @ 2021-09-01  3:38 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Kumar Kartikeya Dwivedi, bpf, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
	Yonghong Song, Jesper Dangaard Brouer,
	Toke Høiland-Jørgensen, Networking

On Tue, Aug 31, 2021 at 7:59 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Aug 31, 2021 at 7:27 PM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> > > > @@ -5327,6 +5340,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
> > > >                         ext = &obj->externs[relo->sym_off];
> > > >                         insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
> > > >                         insn[0].imm = ext->ksym.kernel_btf_id;
> > > > +                       insn[0].off = ext->ksym.offset;
> > >
> > > Just a few lines above we use insn[1].imm =
> > > ext->ksym.kernel_btf_obj_fd; for EXT_KSYM (for variables). Why are you
> > > inventing a new form if we already have a pretty consistent pattern?
> > >
> >
> > That makes sense. This is all new to me, so I went with what was described in
> > e6ac2450d6de (bpf: Support bpf program calling kernel function), but I'll rework
> > it to encode the btf fd like that in the next spin. It also makes the everything
> > far simpler.
>
> Hmm. kfunc call is a call insn. There is no imm[1].

Doh, right :( Never mind, we'll need to use fd_array for this.

Either way, I don't think hashmap use is warranted here to find a BTF
slot. Let's just do a linear search; it's not like we are going to have
thousands of module BTFs used by any single BPF program, right?
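
A minimal sketch of that linear-search idea (illustrative only, not actual
libbpf code): map a module BTF fd to the 1-based slot stored in insn->off,
appending new fds as they are seen.

#include <errno.h>
#include <stdlib.h>

struct mod_btf_fds {
	int *fds;
	int cnt;
	int cap;
};

/* Return the 1-based slot for 'fd' (0 stays reserved for vmlinux BTF),
 * adding it to the array if it is not there yet.
 */
static int mod_btf_fd_slot(struct mod_btf_fds *m, int fd)
{
	int i;

	for (i = 0; i < m->cnt; i++)
		if (m->fds[i] == fd)
			return i + 1;

	if (m->cnt == m->cap) {
		int new_cap = m->cap ? m->cap * 2 : 4;
		int *tmp = realloc(m->fds, new_cap * sizeof(*m->fds));

		if (!tmp)
			return -ENOMEM;
		m->fds = tmp;
		m->cap = new_cap;
	}
	m->fds[m->cnt++] = fd;
	return m->cnt;
}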

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2021-09-01  3:38 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-30 17:34 [PATCH bpf-next RFC v1 0/8] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 1/8] bpf: Introduce BPF support for kernel module function calls Kumar Kartikeya Dwivedi
2021-08-30 20:01   ` Alexei Starovoitov
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 2/8] bpf: Be conservative during verification for invalid kfunc calls Kumar Kartikeya Dwivedi
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 3/8] libbpf: Support kernel module function calls Kumar Kartikeya Dwivedi
2021-09-01  0:55   ` Andrii Nakryiko
2021-09-01  2:27     ` Kumar Kartikeya Dwivedi
2021-09-01  2:59       ` Alexei Starovoitov
2021-09-01  3:38         ` Andrii Nakryiko
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 4/8] libbpf: Resolve invalid kfunc calls with imm = 0, off = 0 Kumar Kartikeya Dwivedi
2021-09-01  0:35   ` Andrii Nakryiko
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 5/8] tools: Allow specifying base BTF file in resolve_btfids Kumar Kartikeya Dwivedi
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 6/8] bpf: btf: Introduce helpers for dynamic BTF set registration Kumar Kartikeya Dwivedi
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 7/8] bpf: enable TCP congestion control kfunc from modules Kumar Kartikeya Dwivedi
2021-09-01  0:48   ` Andrii Nakryiko
2021-08-30 17:34 ` [PATCH bpf-next RFC v1 8/8] bpf, selftests: Add basic test for module kfunc call Kumar Kartikeya Dwivedi
2021-08-30 20:07   ` Alexei Starovoitov
