* [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This set enables kernel module function calls from BPF programs, and also
modifies verifier logic to permit invalid kernel function calls as long as they
are pruned as part of dead code elimination. This is done to provide better
runtime portability for BPF objects, which can conditionally disable parts of
code that are then pruned by the verifier (e.g. based on const volatile vars or
kconfig options). libbpf modifications are made along with the kernel changes
to support module function calls. The set also includes gen_loader support for
emitting kfunc relocations.
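
For instance (a minimal sketch, not one of the selftests in this set; the
extern below is the test kfunc exported by bpf_testmod), a program can guard a
weak module kfunc call with a const volatile variable that userspace sets
before load, so the verifier prunes the call when the module is absent:

  #include <stdbool.h>
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  extern void bpf_testmod_test_mod_kfunc(int i) __ksym __attribute__((weak));

  const volatile bool use_testmod = false; /* set via skeleton before load */

  SEC("classifier")
  int prog(struct __sk_buff *skb)
  {
  	if (use_testmod)
  		bpf_testmod_test_mod_kfunc(42);
  	return 0;
  }

  char _license[] SEC("license") = "GPL";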

It also converts the TCP congestion control objects to use the module kfunc
support instead of relying on IS_BUILTIN() ifdefs.

Changelog:
----------
v3 -> v4
v3: https://lore.kernel.org/bpf/20210915050943.679062-1-memxor@gmail.com

 * Address comments from Alexei
   * Drop MAX_BPF_STACK change, instead move map_fd and BTF fd to BPF array map
     and pass fd_array using BPF_PSEUDO_MAP_IDX_VALUE
 * Address comments from Andrii
   * Fix selftest to store to variable for observing function call instead of
     printk and polluting CI logs
 * Drop use of raw_tp for testing, instead reuse classifier based prog_test_run
 * Drop index + 1 based insn->off convention for kfunc module calls
 * Expand selftests to cover more corner cases
 * Misc cleanups

v2 -> v3
v2: https://lore.kernel.org/bpf/20210914123750.460750-1-memxor@gmail.com

 * Fix issues pointed out by Kernel Test Robot
 * Fix find_kfunc_desc to also take offset into consideration when comparing

RFC v1 -> v2
v1: https://lore.kernel.org/bpf/20210830173424.1385796-1-memxor@gmail.com

 * Address comments from Alexei
   * Reuse fd_array instead of introducing kfunc_btf_fds array
   * Take btf and module reference as needed, instead of preloading
   * Add BTF_KIND_FUNC relocation support to gen_loader infrastructure
 * Address comments from Andrii
   * Drop hashmap in libbpf for finding index of existing BTF in fd_array
   * Preserve invalid kfunc calls only when the symbol is weak
 * Adjust verifier selftests

Kumar Kartikeya Dwivedi (11):
  bpf: Introduce BPF support for kernel module function calls
  bpf: Be conservative while processing invalid kfunc calls
  bpf: btf: Introduce helpers for dynamic BTF set registration
  tools: Allow specifying base BTF file in resolve_btfids
  bpf: Enable TCP congestion control kfunc from modules
  libbpf: Support kernel module function calls
  libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0
  libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
  tools: bpftool: Add separate fd_array map support for light skeleton
  libbpf: Fix skel_internal.h to set errno on loader retval < 0
  bpf: selftests: Add selftests for module kfunc support

 include/linux/bpf.h                           |   8 +-
 include/linux/bpf_verifier.h                  |   2 +
 include/linux/bpfptr.h                        |   1 +
 include/linux/btf.h                           |  37 +++
 kernel/bpf/btf.c                              |  56 +++++
 kernel/bpf/core.c                             |   4 +
 kernel/bpf/verifier.c                         | 220 ++++++++++++++---
 net/bpf/test_run.c                            |   7 +-
 net/ipv4/bpf_tcp_ca.c                         |  36 +--
 net/ipv4/tcp_bbr.c                            |  28 ++-
 net/ipv4/tcp_cubic.c                          |  26 +-
 net/ipv4/tcp_dctcp.c                          |  26 +-
 scripts/Makefile.modfinal                     |   1 +
 tools/bpf/bpftool/gen.c                       |   3 +-
 tools/bpf/bpftool/prog.c                      |   1 +
 tools/bpf/resolve_btfids/main.c               |  19 +-
 tools/lib/bpf/bpf.c                           |   1 +
 tools/lib/bpf/bpf_gen_internal.h              |  16 +-
 tools/lib/bpf/gen_loader.c                    | 222 +++++++++++++++---
 tools/lib/bpf/libbpf.c                        |  83 +++++--
 tools/lib/bpf/libbpf.h                        |   1 +
 tools/lib/bpf/libbpf_internal.h               |   1 +
 tools/lib/bpf/skel_internal.h                 |  33 ++-
 tools/testing/selftests/bpf/Makefile          |   5 +-
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  26 +-
 .../selftests/bpf/prog_tests/ksyms_module.c   |  52 ++--
 .../bpf/prog_tests/ksyms_module_libbpf.c      |  44 ++++
 .../selftests/bpf/progs/test_ksyms_module.c   |  41 +++-
 .../bpf/progs/test_ksyms_module_fail.c        |  29 +++
 .../progs/test_ksyms_module_fail_toomany.c    |  19 ++
 .../bpf/progs/test_ksyms_module_libbpf.c      |  71 ++++++
 .../bpf/progs/test_ksyms_module_util.h        |  48 ++++
 32 files changed, 1014 insertions(+), 153 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_util.h

-- 
2.33.0


* [PATCH bpf-next v4 01/11] bpf: Introduce BPF support for kernel module function calls
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This change adds support on the kernel side to allow BPF programs to call
kernel module functions. Userspace will prepare an array of module BTF fds
that is passed in during BPF_PROG_LOAD using the fd_array parameter. In the
kernel, the module BTFs are placed in the auxiliary struct for bpf_prog, and
loaded as needed.

The verifier then uses insn->off to index into the fd_array. insn->off == 0 is
reserved for vmlinux BTF (for backwards compat), so userspace must use an
fd_array index > 0 for module kfunc support. kfunc_btf_tab is kept sorted by
offset in an array, and each offset corresponds to one descriptor, with a
limit of 256 such module BTFs.
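
For instance, the loading userspace might set up the attribute as follows (an
illustrative sketch; ptr_to_u64() and module_btf_fd are assumed helpers/values,
and index 0 is simply left unused here since off == 0 means vmlinux BTF):

  int fd_array[2] = { -1 /* unused for kfuncs */, module_btf_fd };

  attr.fd_array = ptr_to_u64(fd_array);
  /* a call to a kfunc from module_btf_fd then carries insn->off == 1 */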

Another change is to the check_kfunc_call callback, which now includes a
struct module * pointer. A later patch uses this to match the kfunc_id against
the module pointer for dynamically registered BTF sets from loadable modules,
so that the same kfunc_id in two different modules doesn't lead to
check_kfunc_call succeeding. The reference to struct module remains valid for
the duration of check_kfunc_call, as the callback is handed the pointer stored
in kfunc_btf_tab.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpf.h          |   8 +-
 include/linux/bpf_verifier.h |   2 +
 kernel/bpf/core.c            |   4 +
 kernel/bpf/verifier.c        | 202 ++++++++++++++++++++++++++++++-----
 net/bpf/test_run.c           |   2 +-
 net/ipv4/bpf_tcp_ca.c        |   2 +-
 6 files changed, 188 insertions(+), 32 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b6c45a6cbbba..81b1b3d0d546 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -511,7 +511,7 @@ struct bpf_verifier_ops {
 				 const struct btf_type *t, int off, int size,
 				 enum bpf_access_type atype,
 				 u32 *next_btf_id);
-	bool (*check_kfunc_call)(u32 kfunc_btf_id);
+	bool (*check_kfunc_call)(u32 kfunc_btf_id, struct module *owner);
 };
 
 struct bpf_prog_offload_ops {
@@ -874,6 +874,7 @@ struct bpf_prog_aux {
 	void *jit_data; /* JIT specific data. arch dependent */
 	struct bpf_jit_poke_descriptor *poke_tab;
 	struct bpf_kfunc_desc_tab *kfunc_tab;
+	struct bpf_kfunc_btf_tab *kfunc_btf_tab;
 	u32 size_poke_tab;
 	struct bpf_ksym ksym;
 	const struct bpf_prog_ops *ops;
@@ -1636,7 +1637,7 @@ int bpf_prog_test_run_raw_tp(struct bpf_prog *prog,
 int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
 				const union bpf_attr *kattr,
 				union bpf_attr __user *uattr);
-bool bpf_prog_test_check_kfunc_call(u32 kfunc_id);
+bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner);
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info);
@@ -1857,7 +1858,8 @@ static inline int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog,
 	return -ENOTSUPP;
 }
 
-static inline bool bpf_prog_test_check_kfunc_call(u32 kfunc_id)
+static inline bool bpf_prog_test_check_kfunc_call(u32 kfunc_id,
+						  struct module *owner)
 {
 	return false;
 }
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index 5424124dbe36..c8a78e830fca 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -527,5 +527,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
 			    const struct bpf_prog *tgt_prog,
 			    u32 btf_id,
 			    struct bpf_attach_target_info *tgt_info);
+void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab);
+
 
 #endif /* _LINUX_BPF_VERIFIER_H */
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 6fddc13fe67f..084df01625cd 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -32,6 +32,7 @@
 #include <linux/perf_event.h>
 #include <linux/extable.h>
 #include <linux/log2.h>
+#include <linux/bpf_verifier.h>
 
 #include <asm/barrier.h>
 #include <asm/unaligned.h>
@@ -2255,6 +2256,9 @@ static void bpf_prog_free_deferred(struct work_struct *work)
 	int i;
 
 	aux = container_of(work, struct bpf_prog_aux, work);
+#ifdef CONFIG_BPF_SYSCALL
+	bpf_free_kfunc_btf_tab(aux->kfunc_btf_tab);
+#endif
 	bpf_free_used_maps(aux);
 	bpf_free_used_btfs(aux);
 	if (bpf_prog_is_dev_bound(aux))
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index e76b55917905..5b465d67cf65 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1626,52 +1626,173 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
 	return env->subprog_cnt - 1;
 }
 
+#define MAX_KFUNC_DESCS 256
+#define MAX_KFUNC_BTFS	256
+
 struct bpf_kfunc_desc {
 	struct btf_func_model func_model;
 	u32 func_id;
 	s32 imm;
+	u16 offset;
+};
+
+struct bpf_kfunc_btf {
+	struct btf *btf;
+	struct module *module;
+	u16 offset;
 };
 
-#define MAX_KFUNC_DESCS 256
 struct bpf_kfunc_desc_tab {
 	struct bpf_kfunc_desc descs[MAX_KFUNC_DESCS];
 	u32 nr_descs;
 };
 
-static int kfunc_desc_cmp_by_id(const void *a, const void *b)
+struct bpf_kfunc_btf_tab {
+	struct bpf_kfunc_btf descs[MAX_KFUNC_BTFS];
+	u32 nr_descs;
+};
+
+static int kfunc_desc_cmp_by_id_off(const void *a, const void *b)
 {
 	const struct bpf_kfunc_desc *d0 = a;
 	const struct bpf_kfunc_desc *d1 = b;
 
 	/* func_id is not greater than BTF_MAX_TYPE */
-	return d0->func_id - d1->func_id;
+	return d0->func_id - d1->func_id ?: d0->offset - d1->offset;
+}
+
+static int kfunc_btf_cmp_by_off(const void *a, const void *b)
+{
+	const struct bpf_kfunc_btf *d0 = a;
+	const struct bpf_kfunc_btf *d1 = b;
+
+	return d0->offset - d1->offset;
 }
 
 static const struct bpf_kfunc_desc *
-find_kfunc_desc(const struct bpf_prog *prog, u32 func_id)
+find_kfunc_desc(const struct bpf_prog *prog, u32 func_id, u16 offset)
 {
 	struct bpf_kfunc_desc desc = {
 		.func_id = func_id,
+		.offset = offset,
 	};
 	struct bpf_kfunc_desc_tab *tab;
 
 	tab = prog->aux->kfunc_tab;
 	return bsearch(&desc, tab->descs, tab->nr_descs,
-		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id);
+		       sizeof(tab->descs[0]), kfunc_desc_cmp_by_id_off);
+}
+
+static struct btf *__find_kfunc_desc_btf(struct bpf_verifier_env *env,
+					 s16 offset, struct module **btf_modp)
+{
+	struct bpf_kfunc_btf kf_btf = { .offset = offset };
+	struct bpf_kfunc_btf_tab *tab;
+	struct bpf_kfunc_btf *b;
+	struct module *mod;
+	struct btf *btf;
+	int btf_fd;
+
+	tab = env->prog->aux->kfunc_btf_tab;
+	b = bsearch(&kf_btf, tab->descs, tab->nr_descs,
+		    sizeof(tab->descs[0]), kfunc_btf_cmp_by_off);
+	if (!b) {
+		if (tab->nr_descs == MAX_KFUNC_BTFS) {
+			verbose(env, "too many different module BTFs\n");
+			return ERR_PTR(-E2BIG);
+		}
+
+		if (bpfptr_is_null(env->fd_array)) {
+			verbose(env, "kfunc offset > 0 without fd_array is invalid\n");
+			return ERR_PTR(-EPROTO);
+		}
+
+		if (copy_from_bpfptr_offset(&btf_fd, env->fd_array,
+					    offset * sizeof(btf_fd),
+					    sizeof(btf_fd)))
+			return ERR_PTR(-EFAULT);
+
+		btf = btf_get_by_fd(btf_fd);
+		if (IS_ERR(btf))
+			return btf;
+
+		if (!btf_is_module(btf)) {
+			verbose(env, "BTF fd for kfunc is not a module BTF\n");
+			btf_put(btf);
+			return ERR_PTR(-EINVAL);
+		}
+
+		mod = btf_try_get_module(btf);
+		if (!mod) {
+			btf_put(btf);
+			return ERR_PTR(-ENXIO);
+		}
+
+		b = &tab->descs[tab->nr_descs++];
+		b->btf = btf;
+		b->module = mod;
+		b->offset = offset;
+
+		sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
+		     kfunc_btf_cmp_by_off, NULL);
+	}
+	if (btf_modp)
+		*btf_modp = b->module;
+	return b->btf;
 }
 
-static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
+void bpf_free_kfunc_btf_tab(struct bpf_kfunc_btf_tab *tab)
+{
+	if (!tab)
+		return;
+
+	while (tab->nr_descs--) {
+		module_put(tab->descs[tab->nr_descs].module);
+		btf_put(tab->descs[tab->nr_descs].btf);
+	}
+	kfree(tab);
+}
+
+static struct btf *find_kfunc_desc_btf(struct bpf_verifier_env *env,
+				       u32 func_id, s16 offset,
+				       struct module **btf_modp)
+{
+	struct btf *kfunc_btf;
+
+	if (offset) {
+		if (offset < 0) {
+			/* In the future, this can be allowed to increase limit
+			 * of fd index into fd_array, interpreted as u16.
+			 */
+			verbose(env, "negative offset disallowed for kernel module function call\n");
+			return ERR_PTR(-EINVAL);
+		}
+
+		kfunc_btf = __find_kfunc_desc_btf(env, offset, btf_modp);
+		if (IS_ERR_OR_NULL(kfunc_btf)) {
+			verbose(env, "cannot find module BTF for func_id %u\n", func_id);
+			return kfunc_btf ?: ERR_PTR(-ENOENT);
+		}
+		return kfunc_btf;
+	}
+	return btf_vmlinux ?: ERR_PTR(-ENOENT);
+}
+
+static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 {
 	const struct btf_type *func, *func_proto;
+	struct bpf_kfunc_btf_tab *btf_tab;
 	struct bpf_kfunc_desc_tab *tab;
 	struct bpf_prog_aux *prog_aux;
 	struct bpf_kfunc_desc *desc;
 	const char *func_name;
+	struct btf *desc_btf;
 	unsigned long addr;
 	int err;
 
 	prog_aux = env->prog->aux;
 	tab = prog_aux->kfunc_tab;
+	btf_tab = prog_aux->kfunc_btf_tab;
 	if (!tab) {
 		if (!btf_vmlinux) {
 			verbose(env, "calling kernel function is not supported without CONFIG_DEBUG_INFO_BTF\n");
@@ -1699,7 +1820,20 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
 		prog_aux->kfunc_tab = tab;
 	}
 
-	if (find_kfunc_desc(env->prog, func_id))
+	if (!btf_tab && offset) {
+		btf_tab = kzalloc(sizeof(*btf_tab), GFP_KERNEL);
+		if (!btf_tab)
+			return -ENOMEM;
+		prog_aux->kfunc_btf_tab = btf_tab;
+	}
+
+	desc_btf = find_kfunc_desc_btf(env, func_id, offset, NULL);
+	if (IS_ERR(desc_btf)) {
+		verbose(env, "failed to find BTF for kernel function\n");
+		return PTR_ERR(desc_btf);
+	}
+
+	if (find_kfunc_desc(env->prog, func_id, offset))
 		return 0;
 
 	if (tab->nr_descs == MAX_KFUNC_DESCS) {
@@ -1707,20 +1841,20 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
 		return -E2BIG;
 	}
 
-	func = btf_type_by_id(btf_vmlinux, func_id);
+	func = btf_type_by_id(desc_btf, func_id);
 	if (!func || !btf_type_is_func(func)) {
 		verbose(env, "kernel btf_id %u is not a function\n",
 			func_id);
 		return -EINVAL;
 	}
-	func_proto = btf_type_by_id(btf_vmlinux, func->type);
+	func_proto = btf_type_by_id(desc_btf, func->type);
 	if (!func_proto || !btf_type_is_func_proto(func_proto)) {
 		verbose(env, "kernel function btf_id %u does not have a valid func_proto\n",
 			func_id);
 		return -EINVAL;
 	}
 
-	func_name = btf_name_by_offset(btf_vmlinux, func->name_off);
+	func_name = btf_name_by_offset(desc_btf, func->name_off);
 	addr = kallsyms_lookup_name(func_name);
 	if (!addr) {
 		verbose(env, "cannot find address for kernel function %s\n",
@@ -1731,12 +1865,13 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
 	desc->imm = BPF_CAST_CALL(addr) - __bpf_call_base;
-	err = btf_distill_func_proto(&env->log, btf_vmlinux,
+	desc->offset = offset;
+	err = btf_distill_func_proto(&env->log, desc_btf,
 				     func_proto, func_name,
 				     &desc->func_model);
 	if (!err)
 		sort(tab->descs, tab->nr_descs, sizeof(tab->descs[0]),
-		     kfunc_desc_cmp_by_id, NULL);
+		     kfunc_desc_cmp_by_id_off, NULL);
 	return err;
 }
 
@@ -1815,7 +1950,7 @@ static int add_subprog_and_kfunc(struct bpf_verifier_env *env)
 		} else if (bpf_pseudo_call(insn)) {
 			ret = add_subprog(env, i + insn->imm + 1);
 		} else {
-			ret = add_kfunc_call(env, insn->imm);
+			ret = add_kfunc_call(env, insn->imm, insn->off);
 		}
 
 		if (ret < 0)
@@ -2152,12 +2287,17 @@ static int get_prev_insn_idx(struct bpf_verifier_state *st, int i,
 static const char *disasm_kfunc_name(void *data, const struct bpf_insn *insn)
 {
 	const struct btf_type *func;
+	struct btf *desc_btf;
 
 	if (insn->src_reg != BPF_PSEUDO_KFUNC_CALL)
 		return NULL;
 
-	func = btf_type_by_id(btf_vmlinux, insn->imm);
-	return btf_name_by_offset(btf_vmlinux, func->name_off);
+	desc_btf = find_kfunc_desc_btf(data, insn->imm, insn->off, NULL);
+	if (IS_ERR(desc_btf))
+		return "<error>";
+
+	func = btf_type_by_id(desc_btf, insn->imm);
+	return btf_name_by_offset(desc_btf, func->name_off);
 }
 
 /* For given verifier state backtrack_insn() is called from the last insn to
@@ -6485,23 +6625,29 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	struct bpf_reg_state *regs = cur_regs(env);
 	const char *func_name, *ptr_type_name;
 	u32 i, nargs, func_id, ptr_type_id;
+	struct module *btf_mod = NULL;
 	const struct btf_param *args;
+	struct btf *desc_btf;
 	int err;
 
+	desc_btf = find_kfunc_desc_btf(env, insn->imm, insn->off, &btf_mod);
+	if (IS_ERR(desc_btf))
+		return PTR_ERR(desc_btf);
+
 	func_id = insn->imm;
-	func = btf_type_by_id(btf_vmlinux, func_id);
-	func_name = btf_name_by_offset(btf_vmlinux, func->name_off);
-	func_proto = btf_type_by_id(btf_vmlinux, func->type);
+	func = btf_type_by_id(desc_btf, func_id);
+	func_name = btf_name_by_offset(desc_btf, func->name_off);
+	func_proto = btf_type_by_id(desc_btf, func->type);
 
 	if (!env->ops->check_kfunc_call ||
-	    !env->ops->check_kfunc_call(func_id)) {
+	    !env->ops->check_kfunc_call(func_id, btf_mod)) {
 		verbose(env, "calling kernel function %s is not allowed\n",
 			func_name);
 		return -EACCES;
 	}
 
 	/* Check the arguments */
-	err = btf_check_kfunc_arg_match(env, btf_vmlinux, func_id, regs);
+	err = btf_check_kfunc_arg_match(env, desc_btf, func_id, regs);
 	if (err)
 		return err;
 
@@ -6509,15 +6655,15 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		mark_reg_not_init(env, regs, caller_saved[i]);
 
 	/* Check return type */
-	t = btf_type_skip_modifiers(btf_vmlinux, func_proto->type, NULL);
+	t = btf_type_skip_modifiers(desc_btf, func_proto->type, NULL);
 	if (btf_type_is_scalar(t)) {
 		mark_reg_unknown(env, regs, BPF_REG_0);
 		mark_btf_func_reg_size(env, BPF_REG_0, t->size);
 	} else if (btf_type_is_ptr(t)) {
-		ptr_type = btf_type_skip_modifiers(btf_vmlinux, t->type,
+		ptr_type = btf_type_skip_modifiers(desc_btf, t->type,
 						   &ptr_type_id);
 		if (!btf_type_is_struct(ptr_type)) {
-			ptr_type_name = btf_name_by_offset(btf_vmlinux,
+			ptr_type_name = btf_name_by_offset(desc_btf,
 							   ptr_type->name_off);
 			verbose(env, "kernel function %s returns pointer type %s %s is not supported\n",
 				func_name, btf_type_str(ptr_type),
@@ -6525,7 +6671,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 			return -EINVAL;
 		}
 		mark_reg_known_zero(env, regs, BPF_REG_0);
-		regs[BPF_REG_0].btf = btf_vmlinux;
+		regs[BPF_REG_0].btf = desc_btf;
 		regs[BPF_REG_0].type = PTR_TO_BTF_ID;
 		regs[BPF_REG_0].btf_id = ptr_type_id;
 		mark_btf_func_reg_size(env, BPF_REG_0, sizeof(void *));
@@ -6536,7 +6682,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	for (i = 0; i < nargs; i++) {
 		u32 regno = i + 1;
 
-		t = btf_type_skip_modifiers(btf_vmlinux, args[i].type, NULL);
+		t = btf_type_skip_modifiers(desc_btf, args[i].type, NULL);
 		if (btf_type_is_ptr(t))
 			mark_btf_func_reg_size(env, regno, sizeof(void *));
 		else
@@ -11076,7 +11222,8 @@ static int do_check(struct bpf_verifier_env *env)
 			env->jmps_processed++;
 			if (opcode == BPF_CALL) {
 				if (BPF_SRC(insn->code) != BPF_K ||
-				    insn->off != 0 ||
+				    (insn->src_reg != BPF_PSEUDO_KFUNC_CALL
+				     && insn->off != 0) ||
 				    (insn->src_reg != BPF_REG_0 &&
 				     insn->src_reg != BPF_PSEUDO_CALL &&
 				     insn->src_reg != BPF_PSEUDO_KFUNC_CALL) ||
@@ -12432,6 +12579,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		func[i]->aux->stack_depth = env->subprog_info[i].stack_depth;
 		func[i]->jit_requested = 1;
 		func[i]->aux->kfunc_tab = prog->aux->kfunc_tab;
+		func[i]->aux->kfunc_btf_tab = prog->aux->kfunc_btf_tab;
 		func[i]->aux->linfo = prog->aux->linfo;
 		func[i]->aux->nr_linfo = prog->aux->nr_linfo;
 		func[i]->aux->jited_linfo = prog->aux->jited_linfo;
@@ -12621,7 +12769,7 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
 	/* insn->imm has the btf func_id. Replace it with
 	 * an address (relative to __bpf_base_call).
 	 */
-	desc = find_kfunc_desc(env->prog, insn->imm);
+	desc = find_kfunc_desc(env->prog, insn->imm, insn->off);
 	if (!desc) {
 		verbose(env, "verifier internal error: kernel function descriptor not found for func_id %u\n",
 			insn->imm);
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index fcb2f493f710..fe5c34f414a2 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -241,7 +241,7 @@ BTF_ID(func, bpf_kfunc_call_test2)
 BTF_ID(func, bpf_kfunc_call_test3)
 BTF_SET_END(test_sk_kfunc_ids)
 
-bool bpf_prog_test_check_kfunc_call(u32 kfunc_id)
+bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner)
 {
 	return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id);
 }
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index 0dcee9df1326..b3afd3361f34 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -255,7 +255,7 @@ BTF_ID(func, bbr_set_state)
 #endif	/* CONFIG_X86 */
 BTF_SET_END(bpf_tcp_ca_kfunc_ids)
 
-static bool bpf_tcp_ca_check_kfunc_call(u32 kfunc_btf_id)
+static bool bpf_tcp_ca_check_kfunc_call(u32 kfunc_btf_id, struct module *owner)
 {
 	return btf_id_set_contains(&bpf_tcp_ca_kfunc_ids, kfunc_btf_id);
 }
-- 
2.33.0


* [PATCH bpf-next v4 02/11] bpf: Be conservative while processing invalid kfunc calls
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This patch modifies the BPF verifier to only return an error for invalid kfunc
calls specially marked by userspace (with insn->imm == 0, insn->off == 0) after
the verifier has eliminated dead instructions. Such calls can be handled in the
fixup stage, and skip processing during the add and check stages.

If such an invalid call is eliminated as dead code, the fixup stage will not
encounter an insn->imm of 0; otherwise, it bails out and returns an error.

This will be exposed as weak ksym support in libbpf in later patches.
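
Concretely, the call that userspace marks as invalid looks like this (a sketch
using the definitions from include/uapi/linux/bpf.h):

  /* BPF_PSEUDO_KFUNC_CALL with imm == 0 and off == 0 is skipped by the
   * add and check stages, and only rejected in fixup_kfunc_call() if it
   * survives dead code elimination.
   */
  struct bpf_insn insn = {
  	.code    = BPF_JMP | BPF_CALL,
  	.src_reg = BPF_PSEUDO_KFUNC_CALL,
  	.off     = 0,
  	.imm     = 0,
  };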

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 kernel/bpf/verifier.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 5b465d67cf65..58c86670dc55 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1820,6 +1820,15 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id, s16 offset)
 		prog_aux->kfunc_tab = tab;
 	}
 
+	/* func_id == 0 is always invalid, but instead of returning an error, be
+	 * conservative and wait until the code elimination pass before returning
+	 * error, so that invalid calls that get pruned out can be in BPF programs
+	 * loaded from userspace.  It is also required that offset be untouched
+	 * for such calls.
+	 */
+	if (!func_id && !offset)
+		return 0;
+
 	if (!btf_tab && offset) {
 		btf_tab = kzalloc(sizeof(*btf_tab), GFP_KERNEL);
 		if (!btf_tab)
@@ -6630,6 +6639,10 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn)
 	struct btf *desc_btf;
 	int err;
 
+	/* skip for now, but return error when we find this in fixup_kfunc_call */
+	if (!insn->imm)
+		return 0;
+
 	desc_btf = find_kfunc_desc_btf(env, insn->imm, insn->off, &btf_mod);
 	if (IS_ERR(desc_btf))
 		return PTR_ERR(desc_btf);
@@ -12766,6 +12779,11 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env,
 {
 	const struct bpf_kfunc_desc *desc;
 
+	if (!insn->imm) {
+		verbose(env, "invalid kernel function call not eliminated in verifier pass\n");
+		return -EINVAL;
+	}
+
 	/* insn->imm has the btf func_id. Replace it with
 	 * an address (relative to __bpf_base_call).
 	 */
-- 
2.33.0


* [PATCH bpf-next v4 03/11] bpf: btf: Introduce helpers for dynamic BTF set registration
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This adds helpers for registering a btf_id_set from modules, and a
check_kfunc_call callback that can be used to look them up.

With in-kernel sets, the way this is supposed to work is that the in-kernel
callback first looks up the BTF id in the in-kernel kfunc whitelist, and then
defers to the dynamic BTF set lookup if it doesn't find the BTF id there. If
there is no in-kernel BTF id set, this callback can be used directly.

Also fix the includes for btf.h and bpfptr.h so that they can be included in
isolation. This is in preparation for their usage in the tcp_bbr, tcp_cubic
and tcp_dctcp modules in the next patch.
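
A module would typically pair these helpers as follows (a sketch with
hypothetical names; the TCP CC conversions in a later patch follow this
pattern against the bpf_tcp_ca_kfunc_list introduced there, using
BTF_SET_START/BTF_ID/BTF_SET_END from linux/btf_ids.h):

  BTF_SET_START(my_mod_kfunc_ids)
  BTF_ID(func, my_mod_kfunc)
  BTF_SET_END(my_mod_kfunc_ids)

  static DEFINE_KFUNC_BTF_ID_SET(&my_mod_kfunc_ids, my_mod_kfunc_btf_set);

  static int __init my_mod_init(void)
  {
  	register_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &my_mod_kfunc_btf_set);
  	return 0;
  }

  static void __exit my_mod_exit(void)
  {
  	unregister_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &my_mod_kfunc_btf_set);
  }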

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/bpfptr.h |  1 +
 include/linux/btf.h    | 31 +++++++++++++++++++++++++
 kernel/bpf/btf.c       | 51 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 83 insertions(+)

diff --git a/include/linux/bpfptr.h b/include/linux/bpfptr.h
index 546e27fc6d46..46e1757d06a3 100644
--- a/include/linux/bpfptr.h
+++ b/include/linux/bpfptr.h
@@ -3,6 +3,7 @@
 #ifndef _LINUX_BPFPTR_H
 #define _LINUX_BPFPTR_H
 
+#include <linux/mm.h>
 #include <linux/sockptr.h>
 
 typedef sockptr_t bpfptr_t;
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 214fde93214b..7884983857d9 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -5,6 +5,7 @@
 #define _LINUX_BTF_H 1
 
 #include <linux/types.h>
+#include <linux/bpfptr.h>
 #include <uapi/linux/btf.h>
 #include <uapi/linux/bpf.h>
 
@@ -238,4 +239,34 @@ static inline const char *btf_name_by_offset(const struct btf *btf,
 }
 #endif
 
+struct kfunc_btf_id_set {
+	struct list_head list;
+	struct btf_id_set *set;
+	struct module *owner;
+};
+
+struct kfunc_btf_id_list;
+
+#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
+void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+			       struct kfunc_btf_id_set *s);
+void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+				 struct kfunc_btf_id_set *s);
+#else
+static inline void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+					     struct kfunc_btf_id_set *s)
+{
+}
+static inline void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+					       struct kfunc_btf_id_set *s)
+{
+}
+#endif
+
+#define DECLARE_CHECK_KFUNC_CALLBACK(type)                                     \
+	bool __bpf_check_##type##_kfunc_call(u32 kfunc_id, struct module *owner)
+#define DEFINE_KFUNC_BTF_ID_SET(set, name)                                     \
+	struct kfunc_btf_id_set name = { LIST_HEAD_INIT(name.list), (set),     \
+					 THIS_MODULE }
+
 #endif
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index c3d605b22473..d17f45b163f5 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6343,3 +6343,54 @@ const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = {
 };
 
 BTF_ID_LIST_GLOBAL_SINGLE(btf_task_struct_ids, struct, task_struct)
+
+struct kfunc_btf_id_list {
+	struct list_head list;
+	struct mutex mutex;
+};
+
+#ifdef CONFIG_DEBUG_INFO_BTF_MODULES
+
+void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+			       struct kfunc_btf_id_set *s)
+{
+	mutex_lock(&l->mutex);
+	list_add(&s->list, &l->list);
+	mutex_unlock(&l->mutex);
+}
+EXPORT_SYMBOL_GPL(register_kfunc_btf_id_set);
+
+void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
+				 struct kfunc_btf_id_set *s)
+{
+	mutex_lock(&l->mutex);
+	list_del_init(&s->list);
+	mutex_unlock(&l->mutex);
+}
+EXPORT_SYMBOL_GPL(unregister_kfunc_btf_id_set);
+
+#endif
+
+#define DEFINE_KFUNC_BTF_ID_LIST(name)                                         \
+	struct kfunc_btf_id_list name = { LIST_HEAD_INIT(name.list),           \
+					  __MUTEX_INITIALIZER(name.mutex) }; \
+	EXPORT_SYMBOL_GPL(name)
+
+#define DEFINE_CHECK_KFUNC_CALLBACK(type, list_name)                           \
+	bool __bpf_check_##type##_kfunc_call(u32 kfunc_id,                     \
+					     struct module *owner)             \
+	{                                                                      \
+		struct kfunc_btf_id_set *s;                                    \
+		if (!owner || !IS_ENABLED(CONFIG_DEBUG_INFO_BTF_MODULES))      \
+			return false;                                          \
+		mutex_lock(&list_name.mutex);                                  \
+		list_for_each_entry(s, &list_name.list, list) {                \
+			if (s->owner == owner &&                               \
+			    btf_id_set_contains(s->set, kfunc_id)) {           \
+				mutex_unlock(&list_name.mutex);                \
+				return true;                                   \
+			}                                                      \
+		}                                                              \
+		mutex_unlock(&list_name.mutex);                                \
+		return false;                                                  \
+	}
-- 
2.33.0


* [PATCH bpf-next v4 04/11] tools: Allow specifying base BTF file in resolve_btfids
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This commit allows specifying the base BTF for resolving btf id lists/sets at
link time in the resolve_btfids tool. The base BTF is set to NULL if no path
is passed. This allows resolving BTF ids for kernel module objects.
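
For example (an illustrative invocation; it mirrors the Makefile.modfinal hunk
added in a later patch of this set):

  $ resolve_btfids --no-fail -s vmlinux net/ipv4/tcp_bbr.ko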

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/bpf/resolve_btfids/main.c | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/bpf/resolve_btfids/main.c b/tools/bpf/resolve_btfids/main.c
index de6365b53c9c..206e1120082f 100644
--- a/tools/bpf/resolve_btfids/main.c
+++ b/tools/bpf/resolve_btfids/main.c
@@ -89,6 +89,7 @@ struct btf_id {
 struct object {
 	const char *path;
 	const char *btf;
+	const char *base_btf_path;
 
 	struct {
 		int		 fd;
@@ -477,16 +478,27 @@ static int symbols_resolve(struct object *obj)
 	int nr_structs  = obj->nr_structs;
 	int nr_unions   = obj->nr_unions;
 	int nr_funcs    = obj->nr_funcs;
+	struct btf *base_btf = NULL;
 	int err, type_id;
 	struct btf *btf;
 	__u32 nr_types;
 
-	btf = btf__parse(obj->btf ?: obj->path, NULL);
+	if (obj->base_btf_path) {
+		base_btf = btf__parse(obj->base_btf_path, NULL);
+		err = libbpf_get_error(base_btf);
+		if (err) {
+			pr_err("FAILED: load base BTF from %s: %s\n",
+			       obj->base_btf_path, strerror(-err));
+			return -1;
+		}
+	}
+
+	btf = btf__parse_split(obj->btf ?: obj->path, base_btf);
 	err = libbpf_get_error(btf);
 	if (err) {
 		pr_err("FAILED: load BTF from %s: %s\n",
 			obj->btf ?: obj->path, strerror(-err));
-		return -1;
+		goto out;
 	}
 
 	err = -1;
@@ -545,6 +557,7 @@ static int symbols_resolve(struct object *obj)
 
 	err = 0;
 out:
+	btf__free(base_btf);
 	btf__free(btf);
 	return err;
 }
@@ -697,6 +710,8 @@ int main(int argc, const char **argv)
 			   "BTF data"),
 		OPT_BOOLEAN(0, "no-fail", &no_fail,
 			   "do not fail if " BTF_IDS_SECTION " section is not found"),
+		OPT_STRING('s', "base-btf", &obj.base_btf_path, "file",
+			   "path of file providing base BTF data"),
 		OPT_END()
 	};
 	int err = -1;
-- 
2.33.0


* [PATCH bpf-next v4 05/11] bpf: Enable TCP congestion control kfunc from modules
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This commit moves the BTF ID lookup into the newly added registration helper,
such that the bbr, cubic, and dctcp implementations set up their sets in the
bpf_tcp_ca kfunc_btf_set list, while the kfuncs that do not depend on modules
are still looked up from the wrapper function.

This lifts the restriction that these congestion control algorithms be
compiled as built-in objects; they can now be loaded as modules if required.
Also modify Makefile.modfinal to run resolve_btfids on TCP congestion control
modules if the config option is set, using the base BTF support added in the
previous commit.

See the following commits for background on the use of:

 CONFIG_X86 ifdef:
 569c484f9995 (bpf: Limit static tcp-cc functions in the .BTF_ids list to x86)

 CONFIG_DYNAMIC_FTRACE ifdef:
 7aae231ac93b (bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE)

[ resolve_btfids uses --no-fail because some crypto kernel modules
  under arch/x86/crypto generated from ASM do not have the .BTF section ]

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/btf.h       |  4 ++++
 kernel/bpf/btf.c          |  3 +++
 net/ipv4/bpf_tcp_ca.c     | 34 +++-------------------------------
 net/ipv4/tcp_bbr.c        | 28 +++++++++++++++++++++++++++-
 net/ipv4/tcp_cubic.c      | 26 +++++++++++++++++++++++++-
 net/ipv4/tcp_dctcp.c      | 26 +++++++++++++++++++++++++-
 scripts/Makefile.modfinal |  1 +
 7 files changed, 88 insertions(+), 34 deletions(-)

diff --git a/include/linux/btf.h b/include/linux/btf.h
index 7884983857d9..f5ae81e225be 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -269,4 +269,8 @@ static inline void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
 	struct kfunc_btf_id_set name = { LIST_HEAD_INIT(name.list), (set),     \
 					 THIS_MODULE }
 
+extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list;
+
+DECLARE_CHECK_KFUNC_CALLBACK(bpf_tcp_ca);
+
 #endif
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index d17f45b163f5..671b4f713a51 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6394,3 +6394,6 @@ EXPORT_SYMBOL_GPL(unregister_kfunc_btf_id_set);
 		mutex_unlock(&list_name.mutex);                                \
 		return false;                                                  \
 	}
+
+DEFINE_KFUNC_BTF_ID_LIST(bpf_tcp_ca_kfunc_list);
+DEFINE_CHECK_KFUNC_CALLBACK(bpf_tcp_ca, bpf_tcp_ca_kfunc_list);
diff --git a/net/ipv4/bpf_tcp_ca.c b/net/ipv4/bpf_tcp_ca.c
index b3afd3361f34..c9f1d2dcf67b 100644
--- a/net/ipv4/bpf_tcp_ca.c
+++ b/net/ipv4/bpf_tcp_ca.c
@@ -223,41 +223,13 @@ BTF_ID(func, tcp_reno_cong_avoid)
 BTF_ID(func, tcp_reno_undo_cwnd)
 BTF_ID(func, tcp_slow_start)
 BTF_ID(func, tcp_cong_avoid_ai)
-#ifdef CONFIG_X86
-#ifdef CONFIG_DYNAMIC_FTRACE
-#if IS_BUILTIN(CONFIG_TCP_CONG_CUBIC)
-BTF_ID(func, cubictcp_init)
-BTF_ID(func, cubictcp_recalc_ssthresh)
-BTF_ID(func, cubictcp_cong_avoid)
-BTF_ID(func, cubictcp_state)
-BTF_ID(func, cubictcp_cwnd_event)
-BTF_ID(func, cubictcp_acked)
-#endif
-#if IS_BUILTIN(CONFIG_TCP_CONG_DCTCP)
-BTF_ID(func, dctcp_init)
-BTF_ID(func, dctcp_update_alpha)
-BTF_ID(func, dctcp_cwnd_event)
-BTF_ID(func, dctcp_ssthresh)
-BTF_ID(func, dctcp_cwnd_undo)
-BTF_ID(func, dctcp_state)
-#endif
-#if IS_BUILTIN(CONFIG_TCP_CONG_BBR)
-BTF_ID(func, bbr_init)
-BTF_ID(func, bbr_main)
-BTF_ID(func, bbr_sndbuf_expand)
-BTF_ID(func, bbr_undo_cwnd)
-BTF_ID(func, bbr_cwnd_event)
-BTF_ID(func, bbr_ssthresh)
-BTF_ID(func, bbr_min_tso_segs)
-BTF_ID(func, bbr_set_state)
-#endif
-#endif  /* CONFIG_DYNAMIC_FTRACE */
-#endif	/* CONFIG_X86 */
 BTF_SET_END(bpf_tcp_ca_kfunc_ids)
 
 static bool bpf_tcp_ca_check_kfunc_call(u32 kfunc_btf_id, struct module *owner)
 {
-	return btf_id_set_contains(&bpf_tcp_ca_kfunc_ids, kfunc_btf_id);
+	if (btf_id_set_contains(&bpf_tcp_ca_kfunc_ids, kfunc_btf_id))
+		return true;
+	return __bpf_check_bpf_tcp_ca_kfunc_call(kfunc_btf_id, owner);
 }
 
 static const struct bpf_verifier_ops bpf_tcp_ca_verifier_ops = {
diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 6274462b86b4..ec5550089b4d 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -56,6 +56,8 @@
  * otherwise TCP stack falls back to an internal pacing using one high
  * resolution timer per TCP socket and may use more resources.
  */
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/module.h>
 #include <net/tcp.h>
 #include <linux/inet_diag.h>
@@ -1152,14 +1154,38 @@ static struct tcp_congestion_ops tcp_bbr_cong_ops __read_mostly = {
 	.set_state	= bbr_set_state,
 };
 
+BTF_SET_START(tcp_bbr_kfunc_ids)
+#ifdef CONFIG_X86
+#ifdef CONFIG_DYNAMIC_FTRACE
+BTF_ID(func, bbr_init)
+BTF_ID(func, bbr_main)
+BTF_ID(func, bbr_sndbuf_expand)
+BTF_ID(func, bbr_undo_cwnd)
+BTF_ID(func, bbr_cwnd_event)
+BTF_ID(func, bbr_ssthresh)
+BTF_ID(func, bbr_min_tso_segs)
+BTF_ID(func, bbr_set_state)
+#endif
+#endif
+BTF_SET_END(tcp_bbr_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_ID_SET(&tcp_bbr_kfunc_ids, tcp_bbr_kfunc_btf_set);
+
 static int __init bbr_register(void)
 {
+	int ret;
+
 	BUILD_BUG_ON(sizeof(struct bbr) > ICSK_CA_PRIV_SIZE);
-	return tcp_register_congestion_control(&tcp_bbr_cong_ops);
+	ret = tcp_register_congestion_control(&tcp_bbr_cong_ops);
+	if (ret)
+		return ret;
+	register_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &tcp_bbr_kfunc_btf_set);
+	return 0;
 }
 
 static void __exit bbr_unregister(void)
 {
+	unregister_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &tcp_bbr_kfunc_btf_set);
 	tcp_unregister_congestion_control(&tcp_bbr_cong_ops);
 }
 
diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 4a30deaa9a37..5e9d9c51164c 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -25,6 +25,8 @@
  */
 
 #include <linux/mm.h>
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/module.h>
 #include <linux/math64.h>
 #include <net/tcp.h>
@@ -482,8 +484,25 @@ static struct tcp_congestion_ops cubictcp __read_mostly = {
 	.name		= "cubic",
 };
 
+BTF_SET_START(tcp_cubic_kfunc_ids)
+#ifdef CONFIG_X86
+#ifdef CONFIG_DYNAMIC_FTRACE
+BTF_ID(func, cubictcp_init)
+BTF_ID(func, cubictcp_recalc_ssthresh)
+BTF_ID(func, cubictcp_cong_avoid)
+BTF_ID(func, cubictcp_state)
+BTF_ID(func, cubictcp_cwnd_event)
+BTF_ID(func, cubictcp_acked)
+#endif
+#endif
+BTF_SET_END(tcp_cubic_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_ID_SET(&tcp_cubic_kfunc_ids, tcp_cubic_kfunc_btf_set);
+
 static int __init cubictcp_register(void)
 {
+	int ret;
+
 	BUILD_BUG_ON(sizeof(struct bictcp) > ICSK_CA_PRIV_SIZE);
 
 	/* Precompute a bunch of the scaling factors that are used per-packet
@@ -514,11 +533,16 @@ static int __init cubictcp_register(void)
 	/* divide by bic_scale and by constant Srtt (100ms) */
 	do_div(cube_factor, bic_scale * 10);
 
-	return tcp_register_congestion_control(&cubictcp);
+	ret = tcp_register_congestion_control(&cubictcp);
+	if (ret)
+		return ret;
+	register_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &tcp_cubic_kfunc_btf_set);
+	return 0;
 }
 
 static void __exit cubictcp_unregister(void)
 {
+	unregister_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &tcp_cubic_kfunc_btf_set);
 	tcp_unregister_congestion_control(&cubictcp);
 }
 
diff --git a/net/ipv4/tcp_dctcp.c b/net/ipv4/tcp_dctcp.c
index 79f705450c16..0d7ab3cc7b61 100644
--- a/net/ipv4/tcp_dctcp.c
+++ b/net/ipv4/tcp_dctcp.c
@@ -36,6 +36,8 @@
  *	Glenn Judd <glenn.judd@morganstanley.com>
  */
 
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/module.h>
 #include <linux/mm.h>
 #include <net/tcp.h>
@@ -236,14 +238,36 @@ static struct tcp_congestion_ops dctcp_reno __read_mostly = {
 	.name		= "dctcp-reno",
 };
 
+BTF_SET_START(tcp_dctcp_kfunc_ids)
+#ifdef CONFIG_X86
+#ifdef CONFIG_DYNAMIC_FTRACE
+BTF_ID(func, dctcp_init)
+BTF_ID(func, dctcp_update_alpha)
+BTF_ID(func, dctcp_cwnd_event)
+BTF_ID(func, dctcp_ssthresh)
+BTF_ID(func, dctcp_cwnd_undo)
+BTF_ID(func, dctcp_state)
+#endif
+#endif
+BTF_SET_END(tcp_dctcp_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_ID_SET(&tcp_dctcp_kfunc_ids, tcp_dctcp_kfunc_btf_set);
+
 static int __init dctcp_register(void)
 {
+	int ret;
+
 	BUILD_BUG_ON(sizeof(struct dctcp) > ICSK_CA_PRIV_SIZE);
-	return tcp_register_congestion_control(&dctcp);
+	ret = tcp_register_congestion_control(&dctcp);
+	if (ret)
+		return ret;
+	register_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &tcp_dctcp_kfunc_btf_set);
+	return 0;
 }
 
 static void __exit dctcp_unregister(void)
 {
+	unregister_kfunc_btf_id_set(&bpf_tcp_ca_kfunc_list, &tcp_dctcp_kfunc_btf_set);
 	tcp_unregister_congestion_control(&dctcp);
 }
 
diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
index ff805777431c..b4f83533eda6 100644
--- a/scripts/Makefile.modfinal
+++ b/scripts/Makefile.modfinal
@@ -41,6 +41,7 @@ quiet_cmd_btf_ko = BTF [M] $@
       cmd_btf_ko = 							\
 	if [ -f vmlinux ]; then						\
 		LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \
+		$(RESOLVE_BTFIDS) --no-fail -s vmlinux $@; 		\
 	else								\
 		printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \
 	fi;
-- 
2.33.0


* [PATCH bpf-next v4 06/11] libbpf: Support kernel module function calls
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This patch adds libbpf support for kernel module function calls. The fd_array
parameter is used during BPF program load to pass module BTFs referenced by
the program. insn->off is set to the index into this array, starting from 1,
because insn->off == 0 is reserved for btf_vmlinux.

We try to reuse an existing insn->off for a module, since the kernel limits
the number of distinct module BTFs for kfuncs to 256, and also because the
index must never exceed the maximum value that fits in insn->off (INT16_MAX).
In the future, if the kernel interprets the signed offset as unsigned for
kfunc calls, this limit can be increased to UINT16_MAX.
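
The deduplication boils down to the following (a simplified sketch of the
logic in the hunk below, without the realloc/capacity handling):

  /* Find the existing fd_array slot for this module BTF fd, else append;
   * slot 0 stays reserved so that off == 0 keeps meaning vmlinux BTF.
   */
  static int fd_array_index(int *fd_array, int *nr_fds, int kern_btf_fd)
  {
  	for (int i = 1; i < *nr_fds; i++)
  		if (fd_array[i] == kern_btf_fd)
  			return i;
  	fd_array[*nr_fds] = kern_btf_fd;
  	return (*nr_fds)++;
  }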

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/lib/bpf/bpf.c             |  1 +
 tools/lib/bpf/libbpf.c          | 58 +++++++++++++++++++++++++++++++--
 tools/lib/bpf/libbpf_internal.h |  1 +
 3 files changed, 57 insertions(+), 3 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 2401fad090c5..7d1741ceaa32 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -264,6 +264,7 @@ int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr)
 	attr.line_info_rec_size = load_attr->line_info_rec_size;
 	attr.line_info_cnt = load_attr->line_info_cnt;
 	attr.line_info = ptr_to_u64(load_attr->line_info);
+	attr.fd_array = ptr_to_u64(load_attr->fd_array);
 
 	if (load_attr->name)
 		memcpy(attr.prog_name, load_attr->name,
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index da65a1666a5e..3049dfc6088e 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -420,6 +420,12 @@ struct extern_desc {
 
 			/* local btf_id of the ksym extern's type. */
 			__u32 type_id;
+			/* offset to be patched in for insn->off,
+			 * this is 0 for btf_vmlinux, and index + 1
+			 * for module BTF, where index is BTF index in
+			 * obj->fd_array
+			 */
+			__s16 offset;
 		} ksym;
 	};
 };
@@ -516,6 +522,10 @@ struct bpf_object {
 	void *priv;
 	bpf_object_clear_priv_t clear_priv;
 
+	int *fd_array;
+	size_t fd_cap_cnt;
+	int nr_fds;
+
 	char path[];
 };
 #define obj_elf_valid(o)	((o)->efile.elf)
@@ -5357,6 +5367,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
 			ext = &obj->externs[relo->sym_off];
 			insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
 			insn[0].imm = ext->ksym.kernel_btf_id;
+			insn[0].off = ext->ksym.offset;
 			break;
 		case RELO_SUBPROG_ADDR:
 			if (insn[0].src_reg != BPF_PSEUDO_FUNC) {
@@ -6151,6 +6162,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
 	}
 	load_attr.log_level = prog->log_level;
 	load_attr.prog_flags = prog->prog_flags;
+	load_attr.fd_array = prog->obj->fd_array;
 
 	if (prog->obj->gen_loader) {
 		bpf_gen__prog_load(prog->obj->gen_loader, &load_attr,
@@ -6763,9 +6775,46 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
 	}
 
 	if (kern_btf != obj->btf_vmlinux) {
-		pr_warn("extern (func ksym) '%s': function in kernel module is not supported\n",
-			ext->name);
-		return -ENOTSUP;
+		int index = -1;
+
+		if (!obj->fd_array) {
+			obj->fd_array = calloc(8, sizeof(*obj->fd_array));
+			if (!obj->fd_array)
+				return -ENOMEM;
+			obj->fd_cap_cnt = 8;
+			/* index = 0 is for vmlinux BTF, so skip it */
+			obj->nr_fds = 1;
+		}
+
+		for (int i = 0; i < obj->nr_fds; i++) {
+			if (obj->fd_array[i] == kern_btf_fd) {
+				index = i;
+				break;
+			}
+		}
+
+		if (index == -1) {
+			if (obj->nr_fds == obj->fd_cap_cnt) {
+				ret = libbpf_ensure_mem((void **)&obj->fd_array,
+							&obj->fd_cap_cnt, sizeof(int),
+							obj->fd_cap_cnt + 1);
+				if (ret)
+					return ret;
+			}
+
+			index = obj->nr_fds;
+			obj->fd_array[obj->nr_fds++] = kern_btf_fd;
+		}
+
+		if (index > INT16_MAX) {
+			/* insn->off is s16 */
+			pr_warn("extern (func ksym) '%s': module btf fd index too big\n",
+				ext->name);
+			return -E2BIG;
+		}
+		ext->ksym.offset = index;
+	} else {
+		ext->ksym.offset = 0;
 	}
 
 	kern_func = btf__type_by_id(kern_btf, kfunc_id);
@@ -6941,6 +6990,9 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
 			err = bpf_gen__finish(obj->gen_loader);
 	}
 
+	/* clean up fd_array */
+	zfree(&obj->fd_array);
+
 	/* clean up module BTFs */
 	for (i = 0; i < obj->btf_module_cnt; i++) {
 		close(obj->btf_modules[i].fd);
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index ceb0c98979bc..44b8f381b035 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -291,6 +291,7 @@ struct bpf_prog_load_params {
 	__u32 log_level;
 	char *log_buf;
 	size_t log_buf_sz;
+	int *fd_array;
 };
 
 int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr);
-- 
2.33.0


* [PATCH bpf-next v4 07/11] libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

Preserve these calls, as this allows the verifier to succeed in loading the
program if they are determined to be unreachable after dead code elimination
during program load. If a marked call is not eliminated, the verifier rejects
the program at load time in the fixup stage. This is done for ext->is_weak
symbols, similar to the handling of variable ksyms.
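
For example, a BPF program may now declare (a sketch; the extern is the test
kfunc exported by bpf_testmod):

  extern void bpf_testmod_test_mod_kfunc(int i) __ksym __attribute__((weak));

If the symbol cannot be resolved in kernel or module BTFs, the call is emitted
with imm = 0, off = 0, which the verifier (patch 2) tolerates as long as the
call is pruned as dead code.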

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/lib/bpf/libbpf.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 3049dfc6088e..3c195eaadf56 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3413,11 +3413,6 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
 				return -ENOTSUP;
 			}
 		} else if (strcmp(sec_name, KSYMS_SEC) == 0) {
-			if (btf_is_func(t) && ext->is_weak) {
-				pr_warn("extern weak function %s is unsupported\n",
-					ext->name);
-				return -ENOTSUP;
-			}
 			ksym_sec = sec;
 			ext->type = EXT_KSYM;
 			skip_mods_and_typedefs(obj->btf, t->type,
@@ -5366,8 +5361,12 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
 		case RELO_EXTERN_FUNC:
 			ext = &obj->externs[relo->sym_off];
 			insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
-			insn[0].imm = ext->ksym.kernel_btf_id;
-			insn[0].off = ext->ksym.offset;
+			if (ext->is_set) {
+				insn[0].imm = ext->ksym.kernel_btf_id;
+				insn[0].off = ext->ksym.offset;
+			} else { /* unresolved weak kfunc */
+				insn[0].imm = insn[0].off = 0;
+			}
 			break;
 		case RELO_SUBPROG_ADDR:
 			if (insn[0].src_reg != BPF_PSEUDO_FUNC) {
@@ -6768,8 +6767,10 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
 
 	kfunc_id = find_ksym_btf_id(obj, ext->name, BTF_KIND_FUNC,
 				    &kern_btf, &kern_btf_fd);
-	if (kfunc_id < 0) {
-		pr_warn("extern (func ksym) '%s': not found in kernel BTF\n",
+	if (kfunc_id == -ESRCH && ext->is_weak) {
+		return 0;
+	} else if (kfunc_id < 0) {
+		pr_warn("extern (func ksym) '%s': not found in kernel or module BTFs\n",
 			ext->name);
 		return kfunc_id;
 	}
-- 
2.33.0


* [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This change updates the BPF syscall loader to handle BTF_KIND_FUNC
relocations, with support for weak kfunc relocations. The next commit adds
bpftool support to set up the fd_array_sz parameter for the light skeleton.

A second map for keeping fds is used instead of adding the fds to the existing
loader.map for the following reasons:

If we reserved an area for map and BTF fds, we would waste the rest of the
(MAX_USED_MAPS + MAX_KFUNC_DESCS) * sizeof(int) region, which in most cases
would go unused by the program. Also, we would have to place some limit on the
number of map and BTF fds a program can possibly open.

If we instead set gen->fd_array to the first map_fd offset and computed later
BTF fd offsets relative to it, so that they could be packed without wasting
space, we would run the risk of unnecessarily running out of valid offsets for
the emit_relo stage (for kfuncs), because the gen map creation and relocation
stages are separated by other steps that can add lots of data (including
bpf_object__populate_internal_map). It is also prone to breaking silently if
features added between the map and BTF fd emits add more data (just ~128KB is
enough to break the BTF fd offsets, since insn->off allows for at most
INT16_MAX (32767) * 4 bytes).

Both of these issues are compounded by the fact that the data map is shared by
all programs, so it is easy to end up with an invalid offset for a BTF fd.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/lib/bpf/bpf_gen_internal.h |  16 ++-
 tools/lib/bpf/gen_loader.c       | 222 ++++++++++++++++++++++++++-----
 tools/lib/bpf/libbpf.c           |   8 +-
 tools/lib/bpf/libbpf.h           |   1 +
 tools/lib/bpf/skel_internal.h    |  27 +++-
 5 files changed, 230 insertions(+), 44 deletions(-)

diff --git a/tools/lib/bpf/bpf_gen_internal.h b/tools/lib/bpf/bpf_gen_internal.h
index 615400391e57..c4aa86865b65 100644
--- a/tools/lib/bpf/bpf_gen_internal.h
+++ b/tools/lib/bpf/bpf_gen_internal.h
@@ -7,6 +7,16 @@ struct ksym_relo_desc {
 	const char *name;
 	int kind;
 	int insn_idx;
+	bool is_weak;
+};
+
+/* maximum distinct calls */
+#define MAX_KFUNC_DESCS 256
+
+struct kfunc_desc {
+	const char *name;
+	int ref;
+	int off;
 };
 
 struct bpf_gen {
@@ -23,7 +33,10 @@ struct bpf_gen {
 	struct ksym_relo_desc *relos;
 	int relo_cnt;
 	char attach_target[128];
+	struct kfunc_desc kdescs[MAX_KFUNC_DESCS];
 	int attach_kind;
+	__u32 nr_kfuncs;
+	int fd_array_sz;
 };
 
 void bpf_gen__init(struct bpf_gen *gen, int log_level);
@@ -36,6 +49,7 @@ void bpf_gen__prog_load(struct bpf_gen *gen, struct bpf_prog_load_params *load_a
 void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *value, __u32 value_size);
 void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx);
 void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *name, enum bpf_attach_type type);
-void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind, int insn_idx);
+void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak, int kind,
+			    int insn_idx);
 
 #endif
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 8df718a6b142..c471e0844b22 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -5,6 +5,7 @@
 #include <string.h>
 #include <errno.h>
 #include <linux/filter.h>
+#include <linux/kernel.h>
 #include "btf.h"
 #include "bpf.h"
 #include "libbpf.h"
@@ -13,7 +14,6 @@
 #include "bpf_gen_internal.h"
 #include "skel_internal.h"
 
-#define MAX_USED_MAPS 64
 #define MAX_USED_PROGS 32
 
 /* The following structure describes the stack layout of the loader program.
@@ -26,10 +26,10 @@
  * stack - bpf program stack
  * blob - bpf_attr-s, strings, insns, map data.
  *        All the bytes that loader prog will use for read/write.
+ * fd_blob - map fds, kfunc module btf fds
  */
 struct loader_stack {
 	__u32 btf_fd;
-	__u32 map_fd[MAX_USED_MAPS];
 	__u32 prog_fd[MAX_USED_PROGS];
 	__u32 inner_map_fd;
 };
@@ -145,6 +145,16 @@ static int add_data(struct bpf_gen *gen, const void *data, __u32 size)
 	return prev - gen->data_start;
 }
 
+/* return offset in fd_blob */
+static int add_fd(struct bpf_gen *gen)
+{
+	int prev;
+
+	prev = gen->fd_array_sz;
+	gen->fd_array_sz += sizeof(int);
+	return prev;
+}
+
 static int insn_bytes_to_bpf_size(__u32 sz)
 {
 	switch (sz) {
@@ -166,16 +176,34 @@ static void emit_rel_store(struct bpf_gen *gen, int off, int data)
 	emit(gen, BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0));
 }
 
-/* *(u64 *)(blob + off) = (u64)(void *)(%sp + stack_off) */
-static void emit_rel_store_sp(struct bpf_gen *gen, int off, int stack_off)
+/* *(u64 *)(blob + off) = (u64)(void *)(fd_blob + data) */
+static void emit_rel_store_fd(struct bpf_gen *gen, int off, int data)
 {
-	emit(gen, BPF_MOV64_REG(BPF_REG_0, BPF_REG_10));
-	emit(gen, BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, stack_off));
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 1, data));
 	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
 					 0, 0, 0, off));
 	emit(gen, BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0));
 }
 
+static void move_fd_blob2blob(struct bpf_gen *gen, int off, int size, int blob_off)
+{
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 0, off));
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_7, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 1, blob_off));
+	emit(gen, BPF_LDX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_0, BPF_REG_7, 0));
+	emit(gen, BPF_STX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_1, BPF_REG_0, 0));
+}
+
+static void move_fd_blob2ctx(struct bpf_gen *gen, int off, int size, int blob_off)
+{
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 1, blob_off));
+	emit(gen, BPF_LDX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_0, BPF_REG_1, 0));
+	emit(gen, BPF_STX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_6, BPF_REG_0, off));
+}
+
 static void move_ctx2blob(struct bpf_gen *gen, int off, int size, int ctx_off,
 				   bool check_non_zero)
 {
@@ -300,14 +328,24 @@ static void emit_sys_close_stack(struct bpf_gen *gen, int stack_off)
 	__emit_sys_close(gen);
 }
 
-static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
+static void __emit_sys_close_blob(struct bpf_gen *gen, int blob_idx, int blob_off)
 {
 	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE,
-					 0, 0, 0, blob_off));
+					 0, 0, blob_idx, blob_off));
 	emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0));
 	__emit_sys_close(gen);
 }
 
+static void emit_sys_close_blob(struct bpf_gen *gen, int blob_off)
+{
+	__emit_sys_close_blob(gen, 0, blob_off);
+}
+
+static void emit_sys_close_fd_blob(struct bpf_gen *gen, int blob_off)
+{
+	__emit_sys_close_blob(gen, 1, blob_off);
+}
+
 int bpf_gen__finish(struct bpf_gen *gen)
 {
 	int i;
@@ -321,11 +359,11 @@ int bpf_gen__finish(struct bpf_gen *gen)
 			       offsetof(struct bpf_prog_desc, prog_fd), 4,
 			       stack_off(prog_fd[i]));
 	for (i = 0; i < gen->nr_maps; i++)
-		move_stack2ctx(gen,
-			       sizeof(struct bpf_loader_ctx) +
-			       sizeof(struct bpf_map_desc) * i +
-			       offsetof(struct bpf_map_desc, map_fd), 4,
-			       stack_off(map_fd[i]));
+		move_fd_blob2ctx(gen,
+			      sizeof(struct bpf_loader_ctx) +
+			      sizeof(struct bpf_map_desc) * i +
+			      offsetof(struct bpf_map_desc, map_fd), 4,
+			      sizeof(int) * i);
 	emit(gen, BPF_MOV64_IMM(BPF_REG_0, 0));
 	emit(gen, BPF_EXIT_INSN());
 	pr_debug("gen: finish %d\n", gen->error);
@@ -336,6 +374,7 @@ int bpf_gen__finish(struct bpf_gen *gen)
 		opts->insns_sz = gen->insn_cur - gen->insn_start;
 		opts->data = gen->data_start;
 		opts->data_sz = gen->data_cur - gen->data_start;
+		opts->fd_array_sz = gen->fd_array_sz;
 	}
 	return gen->error;
 }
@@ -385,7 +424,7 @@ void bpf_gen__map_create(struct bpf_gen *gen,
 {
 	int attr_size = offsetofend(union bpf_attr, btf_vmlinux_value_type_id);
 	bool close_inner_map_fd = false;
-	int map_create_attr;
+	int map_create_attr, off;
 	union bpf_attr attr;
 
 	memset(&attr, 0, attr_size);
@@ -462,8 +501,10 @@ void bpf_gen__map_create(struct bpf_gen *gen,
 		gen->error = -EDOM; /* internal bug */
 		return;
 	} else {
-		emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7,
-				      stack_off(map_fd[map_idx])));
+		off = add_fd(gen);
+		emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
+						 0, 0, 1, off));
+		emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_7, 0));
 		gen->nr_maps++;
 	}
 	if (close_inner_map_fd)
@@ -506,8 +547,8 @@ static void emit_find_attach_target(struct bpf_gen *gen)
 	 */
 }
 
-void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind,
-			    int insn_idx)
+void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, bool is_weak,
+			    int kind, int insn_idx)
 {
 	struct ksym_relo_desc *relo;
 
@@ -519,11 +560,119 @@ void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind,
 	gen->relos = relo;
 	relo += gen->relo_cnt;
 	relo->name = name;
+	relo->is_weak = is_weak;
 	relo->kind = kind;
 	relo->insn_idx = insn_idx;
 	gen->relo_cnt++;
 }
 
+/* returns existing kfunc_desc with ref incremented, or inserts a new one */
+static struct kfunc_desc *get_kfunc_desc(struct bpf_gen *gen, const char *name)
+{
+	struct kfunc_desc *kdesc;
+
+	for (int i = 0; i < gen->nr_kfuncs; i++) {
+		if (!strcmp(gen->kdescs[i].name, name)) {
+			gen->kdescs[i].ref++;
+			return &gen->kdescs[i];
+		}
+	}
+	if (gen->nr_kfuncs == ARRAY_SIZE(gen->kdescs))
+		return NULL;
+	kdesc = &gen->kdescs[gen->nr_kfuncs++];
+	kdesc->name = name;
+	kdesc->ref = 1;
+	kdesc->off = 0;
+	return kdesc;
+}
+
+/* Expects:
+ * BPF_REG_0 - pointer to instruction
+ * BPF_REG_7 - return value of bpf_btf_find_by_name_kind
+ *
+ * We need to reuse BTF fd for same symbol otherwise each relocation takes a new
+ * index, while kernel limits total kfunc BTFs to 256. For duplicate symbols,
+ * this would mean a new BTF fd index for each entry. By pairing symbol name
+ * with index, we get the insn->imm, insn->off pairing that kernel uses for
+ * kfunc_tab, which becomes the effective limit even though all of them may
+ * share same index in fd_array (such that kfunc_btf_tab has 1 element).
+ */
+static void emit_relo_kfunc_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insn)
+{
+	struct kfunc_desc *kdesc, def_kdesc;
+	int off, btf_fd;
+
+	kdesc = get_kfunc_desc(gen, relo->name);
+	if (!kdesc) {
+		/* Fallback to storing fd in fd_array, and let kernel handle too many BTFs/kfuncs */
+		pr_warn("Out of slots for kfunc %s, disabled BTF fd dedup for relocation\n",
+			relo->name);
+		def_kdesc.name = relo->name;
+		def_kdesc.ref = 1;
+		def_kdesc.off = 0;
+		kdesc = &def_kdesc;
+	}
+
+	btf_fd = kdesc->ref > 1 ? kdesc->off : add_fd(gen);
+	/* load slot pointer */
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_8, BPF_PSEUDO_MAP_IDX_VALUE,
+					 0, 0, 1, btf_fd));
+	/* Try to map one insn->off to one insn->imm */
+	if (kdesc->ref > 1) {
+		emit(gen, BPF_MOV64_REG(BPF_REG_9, BPF_REG_7));
+		goto skip_btf_fd;
+	} else {
+		/* cannot use index == 0 */
+		if (!btf_fd) {
+			btf_fd = add_fd(gen);
+			/* shift to next slot */
+			emit(gen, BPF_ALU64_IMM(BPF_ADD, BPF_REG_8, sizeof(int)));
+		}
+		kdesc->off = btf_fd;
+	}
+
+	/* set a default value */
+	emit(gen, BPF_ST_MEM(BPF_W, BPF_REG_8, 0, 0));
+	emit(gen, BPF_MOV64_REG(BPF_REG_9, BPF_REG_7));
+	/* store BTF fd if ret < 0 */
+	emit(gen, BPF_JMP_IMM(BPF_JSLT, BPF_REG_7, 0, 3));
+	emit(gen, BPF_ALU64_IMM(BPF_RSH, BPF_REG_7, 32));
+	/* store BTF fd in slot */
+	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_7, 0));
+	emit(gen, BPF_MOV64_REG(BPF_REG_7, BPF_REG_9));
+skip_btf_fd:
+	/* remember BTF fd to skip insn->off store for vmlinux case */
+	emit(gen, BPF_ALU64_IMM(BPF_RSH, BPF_REG_9, 32));
+	/* store index into insn[insn_idx].off */
+	off = btf_fd / sizeof(int);
+	off = off > INT16_MAX ? 0 : off;
+	/* set a default value */
+	emit(gen, BPF_ST_MEM(BPF_H, BPF_REG_0, offsetof(struct bpf_insn, off), 0));
+	/* skip insn->off store if ret < 0 */
+	emit(gen, BPF_JMP_IMM(BPF_JSLT, BPF_REG_7, 0, 2));
+	/* skip if vmlinux BTF */
+	emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 0, 1));
+	/* store insn->off as the index of BTF fd */
+	emit(gen, BPF_ST_MEM(BPF_H, BPF_REG_0, offsetof(struct bpf_insn, off), off));
+	/* close fd that we skipped storing in fd_array map */
+	if (kdesc->ref > 1) {
+		emit(gen, BPF_MOV64_REG(BPF_REG_1, BPF_REG_9));
+		__emit_sys_close(gen);
+	}
+	if (gen->log_level) {
+		/* reload register 0, overwritten by sys_close */
+		if (kdesc->ref > 1)
+			emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE,
+							 0, 0, 0, insn));
+		emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_0,
+				      offsetof(struct bpf_insn, imm)));
+		emit(gen, BPF_LDX_MEM(BPF_H, BPF_REG_8, BPF_REG_0,
+				      offsetof(struct bpf_insn, off)));
+		debug_regs(gen, BPF_REG_7, BPF_REG_8, "sym (%s:%d): imm: %%d, off: %%d",
+			   relo->name, kdesc->ref);
+	}
+}
+
 static void emit_relo(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insns)
 {
 	int name, insn, len = strlen(relo->name) + 1;
@@ -539,18 +688,24 @@ static void emit_relo(struct bpf_gen *gen, struct ksym_relo_desc *relo, int insn
 	emit(gen, BPF_EMIT_CALL(BPF_FUNC_btf_find_by_name_kind));
 	emit(gen, BPF_MOV64_REG(BPF_REG_7, BPF_REG_0));
 	debug_ret(gen, "find_by_name_kind(%s,%d)", relo->name, relo->kind);
-	emit_check_err(gen);
+	/* if not weak kfunc, emit err check */
+	if (relo->kind != BTF_KIND_FUNC || !relo->is_weak)
+		emit_check_err(gen);
+	insn = insns + sizeof(struct bpf_insn) * relo->insn_idx;
+	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE, 0, 0, 0, insn));
+	/* set a default value */
+	emit(gen, BPF_ST_MEM(BPF_W, BPF_REG_0, offsetof(struct bpf_insn, imm), 0));
+	/* skip success case store if ret < 0 */
+	emit(gen, BPF_JMP_IMM(BPF_JSLT, BPF_REG_7, 0, 1));
 	/* store btf_id into insn[insn_idx].imm */
-	insn = insns + sizeof(struct bpf_insn) * relo->insn_idx +
-		offsetof(struct bpf_insn, imm);
-	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE,
-					 0, 0, 0, insn));
-	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, 0));
+	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_insn, imm)));
 	if (relo->kind == BTF_KIND_VAR) {
 		/* store btf_obj_fd into insn[insn_idx + 1].imm */
 		emit(gen, BPF_ALU64_IMM(BPF_RSH, BPF_REG_7, 32));
 		emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_7,
-				      sizeof(struct bpf_insn)));
+				      sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm)));
+	} else if (relo->kind == BTF_KIND_FUNC) {
+		emit_relo_kfunc_btf(gen, relo, insn);
 	}
 }
 
@@ -558,6 +713,8 @@ static void emit_relos(struct bpf_gen *gen, int insns)
 {
 	int i;
 
+	/* clear kfunc_desc table */
+	gen->nr_kfuncs = 0;
 	for (i = 0; i < gen->relo_cnt; i++)
 		emit_relo(gen, gen->relos + i, insns);
 }
@@ -566,6 +723,8 @@ static void cleanup_relos(struct bpf_gen *gen, int insns)
 {
 	int i, insn;
 
+	for (i = 0; i < gen->nr_kfuncs; i++)
+		emit_sys_close_fd_blob(gen, gen->kdescs[i].off);
 	for (i = 0; i < gen->relo_cnt; i++) {
 		if (gen->relos[i].kind != BTF_KIND_VAR)
 			continue;
@@ -632,9 +791,8 @@ void bpf_gen__prog_load(struct bpf_gen *gen,
 	/* populate union bpf_attr with a pointer to line_info */
 	emit_rel_store(gen, attr_field(prog_load_attr, line_info), line_info);
 
-	/* populate union bpf_attr fd_array with a pointer to stack where map_fds are saved */
-	emit_rel_store_sp(gen, attr_field(prog_load_attr, fd_array),
-			  stack_off(map_fd[0]));
+	/* populate union bpf_attr fd_array with a pointer to data where map_fds are saved */
+	emit_rel_store_fd(gen, attr_field(prog_load_attr, fd_array), 0);
 
 	/* populate union bpf_attr with user provided log details */
 	move_ctx2blob(gen, attr_field(prog_load_attr, log_level), 4,
@@ -701,8 +859,8 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue,
 	emit(gen, BPF_EMIT_CALL(BPF_FUNC_copy_from_user));
 
 	map_update_attr = add_data(gen, &attr, attr_size);
-	move_stack2blob(gen, attr_field(map_update_attr, map_fd), 4,
-			stack_off(map_fd[map_idx]));
+	move_fd_blob2blob(gen, attr_field(map_update_attr, map_fd), 4,
+			  sizeof(int) * map_idx);
 	emit_rel_store(gen, attr_field(map_update_attr, key), key);
 	emit_rel_store(gen, attr_field(map_update_attr, value), value);
 	/* emit MAP_UPDATE_ELEM command */
@@ -720,8 +878,8 @@ void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx)
 	memset(&attr, 0, attr_size);
 	pr_debug("gen: map_freeze: idx %d\n", map_idx);
 	map_freeze_attr = add_data(gen, &attr, attr_size);
-	move_stack2blob(gen, attr_field(map_freeze_attr, map_fd), 4,
-			stack_off(map_fd[map_idx]));
+	move_fd_blob2blob(gen, attr_field(map_freeze_attr, map_fd), 4,
+			  sizeof(int) * map_idx);
 	/* emit MAP_FREEZE command */
 	emit_sys_bpf(gen, BPF_MAP_FREEZE, map_freeze_attr, attr_size);
 	debug_ret(gen, "map_freeze");
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 3c195eaadf56..77dffb66c1fd 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -6264,12 +6264,12 @@ static int bpf_program__record_externs(struct bpf_program *prog)
 					ext->name);
 				return -ENOTSUP;
 			}
-			bpf_gen__record_extern(obj->gen_loader, ext->name, BTF_KIND_VAR,
-					       relo->insn_idx);
+			bpf_gen__record_extern(obj->gen_loader, ext->name, ext->is_weak,
+					       BTF_KIND_VAR, relo->insn_idx);
 			break;
 		case RELO_EXTERN_FUNC:
-			bpf_gen__record_extern(obj->gen_loader, ext->name, BTF_KIND_FUNC,
-					       relo->insn_idx);
+			bpf_gen__record_extern(obj->gen_loader, ext->name, ext->is_weak,
+					       BTF_KIND_FUNC, relo->insn_idx);
 			break;
 		default:
 			continue;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index c90e3d79e72c..fdcdcfab5bce 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -885,6 +885,7 @@ struct gen_loader_opts {
 	const char *insns;
 	__u32 data_sz;
 	__u32 insns_sz;
+	__u32 fd_array_sz;
 };
 
 #define gen_loader_opts__last_field insns_sz
diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
index b22b50c1b173..6c0f0adfd42f 100644
--- a/tools/lib/bpf/skel_internal.h
+++ b/tools/lib/bpf/skel_internal.h
@@ -44,6 +44,7 @@ struct bpf_load_and_run_opts {
 	const void *insns;
 	__u32 data_sz;
 	__u32 insns_sz;
+	__u32 fd_array_sz;
 	const char *errstr;
 };
 
@@ -62,31 +63,41 @@ static inline int skel_closenz(int fd)
 
 static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 {
-	int map_fd = -1, prog_fd = -1, key = 0, err;
+	int map_fd[2] = {-1, -1}, prog_fd = -1, key = 0, err;
 	union bpf_attr attr;
 
-	map_fd = bpf_create_map_name(BPF_MAP_TYPE_ARRAY, "__loader.map", 4,
+	map_fd[0] = bpf_create_map_name(BPF_MAP_TYPE_ARRAY, "__loader.map", 4,
 				     opts->data_sz, 1, 0);
-	if (map_fd < 0) {
+	if (map_fd[0] < 0) {
 		opts->errstr = "failed to create loader map";
 		err = -errno;
 		goto out;
 	}
 
-	err = bpf_map_update_elem(map_fd, &key, opts->data, 0);
+	err = bpf_map_update_elem(map_fd[0], &key, opts->data, 0);
 	if (err < 0) {
 		opts->errstr = "failed to update loader map";
 		err = -errno;
 		goto out;
 	}
 
+	if (opts->fd_array_sz) {
+		map_fd[1] = bpf_create_map_name(BPF_MAP_TYPE_ARRAY, "__loader.fd.map", 4,
+						opts->fd_array_sz, 1, 0);
+		if (map_fd[1] < 0) {
+			opts->errstr = "failed to create loader fd map";
+			err = -errno;
+			goto out;
+		}
+	}
+
 	memset(&attr, 0, sizeof(attr));
 	attr.prog_type = BPF_PROG_TYPE_SYSCALL;
 	attr.insns = (long) opts->insns;
 	attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
 	attr.license = (long) "Dual BSD/GPL";
 	memcpy(attr.prog_name, "__loader.prog", sizeof("__loader.prog"));
-	attr.fd_array = (long) &map_fd;
+	attr.fd_array = (long) map_fd;
 	attr.log_level = opts->ctx->log_level;
 	attr.log_size = opts->ctx->log_size;
 	attr.log_buf = opts->ctx->log_buf;
@@ -113,8 +124,10 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 	}
 	err = 0;
 out:
-	if (map_fd >= 0)
-		close(map_fd);
+	if (map_fd[0] >= 0)
+		close(map_fd[0]);
+	if (map_fd[1] >= 0)
+		close(map_fd[1]);
 	if (prog_fd >= 0)
 		close(prog_fd);
 	return err;
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH bpf-next v4 09/11] tools: bpftool: Add separate fd_array map support for light skeleton
  2021-09-20 14:15 [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (7 preceding siblings ...)
  2021-09-20 14:15 ` [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations Kumar Kartikeya Dwivedi
@ 2021-09-20 14:15 ` Kumar Kartikeya Dwivedi
  2021-09-20 14:15 ` [PATCH bpf-next v4 10/11] libbpf: Fix skel_internal.h to set errno on loader retval < 0 Kumar Kartikeya Dwivedi
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This prepares bpftool to pass the fd_array_sz parameter, so that the
map holding the map and BTF fds can be created by the bpf_load_and_run
function.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/bpf/bpftool/gen.c  | 3 ++-
 tools/bpf/bpftool/prog.c | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
index e3ec47a6a612..5bd3650956a4 100644
--- a/tools/bpf/bpftool/gen.c
+++ b/tools/bpf/bpftool/gen.c
@@ -532,10 +532,11 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
 
 	codegen("\
 		\n\
+			opts.fd_array_sz = %d;				    \n\
 			opts.insns_sz = %d;				    \n\
 			opts.insns = (void *)\"\\			    \n\
 		",
-		opts.insns_sz);
+		opts.fd_array_sz, opts.insns_sz);
 	print_hex(opts.insns, opts.insns_sz);
 	codegen("\
 		\n\
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index 9c3e343b7d87..1d6286d727f4 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -1693,6 +1693,7 @@ static int try_loader(struct gen_loader_opts *gen)
 	opts.data_sz = gen->data_sz;
 	opts.insns = gen->insns;
 	opts.insns_sz = gen->insns_sz;
+	opts.fd_array_sz = gen->fd_array_sz;
 	fds_before = count_open_fds();
 	err = bpf_load_and_run(&opts);
 	fd_delta = count_open_fds() - fds_before;
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH bpf-next v4 10/11] libbpf: Fix skel_internal.h to set errno on loader retval < 0
  2021-09-20 14:15 [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (8 preceding siblings ...)
  2021-09-20 14:15 ` [PATCH bpf-next v4 09/11] tools: bpftool: Add separate fd_array map support for light skeleton Kumar Kartikeya Dwivedi
@ 2021-09-20 14:15 ` Kumar Kartikeya Dwivedi
  2021-09-20 14:15 ` [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support Kumar Kartikeya Dwivedi
  2021-09-21  0:29 ` [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Andrii Nakryiko
  11 siblings, 0 replies; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

When the loader indicates an internal error (the result of a checked
bpf system call), it returns the result in attr.test.retval. However,
tests that rely on ASSERT_OK_PTR for the NULL returned from the light
skeleton may miss that NULL denotes an error if errno is set to 0. This
would result in the skel pointer being NULL while ASSERT_OK_PTR returns
1, leading to a SEGV on dereference of skel, because libbpf_get_error
relies on the assumption that errno is always set in case of error when
ptr == NULL.

In particular, this was observed for the ksyms_module test. When
executed using `./test_progs -t ksyms`, prior tests manipulated errno
and the test didn't crash when it failed at ksyms_module load, whereas
running `./test_progs -t ksyms_module` crashed due to errno being left
untouched.
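
A minimal standalone sketch of the failure mode (get_error() below is a
stand-in for libbpf_get_error(), not the real implementation):

  #include <errno.h>
  #include <stdio.h>

  /* NULL means "the error is in errno", mirroring the libbpf convention */
  static long get_error(const void *ptr)
  {
          if (!ptr)
                  return -errno;  /* errno == 0 is misread as success */
          return 0;
  }

  int main(void)
  {
          const void *skel = NULL;        /* loader failed ... */

          errno = 0;                      /* ... but errno was left at 0 */
          if (!get_error(skel))
                  printf("looks OK; dereferencing skel would SEGV\n");
          return 0;
  }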

Fixes: 67234743736a ("libbpf: Generate loader program out of BPF ELF file.")
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 tools/lib/bpf/skel_internal.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
index 6c0f0adfd42f..c7eb88040d6b 100644
--- a/tools/lib/bpf/skel_internal.h
+++ b/tools/lib/bpf/skel_internal.h
@@ -116,10 +116,12 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 	err = skel_sys_bpf(BPF_PROG_RUN, &attr, sizeof(attr));
 	if (err < 0 || (int)attr.test.retval < 0) {
 		opts->errstr = "failed to execute loader prog";
-		if (err < 0)
+		if (err < 0) {
 			err = -errno;
-		else
+		} else {
 			err = (int)attr.test.retval;
+			errno = -err;
+		}
 		goto out;
 	}
 	err = 0;
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support
  2021-09-20 14:15 [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (9 preceding siblings ...)
  2021-09-20 14:15 ` [PATCH bpf-next v4 10/11] libbpf: Fix skel_internal.h to set errno on loader retval < 0 Kumar Kartikeya Dwivedi
@ 2021-09-20 14:15 ` Kumar Kartikeya Dwivedi
  2021-09-21 23:00   ` Andrii Nakryiko
  2021-09-21  0:29 ` [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Andrii Nakryiko
  11 siblings, 1 reply; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-20 14:15 UTC (permalink / raw)
  To: bpf
  Cc: Kumar Kartikeya Dwivedi, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen, netdev

This adds selftests that test the success and failure paths for module
kfuncs (in the presence of invalid kfunc calls) for both libbpf and
gen_loader. It also adds a prog_test kfunc_btf_id_list so that we can
add a module BTF ID set from bpf_testmod.

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
---
 include/linux/btf.h                           |  2 +
 kernel/bpf/btf.c                              |  2 +
 net/bpf/test_run.c                            |  5 +-
 tools/testing/selftests/bpf/Makefile          |  5 +-
 .../selftests/bpf/bpf_testmod/bpf_testmod.c   | 26 ++++++-
 .../selftests/bpf/prog_tests/ksyms_module.c   | 52 ++++++++++----
 .../bpf/prog_tests/ksyms_module_libbpf.c      | 44 ++++++++++++
 .../selftests/bpf/progs/test_ksyms_module.c   | 41 ++++++++---
 .../bpf/progs/test_ksyms_module_fail.c        | 29 ++++++++
 .../progs/test_ksyms_module_fail_toomany.c    | 19 +++++
 .../bpf/progs/test_ksyms_module_libbpf.c      | 71 +++++++++++++++++++
 .../bpf/progs/test_ksyms_module_util.h        | 48 +++++++++++++
 12 files changed, 317 insertions(+), 27 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
 create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_util.h

diff --git a/include/linux/btf.h b/include/linux/btf.h
index f5ae81e225be..cab6aadf59de 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -270,7 +270,9 @@ static inline void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
 					 THIS_MODULE }
 
 extern struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list;
+extern struct kfunc_btf_id_list prog_test_kfunc_list;
 
 DECLARE_CHECK_KFUNC_CALLBACK(bpf_tcp_ca);
+DECLARE_CHECK_KFUNC_CALLBACK(prog_test);
 
 #endif
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 671b4f713a51..998aca7f42d0 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -6397,3 +6397,5 @@ EXPORT_SYMBOL_GPL(unregister_kfunc_btf_id_set);
 
 DEFINE_KFUNC_BTF_ID_LIST(bpf_tcp_ca_kfunc_list);
 DEFINE_CHECK_KFUNC_CALLBACK(bpf_tcp_ca, bpf_tcp_ca_kfunc_list);
+DEFINE_KFUNC_BTF_ID_LIST(prog_test_kfunc_list);
+DEFINE_CHECK_KFUNC_CALLBACK(prog_test, prog_test_kfunc_list);
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index fe5c34f414a2..caa6831c1849 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -2,6 +2,7 @@
 /* Copyright (c) 2017 Facebook
  */
 #include <linux/bpf.h>
+#include <linux/btf.h>
 #include <linux/btf_ids.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -243,7 +244,9 @@ BTF_SET_END(test_sk_kfunc_ids)
 
 bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner)
 {
-	return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id);
+	if (btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id))
+		return true;
+	return __bpf_check_prog_test_kfunc_call(kfunc_id, owner);
 }
 
 static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 326ea75ce99e..d20ff0563120 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -174,6 +174,7 @@ $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_tes
 	$(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation
 	$(Q)$(MAKE) $(submake_extras) -C bpf_testmod
 	$(Q)cp bpf_testmod/bpf_testmod.ko $@
+	$(Q)$(RESOLVE_BTFIDS) -s ../../../../vmlinux bpf_testmod.ko
 
 $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
 	$(call msg,CC,,$@)
@@ -315,8 +316,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h		\
 		linked_vars.skel.h linked_maps.skel.h
 
 LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
-	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
-	trace_vprintk.c
+	test_ksyms_module.c test_ksyms_module_fail.c test_ksyms_module_fail_toomany.c \
+	test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c
 SKEL_BLACKLIST += $$(LSKELS)
 
 test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index 50fc5561110a..4157d8497963 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2020 Facebook */
+#include <linux/btf.h>
+#include <linux/btf_ids.h>
 #include <linux/error-injection.h>
 #include <linux/init.h>
 #include <linux/module.h>
@@ -10,9 +12,17 @@
 
 #define CREATE_TRACE_POINTS
 #include "bpf_testmod-events.h"
+#include "../progs/test_ksyms_module_util.h"
 
+KFUNC_DEFINE_VALID_DISTINCT_256;
 DEFINE_PER_CPU(int, bpf_testmod_ksym_percpu) = 123;
 
+noinline void
+bpf_testmod_test_mod_kfunc(int i)
+{
+	*(int *)this_cpu_ptr(&bpf_testmod_ksym_percpu) = i;
+}
+
 noinline int bpf_testmod_loop_test(int n)
 {
 	int i, sum = 0;
@@ -71,13 +81,27 @@ static struct bin_attribute bin_attr_bpf_testmod_file __ro_after_init = {
 	.write = bpf_testmod_test_write,
 };
 
+BTF_SET_START(bpf_testmod_kfunc_ids)
+BTF_ID(func, bpf_testmod_test_mod_kfunc)
+KFUNC_BTF_ID_VALID_DISTINCT_256
+BTF_SET_END(bpf_testmod_kfunc_ids)
+
+static DEFINE_KFUNC_BTF_ID_SET(&bpf_testmod_kfunc_ids, bpf_testmod_kfunc_btf_set);
+
 static int bpf_testmod_init(void)
 {
-	return sysfs_create_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
+	int ret;
+
+	ret = sysfs_create_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
+	if (ret)
+		return ret;
+	register_kfunc_btf_id_set(&prog_test_kfunc_list, &bpf_testmod_kfunc_btf_set);
+	return 0;
 }
 
 static void bpf_testmod_exit(void)
 {
+	unregister_kfunc_btf_id_set(&prog_test_kfunc_list, &bpf_testmod_kfunc_btf_set);
 	return sysfs_remove_bin_file(kernel_kobj, &bin_attr_bpf_testmod_file);
 }
 
diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
index 2cd5cded543f..a0dd60f00c57 100644
--- a/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
+++ b/tools/testing/selftests/bpf/prog_tests/ksyms_module.c
@@ -2,30 +2,54 @@
 /* Copyright (c) 2021 Facebook */
 
 #include <test_progs.h>
-#include <bpf/libbpf.h>
-#include <bpf/btf.h>
+#include <network_helpers.h>
 #include "test_ksyms_module.lskel.h"
+#include "test_ksyms_module_fail.lskel.h"
+#include "test_ksyms_module_fail_toomany.lskel.h"
 
-static int duration;
-
-void test_ksyms_module(void)
+void test_ksyms_module_main(void)
 {
-	struct test_ksyms_module* skel;
+	struct test_ksyms_module *skel;
+	int retval;
 	int err;
 
+	if (!env.has_testmod) {
+		test__skip();
+		return;
+	}
+
 	skel = test_ksyms_module__open_and_load();
-	if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
+	if (!ASSERT_OK_PTR(skel, "test_ksyms_module__open_and_load"))
 		return;
 
-	err = test_ksyms_module__attach(skel);
-	if (CHECK(err, "skel_attach", "skeleton attach failed: %d\n", err))
+	err = bpf_prog_test_run(skel->progs.handler.prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
+				NULL, NULL, (__u32 *)&retval, NULL);
+	if (!ASSERT_OK(err, "bpf_prog_test_run"))
 		goto cleanup;
+	ASSERT_EQ(retval, 0, "retval");
+	ASSERT_EQ(skel->bss->out_bpf_testmod_ksym, 42, "bpf_testmod_ksym");
+cleanup:
+	test_ksyms_module__destroy(skel);
+}
 
-	usleep(1);
+void test_ksyms_module_fail(void)
+{
+	struct test_ksyms_module_fail_toomany *skel2;
+	struct test_ksyms_module_fail *skel1;
 
-	ASSERT_EQ(skel->bss->triggered, true, "triggered");
-	ASSERT_EQ(skel->bss->out_mod_ksym_global, 123, "global_ksym_val");
+	skel1 = test_ksyms_module_fail__open_and_load();
+	if (!ASSERT_EQ(skel1, NULL, "test_ksyms_module_fail__open_and_load"))
+		test_ksyms_module_fail__destroy(skel1);
 
-cleanup:
-	test_ksyms_module__destroy(skel);
+	skel2 = test_ksyms_module_fail_toomany__open_and_load();
+	if (!ASSERT_EQ(skel2, NULL, "test_ksyms_module_fail_toomany__open_and_load"))
+		test_ksyms_module_fail_toomany__destroy(skel2);
+}
+
+void test_ksyms_module(void)
+{
+	if (test__start_subtest("main"))
+		test_ksyms_module_main();
+	if (test__start_subtest("fail"))
+		test_ksyms_module_fail();
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c b/tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
new file mode 100644
index 000000000000..d83297724ce8
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <test_progs.h>
+#include <network_helpers.h>
+#include "test_ksyms_module_libbpf.skel.h"
+
+void test_ksyms_module_libbpf(void)
+{
+	struct test_ksyms_module_libbpf *skel;
+	int retval, err;
+
+	if (!env.has_testmod) {
+		test__skip();
+		return;
+	}
+
+	skel = test_ksyms_module_libbpf__open();
+	if (!ASSERT_OK_PTR(skel, "test_ksyms_module_libbpf__open"))
+		return;
+	err = bpf_program__set_autoload(skel->progs.load_fail1, false);
+	if (!ASSERT_OK(err, "bpf_program__set_autoload false load_fail1"))
+		goto cleanup;
+	err = bpf_program__set_autoload(skel->progs.load_fail2, false);
+	if (!ASSERT_OK(err, "bpf_program__set_autoload false load_fail2"))
+		goto cleanup;
+	err = test_ksyms_module_libbpf__load(skel);
+	if (!ASSERT_OK(err, "test_ksyms_module_libbpf__load"))
+		goto cleanup;
+	err = bpf_prog_test_run(bpf_program__fd(skel->progs.handler), 1, &pkt_v4,
+				sizeof(pkt_v4), NULL, NULL, (__u32 *)&retval, NULL);
+	if (!ASSERT_OK(err, "bpf_prog_test_run"))
+		goto cleanup;
+	ASSERT_EQ(retval, 0, "retval");
+	ASSERT_EQ(skel->bss->out_bpf_testmod_ksym, 42, "bpf_testmod_ksym");
+
+	err = bpf_program__load(skel->progs.load_fail1, "GPL", 0);
+	if (!ASSERT_NEQ(err, 0, "bpf_program__load load_fail1"))
+		goto cleanup;
+	err = bpf_program__load(skel->progs.load_fail2, "GPL", 0);
+	if (!ASSERT_NEQ(err, 0, "bpf_program__load load_fail2"))
+		goto cleanup;
+cleanup:
+	test_ksyms_module_libbpf__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module.c b/tools/testing/selftests/bpf/progs/test_ksyms_module.c
index d6a0b3086b90..6b460c69a25a 100644
--- a/tools/testing/selftests/bpf/progs/test_ksyms_module.c
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_module.c
@@ -4,22 +4,45 @@
 #include "vmlinux.h"
 
 #include <bpf/bpf_helpers.h>
+#include "test_ksyms_module_util.h"
 
+KFUNC_KSYM_DECLARE_VALID_DISTINCT_256;
 extern const int bpf_testmod_ksym_percpu __ksym;
+extern void bpf_testmod_test_mod_kfunc(int i) __ksym;
+extern void bpf_testmod_invalid_mod_kfunc(void) __ksym __weak;
 
-int out_mod_ksym_global = 0;
-bool triggered = false;
+int out_bpf_testmod_ksym = 0;
+const volatile int x = 0;
 
-SEC("raw_tp/sys_enter")
-int handler(const void *ctx)
+SEC("classifier")
+int handler(struct __sk_buff *skb)
 {
-	int *val;
-	__u32 cpu;
+	/* This should be preserved by clang, but DCE'd by verifier, and still
+	 * allow loading the classifier prog
+	 */
+	if (x) {
+		bpf_testmod_invalid_mod_kfunc();
+		return -1;
+	}
+	bpf_testmod_test_mod_kfunc(42);
+	out_bpf_testmod_ksym = *(int *)bpf_this_cpu_ptr(&bpf_testmod_ksym_percpu);
+	return 0;
+}
 
-	val = (int *)bpf_this_cpu_ptr(&bpf_testmod_ksym_percpu);
-	out_mod_ksym_global = *val;
-	triggered = true;
+SEC("classifier")
+int load_256(struct __sk_buff *skb)
+{
+	/* this will fail if kfunc doesn't reuse its own btf fd index */
+	KFUNC_VALID_SAME_256;
+	KFUNC_VALID_SAME_ONE;
+	return 0;
+}
 
+SEC("classifier")
+int load_distinct256(struct __sk_buff *skb)
+{
+	/* kfuncs with distinct insn->imm, insn->off */
+	KFUNC_VALID_DISTINCT_256;
 	return 0;
 }
 
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c b/tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
new file mode 100644
index 000000000000..bcf98e814a7a
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
@@ -0,0 +1,29 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+
+#include <bpf/bpf_helpers.h>
+
+extern const int bpf_testmod_ksym_percpu __ksym;
+extern void bpf_testmod_test_mod_kfunc(int i) __ksym;
+extern void bpf_testmod_invalid_mod_kfunc(void) __ksym __weak;
+
+int out_bpf_testmod_ksym = 0;
+const volatile int x = 0;
+
+SEC("classifier")
+int load(struct __sk_buff *skb)
+{
+	/* This should be preserved by clang, but not DCE'd by verifier,
+	 * hence fail loading
+	 */
+	if (!x) {
+		bpf_testmod_invalid_mod_kfunc();
+		return -1;
+	}
+	bpf_testmod_test_mod_kfunc(42);
+	out_bpf_testmod_ksym = *(int *)bpf_this_cpu_ptr(&bpf_testmod_ksym_percpu);
+	return 0;
+}
+
+char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c b/tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
new file mode 100644
index 000000000000..633a743a67c7
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+
+#include <bpf/bpf_helpers.h>
+#include "test_ksyms_module_util.h"
+
+KFUNC_KSYM_DECLARE_VALID_DISTINCT_256;
+extern void bpf_testmod_test_mod_kfunc(int i) __ksym;
+
+SEC("classifier")
+int load(struct __sk_buff *skb)
+{
+	KFUNC_VALID_DISTINCT_256;
+	KFUNC_VALID_SAME_ONE;
+	return 0;
+}
+
+char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c b/tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
new file mode 100644
index 000000000000..079a039ccda5
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
@@ -0,0 +1,71 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include "vmlinux.h"
+
+#include <bpf/bpf_helpers.h>
+#include "test_ksyms_module_util.h"
+
+KFUNC_KSYM_DECLARE_VALID_DISTINCT_256;
+extern const int bpf_testmod_ksym_percpu __ksym;
+extern void bpf_testmod_test_mod_kfunc(int i) __ksym;
+extern void bpf_testmod_invalid_mod_kfunc(void) __ksym __weak;
+
+int out_bpf_testmod_ksym = 0;
+const volatile int x = 0;
+
+SEC("classifier")
+int handler(struct __sk_buff *skb)
+{
+	/* This should be preserved by clang, but DCE'd by verifier, and still
+	 * allow loading the classifier prog
+	 */
+	if (x) {
+		bpf_testmod_invalid_mod_kfunc();
+		return -1;
+	}
+	bpf_testmod_test_mod_kfunc(42);
+	out_bpf_testmod_ksym = *(int *)bpf_this_cpu_ptr(&bpf_testmod_ksym_percpu);
+	return 0;
+}
+
+SEC("classifier")
+int load_fail1(struct __sk_buff *skb)
+{
+	/* This should be preserved by clang, but not DCE'd by verifier,
+	 * hence fail loading
+	 */
+	if (!x) {
+		bpf_testmod_invalid_mod_kfunc();
+		return -1;
+	}
+	bpf_testmod_test_mod_kfunc(42);
+	out_bpf_testmod_ksym = *(int *)bpf_this_cpu_ptr(&bpf_testmod_ksym_percpu);
+	return 0;
+}
+
+SEC("classifier")
+int load_fail2(struct __sk_buff *skb)
+{
+	KFUNC_VALID_DISTINCT_256;
+	KFUNC_VALID_SAME_ONE;
+	return 0;
+}
+
+SEC("classifier")
+int load_256(struct __sk_buff *skb)
+{
+	/* this will fail if kfunc doesn't reuse its own btf fd index */
+	KFUNC_VALID_SAME_256;
+	KFUNC_VALID_SAME_ONE;
+	return 0;
+}
+
+SEC("classifier")
+int load_distinct256(struct __sk_buff *skb)
+{
+	/* kfuncs with distinct insn->imm, insn->off */
+	KFUNC_VALID_DISTINCT_256;
+	return 0;
+}
+
+char LICENSE[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
new file mode 100644
index 000000000000..3afa74841ae0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+#ifndef __KSYMS_MODULE_UTIL_H__
+#define __KSYMS_MODULE_UTIL_H__
+
+#define __KFUNC_NR_EXP(Y)							\
+Y(0) Y(1) Y(2) Y(3) Y(4) Y(5) Y(6) Y(7) Y(8) Y(9) Y(10) Y(11) Y(12)		\
+Y(13) Y(14) Y(15) Y(16) Y(17) Y(18) Y(19) Y(20) Y(21) Y(22) Y(23)		\
+Y(24) Y(25) Y(26) Y(27) Y(28) Y(29) Y(30) Y(31) Y(32) Y(33) Y(34)		\
+Y(35) Y(36) Y(37) Y(38) Y(39) Y(40) Y(41) Y(42) Y(43) Y(44) Y(45)		\
+Y(46) Y(47) Y(48) Y(49) Y(50) Y(51) Y(52) Y(53) Y(54) Y(55) Y(56)		\
+Y(57) Y(58) Y(59) Y(60) Y(61) Y(62) Y(63) Y(64) Y(65) Y(66) Y(67)		\
+Y(68) Y(69) Y(70) Y(71) Y(72) Y(73) Y(74) Y(75) Y(76) Y(77) Y(78)		\
+Y(79) Y(80) Y(81) Y(82) Y(83) Y(84) Y(85) Y(86) Y(87) Y(88) Y(89)		\
+Y(90) Y(91) Y(92) Y(93) Y(94) Y(95) Y(96) Y(97) Y(98) Y(99) Y(100)		\
+Y(101) Y(102) Y(103) Y(104) Y(105) Y(106) Y(107) Y(108) Y(109) Y(110)		\
+Y(111) Y(112) Y(113) Y(114) Y(115) Y(116) Y(117) Y(118) Y(119) Y(120)		\
+Y(121) Y(122) Y(123) Y(124) Y(125) Y(126) Y(127) Y(128) Y(129) Y(130)		\
+Y(131) Y(132) Y(133) Y(134) Y(135) Y(136) Y(137) Y(138) Y(139) Y(140)		\
+Y(141) Y(142) Y(143) Y(144) Y(145) Y(146) Y(147) Y(148) Y(149) Y(150)		\
+Y(151) Y(152) Y(153) Y(154) Y(155) Y(156) Y(157) Y(158) Y(159) Y(160)		\
+Y(161) Y(162) Y(163) Y(164) Y(165) Y(166) Y(167) Y(168) Y(169) Y(170)		\
+Y(171) Y(172) Y(173) Y(174) Y(175) Y(176) Y(177) Y(178) Y(179) Y(180)		\
+Y(181) Y(182) Y(183) Y(184) Y(185) Y(186) Y(187) Y(188) Y(189) Y(190)		\
+Y(191) Y(192) Y(193) Y(194) Y(195) Y(196) Y(197) Y(198) Y(199) Y(200)		\
+Y(201) Y(202) Y(203) Y(204) Y(205) Y(206) Y(207) Y(208) Y(209) Y(210)		\
+Y(211) Y(212) Y(213) Y(214) Y(215) Y(216) Y(217) Y(218) Y(219) Y(220)		\
+Y(221) Y(222) Y(223) Y(224) Y(225) Y(226) Y(227) Y(228) Y(229) Y(230)		\
+Y(231) Y(232) Y(233) Y(234) Y(235) Y(236) Y(237) Y(238) Y(239) Y(240)		\
+Y(241) Y(242) Y(243) Y(244) Y(245) Y(246) Y(247) Y(248) Y(249) Y(250)		\
+Y(251) Y(252) Y(253) Y(254) Y(255)
+
+#define __KFUNC_A(nr) bpf_testmod_test_mod_kfunc_##nr();
+#define KFUNC_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_A)
+
+#define __KFUNC_B(nr) extern void bpf_testmod_test_mod_kfunc_##nr(void) __ksym;
+#define KFUNC_KSYM_DECLARE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_B)
+
+#define __KFUNC_C(nr) noinline void bpf_testmod_test_mod_kfunc_##nr(void) {};
+#define KFUNC_DEFINE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_C)
+
+#define __KFUNC_D(nr) BTF_ID(func, bpf_testmod_test_mod_kfunc_##nr)
+#define KFUNC_BTF_ID_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_D)
+
+#define __KFUNC_E(nr) bpf_testmod_test_mod_kfunc(nr);
+#define KFUNC_VALID_SAME_ONE __KFUNC_E(0)
+#define KFUNC_VALID_SAME_256 __KFUNC_NR_EXP(__KFUNC_E)
+
+#endif
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF
  2021-09-20 14:15 [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
                   ` (10 preceding siblings ...)
  2021-09-20 14:15 ` [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support Kumar Kartikeya Dwivedi
@ 2021-09-21  0:29 ` Andrii Nakryiko
  2021-09-21  4:55   ` Kumar Kartikeya Dwivedi
  11 siblings, 1 reply; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-21  0:29 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> This set enables kernel module function calls, and also modifies verifier logic
> to permit invalid kernel function calls as long as they are pruned as part of
> dead code elimination. This is done to provide better runtime portability for
> BPF objects, which can conditionally disable parts of code that are pruned later
> by the verifier (e.g. const volatile vars, kconfig options). libbpf
> modifications are made along with kernel changes to support module function
> calls. The set includes gen_loader support for emitting kfunc relocations.
>
> It also converts TCP congestion control objects to use the module kfunc support
> instead of relying on IS_BUILTIN ifdef.
>
> Changelog:
> ----------
> v3 -> v4

Please use vmtest.sh locally to test everything. That should help to
avoid breaking our CI ([0]):

  test_ksyms_module_libbpf:PASS:test_ksyms_module_libbpf__open 0 nsec
  test_ksyms_module_libbpf:PASS:bpf_program__set_autoload false
load_fail1 0 nsec
  test_ksyms_module_libbpf:PASS:bpf_program__set_autoload false
load_fail2 0 nsec
  libbpf: load bpf program failed: Invalid argument
  libbpf: -- BEGIN DUMP LOG ---
  libbpf:
  kernel btf_id 81786 is not a function
  processed 0 insns (limit 1000000) max_states_per_insn 0 total_states
0 peak_states 0 mark_read 0

  libbpf: -- END LOG --
  libbpf: failed to load program 'handler'
  libbpf: failed to load object 'test_ksyms_module_libbpf'
  libbpf: failed to load BPF skeleton 'test_ksyms_module_libbpf': -4007
  test_ksyms_module_libbpf:FAIL:test_ksyms_module_libbpf__load
unexpected error: -4007 (errno 4007)
  #66 ksyms_module_libbpf:FAIL

  test_module_attach:PASS:skel_open 0 nsec
  test_module_attach:PASS:set_attach_target 0 nsec
  libbpf: load bpf program failed: Invalid argument
  libbpf: -- BEGIN DUMP LOG ---
  libbpf:
  attach_btf_id 81768 is invalid
  processed 0 insns (limit 1000000) max_states_per_insn 0 total_states
0 peak_states 0 mark_read 0

  libbpf: -- END LOG --
  libbpf: failed to load program 'handle_tp_btf'
  libbpf: failed to load object 'test_module_attach'
  libbpf: failed to load BPF skeleton 'test_module_attach': -4007
  test_module_attach:FAIL:skel_load failed to load skeleton
  #81 module_attach:FAIL

  [0] https://github.com/kernel-patches/bpf/pull/1807/checks?check_run_id=3652765027


> v3: https://lore.kernel.org/bpf/20210915050943.679062-1-memxor@gmail.com
>
>  * Address comments from Alexei
>    * Drop MAX_BPF_STACK change, instead move map_fd and BTF fd to BPF array map
>      and pass fd_array using BPF_PSEUDO_MAP_IDX_VALUE
>  * Address comments from Andrii
>    * Fix selftest to store to variable for observing function call instead of
>      printk and polluting CI logs
>  * Drop use of raw_tp for testing, instead reuse classifier based prog_test_run
>  * Drop index + 1 based insn->off convention for kfunc module calls
>  * Expand selftests to cover more corner cases
>  * Misc cleanups
>
> v2 -> v3
> v2: https://lore.kernel.org/bpf/20210914123750.460750-1-memxor@gmail.com
>
>  * Fix issues pointed out by Kernel Test Robot
>  * Fix find_kfunc_desc to also take offset into consideration when comparing
>
> RFC v1 -> v2
> v1: https://lore.kernel.org/bpf/20210830173424.1385796-1-memxor@gmail.com
>
>  * Address comments from Alexei
>    * Reuse fd_array instead of introducing kfunc_btf_fds array
>    * Take btf and module reference as needed, instead of preloading
>    * Add BTF_KIND_FUNC relocation support to gen_loader infrastructure
>  * Address comments from Andrii
>    * Drop hashmap in libbpf for finding index of existing BTF in fd_array
>    * Preserve invalid kfunc calls only when the symbol is weak
>  * Adjust verifier selftests
>
> Kumar Kartikeya Dwivedi (11):
>   bpf: Introduce BPF support for kernel module function calls
>   bpf: Be conservative while processing invalid kfunc calls
>   bpf: btf: Introduce helpers for dynamic BTF set registration
>   tools: Allow specifying base BTF file in resolve_btfids
>   bpf: Enable TCP congestion control kfunc from modules
>   libbpf: Support kernel module function calls
>   libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0
>   libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
>   tools: bpftool: Add separate fd_array map support for light skeleton
>   libbpf: Fix skel_internal.h to set errno on loader retval < 0
>   bpf: selftests: Add selftests for module kfunc support
>
>  include/linux/bpf.h                           |   8 +-
>  include/linux/bpf_verifier.h                  |   2 +
>  include/linux/bpfptr.h                        |   1 +
>  include/linux/btf.h                           |  37 +++
>  kernel/bpf/btf.c                              |  56 +++++
>  kernel/bpf/core.c                             |   4 +
>  kernel/bpf/verifier.c                         | 220 ++++++++++++++---
>  net/bpf/test_run.c                            |   7 +-
>  net/ipv4/bpf_tcp_ca.c                         |  36 +--
>  net/ipv4/tcp_bbr.c                            |  28 ++-
>  net/ipv4/tcp_cubic.c                          |  26 +-
>  net/ipv4/tcp_dctcp.c                          |  26 +-
>  scripts/Makefile.modfinal                     |   1 +
>  tools/bpf/bpftool/gen.c                       |   3 +-
>  tools/bpf/bpftool/prog.c                      |   1 +
>  tools/bpf/resolve_btfids/main.c               |  19 +-
>  tools/lib/bpf/bpf.c                           |   1 +
>  tools/lib/bpf/bpf_gen_internal.h              |  16 +-
>  tools/lib/bpf/gen_loader.c                    | 222 +++++++++++++++---
>  tools/lib/bpf/libbpf.c                        |  83 +++++--
>  tools/lib/bpf/libbpf.h                        |   1 +
>  tools/lib/bpf/libbpf_internal.h               |   1 +
>  tools/lib/bpf/skel_internal.h                 |  33 ++-
>  tools/testing/selftests/bpf/Makefile          |   5 +-
>  .../selftests/bpf/bpf_testmod/bpf_testmod.c   |  26 +-
>  .../selftests/bpf/prog_tests/ksyms_module.c   |  52 ++--
>  .../bpf/prog_tests/ksyms_module_libbpf.c      |  44 ++++
>  .../selftests/bpf/progs/test_ksyms_module.c   |  41 +++-
>  .../bpf/progs/test_ksyms_module_fail.c        |  29 +++
>  .../progs/test_ksyms_module_fail_toomany.c    |  19 ++
>  .../bpf/progs/test_ksyms_module_libbpf.c      |  71 ++++++
>  .../bpf/progs/test_ksyms_module_util.h        |  48 ++++
>  32 files changed, 1014 insertions(+), 153 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
>
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
  2021-09-20 14:15 ` [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations Kumar Kartikeya Dwivedi
@ 2021-09-21  0:57   ` Alexei Starovoitov
  2021-09-21  4:50     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 31+ messages in thread
From: Alexei Starovoitov @ 2021-09-21  0:57 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Network Development

On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> This change updates the BPF syscall loader to relocate BTF_KIND_FUNC
> relocations, with support for weak kfunc relocations. The next commit
> adds bpftool supports to set up the fd_array_sz parameter for light
> skeleton.
>
> A second map for keeping fds is used instead of adding fds to existing
> loader.map because of following reasons:

but it complicates signing bpf progs a lot.

> If reserving an area for map and BTF fds, we would waste the remaining
> of (MAX_USED_MAPS + MAX_KFUNC_DESCS) * sizeof(int), which in most cases
> will be unused by the program. Also, we must place some limit on the
> amount of map and BTF fds a program can possibly open.

That is just (256 + 64)*4 bytes of data. Really not much.
I wouldn't worry about reserving this space.

> If setting gen->fd_array to first map_fd offset, and then just finding
> the offset relative to this (for later BTF fds), such that they can be
> packed without wasting space, we run the risk of unnecessarily running
> out of valid offset for emit_relo stage (for kfuncs), because gen map
> creation and relocation stages are separated by other steps that can add
> lots of data (including bpf_object__populate_internal_map). It is also
> prone to break silently if features are added between map and BTF fd
> emits that possibly add more data (just ~128KB to break BTF fd, since
> insn->off allows for INT16_MAX (32767) * 4 bytes).

I don't follow this logic.

> Both of these issues are compounded by the fact that data map is shared
> by all programs, so it is easy to end up with invalid offset for BTF fd.

I don't follow this either. There is only one map and one program.
What sharing are you talking about?

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
  2021-09-21  0:57   ` Alexei Starovoitov
@ 2021-09-21  4:50     ` Kumar Kartikeya Dwivedi
  2021-09-21 19:04       ` Alexei Starovoitov
  0 siblings, 1 reply; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-21  4:50 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Network Development

On Tue, Sep 21, 2021 at 06:27:16AM IST, Alexei Starovoitov wrote:
> On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > This change updates the BPF syscall loader to relocate BTF_KIND_FUNC
> > relocations, with support for weak kfunc relocations. The next commit
> > adds bpftool supports to set up the fd_array_sz parameter for light
> > skeleton.
> >
> > A second map for keeping fds is used instead of adding fds to existing
> > loader.map because of following reasons:
>
> but it complicates signing bpf progs a lot.
>

Can you explain this in short? (Just want to understand why it would be
a problem.)

> > If reserving an area for map and BTF fds, we would waste the remaining
> > of (MAX_USED_MAPS + MAX_KFUNC_DESCS) * sizeof(int), which in most cases
> > will be unused by the program. Also, we must place some limit on the
> > amount of map and BTF fds a program can possibly open.
>
> That is just (256 + 64)*4 bytes of data. Really not much.
> I wouldn't worry about reserving this space.
>

Ok, I'll probably go with this now. I didn't realise a separate fd map
would be prohibitive for the signing case, so I thought it would be
nice to lift the limitation on the number of map_fds by packing
fd_array fds in another map.

> > If setting gen->fd_array to first map_fd offset, and then just finding
> > the offset relative to this (for later BTF fds), such that they can be
> > packed without wasting space, we run the risk of unnecessarily running
> > out of valid offset for emit_relo stage (for kfuncs), because gen map
> > creation and relocation stages are separated by other steps that can add
> > lots of data (including bpf_object__populate_internal_map). It is also
> > prone to break silently if features are added between map and BTF fd
> > emits that possibly add more data (just ~128KB to break BTF fd, since
> > insn->off allows for INT16_MAX (32767) * 4 bytes).
>
> I don't follow this logic.
>
> > Both of these issues are compounded by the fact that data map is shared
> > by all programs, so it is easy to end up with invalid offset for BTF fd.
>
> I don't follow this either. There is only one map and one program.
> What sharing are you talking about?

What I saw was that the sequence of calls is like this:
bpf_gen__map_create
add_data - from the first emit we add the map_fd, and we also store
gen->fd_array
then libbpf would call bpf_object__populate_internal_map,
which calls bpf_gen__map_update_elem, which also does add_data (of
arbitrary sizes).

emit_relos happens relatively at the end.
For each program in the object, this sequence can be repeated, such that
for the add_data we do in emit_relos, the offset relative to
gen->fd_array can end up becoming big enough to overflow insn->off (as
all programs in the object add data to the same map), while
gen->fd_array comes from the first map creation.
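
Roughly, as a toy simulation (made-up sizes, not real libbpf state):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          long data_cur = 0, fd_array, btf_slot;

          fd_array = data_cur;            /* first map_fd slot */
          data_cur += sizeof(int);
          data_cur += 200 * 1024;         /* blobs added by later steps */
          btf_slot = data_cur;            /* BTF fd slot from emit_relos */

          printf("insn->off would need %ld, but max is %d\n",
                 (btf_slot - fd_array) / 4, INT16_MAX);
          return 0;
  }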

However reserving an area works ok (like with the stack).
Thanks.

--
Kartikeya

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF
  2021-09-21  0:29 ` [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Andrii Nakryiko
@ 2021-09-21  4:55   ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-21  4:55 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Tue, Sep 21, 2021 at 05:59:22AM IST, Andrii Nakryiko wrote:
> On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > This set enables kernel module function calls, and also modifies verifier logic
> > to permit invalid kernel function calls as long as they are pruned as part of
> > dead code elimination. This is done to provide better runtime portability for
> > BPF objects, which can conditionally disable parts of code that are pruned later
> > by the verifier (e.g. const volatile vars, kconfig options). libbpf
> > modifications are made along with kernel changes to support module function
> > calls. The set includes gen_loader support for emitting kfunc relocations.
> >
> > It also converts TCP congestion control objects to use the module kfunc support
> > instead of relying on IS_BUILTIN ifdef.
> >
> > Changelog:
> > ----------
> > v3 -> v4
>
> Please use vmtest.sh locally to test everything. That should help to
> avoid breaking our CI ([0]):
>

I'll look into it. There were some problems getting it to run on Arch and
Fedora (GLIBC_2.33 not found), so I instead tested in my own VM.

Surprisingly, everything passed when I ran the tests before sending. E.g. I
still get this for the two failing tests in CI:

[root@(none) bpf]# ./test_progs -t ksyms_module_libbpf
#66 ksyms_module_libbpf:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
[root@(none) bpf]# ./test_progs -t module_attach
#81 module_attach:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED

--
Kartikeya

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
  2021-09-21  4:50     ` Kumar Kartikeya Dwivedi
@ 2021-09-21 19:04       ` Alexei Starovoitov
  2021-09-21 22:25         ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 31+ messages in thread
From: Alexei Starovoitov @ 2021-09-21 19:04 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Network Development

On Mon, Sep 20, 2021 at 9:50 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Tue, Sep 21, 2021 at 06:27:16AM IST, Alexei Starovoitov wrote:
> > On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > This change updates the BPF syscall loader to relocate BTF_KIND_FUNC
> > > relocations, with support for weak kfunc relocations. The next commit
> > > adds bpftool support for setting up the fd_array_sz parameter for light
> > > skeleton.
> > >
> > > A second map for keeping fds is used instead of adding fds to existing
> > > loader.map for the following reasons:
> >
> > but it complicates signing bpf progs a lot.
> >
>
> Can you explain this briefly? (Just want to understand why it would be a
> problem.)

The signing idea (and the light skeleton too) relies on two matching blocks:
a signed map and a signed prog that operates on this map.
They have to match and be technically part of a single logical signature
that consists of two pieces.
The second map doesn't quite fit this model, especially since it's an empty
map that is there for temporary use during execution of the loader prog.
That fd_array_sz value would somehow need to be part of the signature.
Adding a 3rd non-generic component to a signature has consequences
for the whole signing process.
The loader prog could have created this temp map on its own,
without asking bpf_load_and_run() to do it and without exposing it
in a signature.
Anyway, signed bpf progs may get solved differently with the latest
proposal from John, but that's a different discussion.
The light skeleton's minimalism is its main advantage. Keeping it to two
pieces, one map and one prog, is its main selling point.
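
To make this concrete, those two pieces are what the generated light
skeleton hands to bpf_load_and_run() via skel_internal.h (roughly; the
opts_data/opts_insn blob names here are just illustrative, taken from how
generated .lskel.h code tends to look):

        struct bpf_load_and_run_opts opts = {};
        int err;

        opts.ctx = (struct bpf_loader_ctx *)skel;
        opts.data = (void *)opts_data;          /* the one signed data map blob */
        opts.data_sz = sizeof(opts_data) - 1;
        opts.insns = (void *)opts_insn;         /* the one signed loader prog */
        opts.insns_sz = sizeof(opts_insn) - 1;
        opts.errstr = "failed to load loader prog";
        err = bpf_load_and_run(&opts);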

> > > If reserving an area for map and BTF fds, we would waste the remaining
> > > of (MAX_USED_MAPS + MAX_KFUNC_DESCS) * sizeof(int), which in most cases
> > > will be unused by the program. Also, we must place some limit on the
> > > amount of map and BTF fds a program can possibly open.
> >
> > That is just (256 + 64)*4 bytes of data. Really not much.
> > I wouldn't worry about reserving this space.
> >
>
> Ok, I'll probably go with this now. I didn't realise a separate fd would be
> prohibitive for the signing case, so I thought it would be nice to lift the
> limitation on the number of map_fds by packing fd_array fds in another map.
>
> > > If setting gen->fd_array to first map_fd offset, and then just finding
> > > the offset relative to this (for later BTF fds), such that they can be
> > > packed without wasting space, we run the risk of unnecessarily running
> > > out of valid offset for emit_relo stage (for kfuncs), because gen map
> > > creation and relocation stages are separated by other steps that can add
> > > lots of data (including bpf_object__populate_internal_map). It is also
> > > prone to break silently if features are added between map and BTF fd
> > > emits that possibly add more data (just ~128KB to break BTF fd, since
> > > insn->off allows for INT16_MAX (32767) * 4 bytes).
> >
> > I don't follow this logic.
> >
> > > Both of these issues are compounded by the fact that data map is shared
> > > by all programs, so it is easy to end up with invalid offset for BTF fd.
> >
> > I don't follow this either. There is only one map and one program.
> > What sharing are you talking about?
>
> What I saw was that the sequence of calls is like this:
> bpf_gen__map_create
> add_data - the first emit adds map_fd, and we also store gen->fd_array
> then libbpf calls bpf_object__populate_internal_map,
> which calls bpf_gen__map_update_elem, which also does add_data (of
> arbitrary sizes).
>
> emit_relos happens relatively late.
> This sequence can be repeated for each program in the object, so the offset
> of the add_data we do in emit_relos, relative to gen->fd_array, can end up
> becoming too big (as all programs in the object add data to the same map),
> while gen->fd_array comes from the first map creation.

You meant to use fd_array as a very, very sparse array
with giant gaps between valid map_fds and btf_fds. Now I see it :)
Indeed, in such a case there is a risk of running out of the 16 bits in bpf_insn->off.
Reserving (256 + 64)*4 bytes at the beginning of the data map should solve it, right?
The loader prog can create a 2nd auxiliary map on the fly,
but it seems easier and simpler to just reserve this space in the one and only map.
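
Roughly what I have in mind (a sketch only; the MAX_FD_ARRAY_SZ name and the
NULL behavior of add_data() are assumptions, not the actual gen_loader code):

        #define MAX_FD_ARRAY_SZ (MAX_USED_MAPS + MAX_KFUNC_DESCS)

        /* called once from bpf_gen__init(), before anything else can add_data(),
         * so the fd_array region sits at a fixed, small offset in the data map
         */
        static void reserve_fd_array(struct bpf_gen *gen)
        {
                /* assumes add_data(gen, NULL, size) reserves zeroed blob space
                 * and returns its offset into the data map
                 */
                gen->fd_array = add_data(gen, NULL, MAX_FD_ARRAY_SZ * sizeof(int));
        }

        /* map fds then live at fixed slots [0, MAX_USED_MAPS), BTF fds right
         * after them, so insn->off always stays well within INT16_MAX
         */
        static int map_fd_slot(struct bpf_gen *gen, int map_idx)
        {
                return gen->fd_array + map_idx * sizeof(int);
        }

        static int btf_fd_slot(struct bpf_gen *gen, int btf_idx)
        {
                return gen->fd_array + (MAX_USED_MAPS + btf_idx) * sizeof(int);
        }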

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 05/11] bpf: Enable TCP congestion control kfunc from modules
  2021-09-20 14:15 ` [PATCH bpf-next v4 05/11] bpf: Enable TCP congestion control kfunc from modules Kumar Kartikeya Dwivedi
@ 2021-09-21 22:18   ` Andrii Nakryiko
  2021-09-21 22:23     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 22:18 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> This commit moves BTF ID lookup into the newly added registration
> helper, in such a way that the bbr, cubic, and dctcp implementations set up
> their sets in the bpf_tcp_ca kfunc_btf_set list, while the ones not
> dependent on modules are looked up from the wrapper function.
>
> This lifts the restriction that they be compiled as built-in objects,
> so they can be loaded as modules if required. Also modify Makefile.modfinal
> to run resolve_btfids on TCP congestion control modules if the config option
> is set, using the base BTF support added in the previous commit.
>
> See following commits for background on use of:
>
>  CONFIG_X86 ifdef:
>  569c484f9995 (bpf: Limit static tcp-cc functions in the .BTF_ids list to x86)
>
>  CONFIG_DYNAMIC_FTRACE ifdef:
>  7aae231ac93b (bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE)
>
> [ resolve_btfids uses --no-fail because some crypto kernel modules
>   under arch/x86/crypto generated from ASM do not have the .BTF sections ]
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  include/linux/btf.h       |  4 ++++
>  kernel/bpf/btf.c          |  3 +++
>  net/ipv4/bpf_tcp_ca.c     | 34 +++-------------------------------
>  net/ipv4/tcp_bbr.c        | 28 +++++++++++++++++++++++++++-
>  net/ipv4/tcp_cubic.c      | 26 +++++++++++++++++++++++++-
>  net/ipv4/tcp_dctcp.c      | 26 +++++++++++++++++++++++++-
>  scripts/Makefile.modfinal |  1 +
>  7 files changed, 88 insertions(+), 34 deletions(-)
>

[...]

> diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
> index ff805777431c..b4f83533eda6 100644
> --- a/scripts/Makefile.modfinal
> +++ b/scripts/Makefile.modfinal
> @@ -41,6 +41,7 @@ quiet_cmd_btf_ko = BTF [M] $@
>        cmd_btf_ko =                                                     \
>         if [ -f vmlinux ]; then                                         \
>                 LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \
> +               $(RESOLVE_BTFIDS) --no-fail -s vmlinux $@;              \

I think I've asked that before, but I don't remember this being
answered. Why is this --no-fail?

>         else                                                            \
>                 printf "Skipping BTF generation for %s due to unavailability of vmlinux\n" $@ 1>&2; \
>         fi;
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 05/11] bpf: Enable TCP congestion control kfunc from modules
  2021-09-21 22:18   ` Andrii Nakryiko
@ 2021-09-21 22:23     ` Kumar Kartikeya Dwivedi
  2021-09-21 23:02       ` Andrii Nakryiko
  0 siblings, 1 reply; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-21 22:23 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Wed, Sep 22, 2021 at 03:48:55AM IST, Andrii Nakryiko wrote:
> On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > This commit moves BTF ID lookup into the newly added registration
> > helper, in such a way that the bbr, cubic, and dctcp implementations set up
> > their sets in the bpf_tcp_ca kfunc_btf_set list, while the ones not
> > dependent on modules are looked up from the wrapper function.
> >
> > This lifts the restriction that they be compiled as built-in objects,
> > so they can be loaded as modules if required. Also modify Makefile.modfinal
> > to run resolve_btfids on TCP congestion control modules if the config option
> > is set, using the base BTF support added in the previous commit.
> >
> > See following commits for background on use of:
> >
> >  CONFIG_X86 ifdef:
> >  569c484f9995 (bpf: Limit static tcp-cc functions in the .BTF_ids list to x86)
> >
> >  CONFIG_DYNAMIC_FTRACE ifdef:
> >  7aae231ac93b (bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE)
> >
> > [ resolve_btfids uses --no-fail because some crypto kernel modules
> >   under arch/x86/crypto generated from ASM do not have the .BTF sections ]
> >
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  include/linux/btf.h       |  4 ++++
> >  kernel/bpf/btf.c          |  3 +++
> >  net/ipv4/bpf_tcp_ca.c     | 34 +++-------------------------------
> >  net/ipv4/tcp_bbr.c        | 28 +++++++++++++++++++++++++++-
> >  net/ipv4/tcp_cubic.c      | 26 +++++++++++++++++++++++++-
> >  net/ipv4/tcp_dctcp.c      | 26 +++++++++++++++++++++++++-
> >  scripts/Makefile.modfinal |  1 +
> >  7 files changed, 88 insertions(+), 34 deletions(-)
> >
>
> [...]
>
> > diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
> > index ff805777431c..b4f83533eda6 100644
> > --- a/scripts/Makefile.modfinal
> > +++ b/scripts/Makefile.modfinal
> > @@ -41,6 +41,7 @@ quiet_cmd_btf_ko = BTF [M] $@
> >        cmd_btf_ko =                                                     \
> >         if [ -f vmlinux ]; then                                         \
> >                 LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \
> > +               $(RESOLVE_BTFIDS) --no-fail -s vmlinux $@;              \
>
> I think I've asked that before, but I don't remember this being
> answered. Why is this --no-fail?
>

Sorry, I missed that mail the first time, and by the time I noticed it was too
late, so I decided to put the reason in the commit message above.

> > [ resolve_btfids uses --no-fail because some crypto kernel modules
> >   under arch/x86/crypto generated from ASM do not have the .BTF sections ]

I could add a mode that only fails when processing a .BTF section that is
present in the object fails. Would that be better?

--
Kartikeya

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations
  2021-09-21 19:04       ` Alexei Starovoitov
@ 2021-09-21 22:25         ` Kumar Kartikeya Dwivedi
  0 siblings, 0 replies; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-21 22:25 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Network Development

On Wed, Sep 22, 2021 at 12:34:41AM IST, Alexei Starovoitov wrote:
> On Mon, Sep 20, 2021 at 9:50 PM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > On Tue, Sep 21, 2021 at 06:27:16AM IST, Alexei Starovoitov wrote:
> > > On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> > > <memxor@gmail.com> wrote:
> > > >
> > > > This change updates the BPF syscall loader to relocate BTF_KIND_FUNC
> > > > relocations, with support for weak kfunc relocations. The next commit
> > > > adds bpftool support for setting up the fd_array_sz parameter for light
> > > > skeleton.
> > > >
> > > > A second map for keeping fds is used instead of adding fds to existing
> > > > loader.map for the following reasons:
> > >
> > > but it complicates signing bpf progs a lot.
> > >
> >
> > Can you explain this briefly? (Just want to understand why it would be a
> > problem.)
>
> The signing idea (and the light skeleton too) relies on two matching blocks:
> a signed map and a signed prog that operates on this map.
> They have to match and be technically part of a single logical signature
> that consists of two pieces.
> The second map doesn't quite fit this model, especially since it's an empty
> map that is there for temporary use during execution of the loader prog.
> That fd_array_sz value would somehow need to be part of the signature.
> Adding a 3rd non-generic component to a signature has consequences
> for the whole signing process.
> The loader prog could have created this temp map on its own,
> without asking bpf_load_and_run() to do it and without exposing it
> in a signature.
> Anyway, signed bpf progs may get solved differently with the latest
> proposal from John, but that's a different discussion.
> The light skeleton's minimalism is its main advantage. Keeping it to two
> pieces, one map and one prog, is its main selling point.
>
> > > > If reserving an area for map and BTF fds, we would waste the remaining
> > > > of (MAX_USED_MAPS + MAX_KFUNC_DESCS) * sizeof(int), which in most cases
> > > > will be unused by the program. Also, we must place some limit on the
> > > > amount of map and BTF fds a program can possibly open.
> > >
> > > That is just (256 + 64)*4 bytes of data. Really not much.
> > > I wouldn't worry about reserving this space.
> > >
> >
> > Ok, I'll probably go with this now. I didn't realise a separate fd would be
> > prohibitive for the signing case, so I thought it would be nice to lift the
> > limitation on the number of map_fds by packing fd_array fds in another map.
> >
> > > > If setting gen->fd_array to first map_fd offset, and then just finding
> > > > the offset relative to this (for later BTF fds), such that they can be
> > > > packed without wasting space, we run the risk of unnecessarily running
> > > > out of valid offset for emit_relo stage (for kfuncs), because gen map
> > > > creation and relocation stages are separated by other steps that can add
> > > > lots of data (including bpf_object__populate_internal_map). It is also
> > > > prone to break silently if features are added between map and BTF fd
> > > > emits that possibly add more data (just ~128KB to break BTF fd, since
> > > > insn->off allows for INT16_MAX (32767) * 4 bytes).
> > >
> > > I don't follow this logic.
> > >
> > > > Both of these issues are compounded by the fact that data map is shared
> > > > by all programs, so it is easy to end up with invalid offset for BTF fd.
> > >
> > > I don't follow this either. There is only one map and one program.
> > > What sharing are you talking about?
> >
> > What I saw was that the sequence of calls is like this:
> > bpf_gen__map_create
> > add_data - the first emit adds map_fd, and we also store gen->fd_array
> > then libbpf calls bpf_object__populate_internal_map,
> > which calls bpf_gen__map_update_elem, which also does add_data (of
> > arbitrary sizes).
> >
> > emit_relos happens relatively late.
> > This sequence can be repeated for each program in the object, so the offset
> > of the add_data we do in emit_relos, relative to gen->fd_array, can end up
> > becoming too big (as all programs in the object add data to the same map),
> > while gen->fd_array comes from the first map creation.
>
> You meant to use fd_array as a very, very sparse array
> with giant gaps between valid map_fds and btf_fds. Now I see it :)
> Indeed, in such a case there is a risk of running out of the 16 bits in bpf_insn->off.
> Reserving (256 + 64)*4 bytes at the beginning of the data map should solve it, right?
> The loader prog can create a 2nd auxiliary map on the fly,
> but it seems easier and simpler to just reserve this space in the one and only map.

Thanks for the explanation! It makes sense. I will fix this in the next spin.

--
Kartikeya

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 06/11] libbpf: Support kernel module function calls
  2021-09-20 14:15 ` [PATCH bpf-next v4 06/11] libbpf: Support kernel module function calls Kumar Kartikeya Dwivedi
@ 2021-09-21 22:41   ` Andrii Nakryiko
  2021-09-24 23:54     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 22:41 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> This patch adds libbpf support for kernel module function calls.
> The fd_array parameter is used during BPF program load to pass
> module BTFs referenced by the program. insn->off is set to the index into
> this array, but starts from 1, because insn->off of 0 is reserved for
> btf_vmlinux.
>
> We try to reuse an existing insn->off for a module, since the kernel limits
> the maximum number of distinct module BTFs for kfuncs to 256, and also because
> the index must never exceed the maximum allowed value that can fit in
> insn->off (INT16_MAX). In the future, if the kernel interprets the signed offset
> as unsigned for kfunc calls, this limit can be increased to UINT16_MAX.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  tools/lib/bpf/bpf.c             |  1 +
>  tools/lib/bpf/libbpf.c          | 58 +++++++++++++++++++++++++++++++--
>  tools/lib/bpf/libbpf_internal.h |  1 +
>  3 files changed, 57 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index 2401fad090c5..7d1741ceaa32 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -264,6 +264,7 @@ int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr)
>         attr.line_info_rec_size = load_attr->line_info_rec_size;
>         attr.line_info_cnt = load_attr->line_info_cnt;
>         attr.line_info = ptr_to_u64(load_attr->line_info);
> +       attr.fd_array = ptr_to_u64(load_attr->fd_array);
>
>         if (load_attr->name)
>                 memcpy(attr.prog_name, load_attr->name,
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index da65a1666a5e..3049dfc6088e 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -420,6 +420,12 @@ struct extern_desc {
>
>                         /* local btf_id of the ksym extern's type. */
>                         __u32 type_id;
> +                       /* offset to be patched in for insn->off,
> +                        * this is 0 for btf_vmlinux, and index + 1

What does "index + 1" mean here? Seems like kernel code is using the
offset as is, without any -1 compensation.

> +                        * for module BTF, where index is BTF index in
> +                        * obj->fd_array
> +                        */
> +                       __s16 offset;
>                 } ksym;
>         };
>  };
> @@ -516,6 +522,10 @@ struct bpf_object {
>         void *priv;
>         bpf_object_clear_priv_t clear_priv;
>
> +       int *fd_array;
> +       size_t fd_cap_cnt;
> +       int nr_fds;
> +
>         char path[];
>  };
>  #define obj_elf_valid(o)       ((o)->efile.elf)
> @@ -5357,6 +5367,7 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
>                         ext = &obj->externs[relo->sym_off];
>                         insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
>                         insn[0].imm = ext->ksym.kernel_btf_id;
> +                       insn[0].off = ext->ksym.offset;
>                         break;
>                 case RELO_SUBPROG_ADDR:
>                         if (insn[0].src_reg != BPF_PSEUDO_FUNC) {
> @@ -6151,6 +6162,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
>         }
>         load_attr.log_level = prog->log_level;
>         load_attr.prog_flags = prog->prog_flags;
> +       load_attr.fd_array = prog->obj->fd_array;
>
>         if (prog->obj->gen_loader) {
>                 bpf_gen__prog_load(prog->obj->gen_loader, &load_attr,
> @@ -6763,9 +6775,46 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
>         }
>
>         if (kern_btf != obj->btf_vmlinux) {
> -               pr_warn("extern (func ksym) '%s': function in kernel module is not supported\n",
> -                       ext->name);
> -               return -ENOTSUP;
> +               int index = -1;
> +
> +               if (!obj->fd_array) {
> +                       obj->fd_array = calloc(8, sizeof(*obj->fd_array));
> +                       if (!obj->fd_array)
> +                               return -ENOMEM;
> +                       obj->fd_cap_cnt = 8;
> +                       /* index = 0 is for vmlinux BTF, so skip it */
> +                       obj->nr_fds = 1;
> +               }

this doesn't make sense; you use libbpf_ensure_mem() and shouldn't do
anything like this, it's all taken care of already

> +
> +               for (int i = 0; i < obj->nr_fds; i++) {
> +                       if (obj->fd_array[i] == kern_btf_fd) {
> +                               index = i;
> +                               break;
> +                       }
> +               }

we can actually avoid all this. We already have a list of module BTFs
in bpf_object (obj->btf_modules), where we remember their id, fd, etc.
We can also remember their fd_arr_idx for quick lookup. Just teach
find_ksym_btf_id() to optionally return struct module_btf * and use
that to find/set the idx. That seems cleaner and would probably be easier
in the future as well.
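
As a sketch (struct layout and parameter names here are my guesses, not the
final shape):

        struct module_btf {
                struct btf *btf;
                int id;
                int fd;
                int fd_array_idx;       /* -1 until placed into obj->fd_array */
        };

        /* optionally hand back the module_btf entry the ksym was resolved in,
         * so the caller can cache/set fd_array_idx on it
         */
        static int find_ksym_btf_id(struct bpf_object *obj, const char *ksym_name,
                                    __u16 kind, struct btf **res_btf,
                                    struct module_btf **res_mod_btf);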

> +
> +               if (index == -1) {
> +                       if (obj->nr_fds == obj->fd_cap_cnt) {

don't check, libbpf_ensure_mem() handles that

> +                               ret = libbpf_ensure_mem((void **)&obj->fd_array,
> +                                                       &obj->fd_cap_cnt, sizeof(int),
> +                                                       obj->fd_cap_cnt + 1);
> +                               if (ret)
> +                                       return ret;
> +                       }
> +
> +                       index = obj->nr_fds;
> +                       obj->fd_array[obj->nr_fds++] = kern_btf_fd;
> +               }
> +
> +               if (index > INT16_MAX) {
> +                       /* insn->off is s16 */
> +                       pr_warn("extern (func ksym) '%s': module btf fd index too big\n",
> +                               ext->name);

can you log the index value here as well? E.g. "module BTF FD index %d is too big\n"?

> +                       return -E2BIG;
> +               }
> +               ext->ksym.offset = index;

> +       } else {
> +               ext->ksym.offset = 0;
>         }

I think it will be cleaner if you move the entire offset determination
logic to after all the other checks are performed and ext is mostly
populated. That will also make the logic shorter and simpler, because
if you find a kern_btf_fd match, you can exit early (or probably rather
goto to report the match and exit). Otherwise

>
>         kern_func = btf__type_by_id(kern_btf, kfunc_id);

this is actually extremely wasteful for module BTFs. Let's add an
internal (at least for now) helper that will search only for "own" BTF
types in the BTF, skipping types in the base BTF. Something like
btf_type_by_id_own()?
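
Maybe along these lines (a sketch assuming it sits in libbpf's btf.c, where
btf->start_id is visible; the exact shape is a guess):

        static const struct btf_type *btf_type_by_id_own(const struct btf *btf, __u32 id)
        {
                /* ids below start_id belong to the base (vmlinux) BTF */
                if (id < btf->start_id)
                        return NULL;
                return btf__type_by_id(btf, id);
        }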

> @@ -6941,6 +6990,9 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
>                         err = bpf_gen__finish(obj->gen_loader);
>         }
>
> +       /* clean up fd_array */
> +       zfree(&obj->fd_array);
> +
>         /* clean up module BTFs */
>         for (i = 0; i < obj->btf_module_cnt; i++) {
>                 close(obj->btf_modules[i].fd);
> diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> index ceb0c98979bc..44b8f381b035 100644
> --- a/tools/lib/bpf/libbpf_internal.h
> +++ b/tools/lib/bpf/libbpf_internal.h
> @@ -291,6 +291,7 @@ struct bpf_prog_load_params {
>         __u32 log_level;
>         char *log_buf;
>         size_t log_buf_sz;
> +       int *fd_array;
>  };
>
>  int libbpf__bpf_prog_load(const struct bpf_prog_load_params *load_attr);
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 07/11] libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0
  2021-09-20 14:15 ` [PATCH bpf-next v4 07/11] libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0 Kumar Kartikeya Dwivedi
@ 2021-09-21 22:47   ` Andrii Nakryiko
  0 siblings, 0 replies; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 22:47 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> Preserve these calls as it allows verifier to succeed in loading the
> program if they are determined to be unreachable after dead code
> elimination during program load. If not, the verifier will fail at
> runtime. This is done for ext->is_weak symbols similar to the case for
> variable ksyms.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---

Looks good with a few nits below

Acked-by: Andrii Nakryiko <andrii@kernel.org>

>  tools/lib/bpf/libbpf.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 3049dfc6088e..3c195eaadf56 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -3413,11 +3413,6 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
>                                 return -ENOTSUP;
>                         }
>                 } else if (strcmp(sec_name, KSYMS_SEC) == 0) {
> -                       if (btf_is_func(t) && ext->is_weak) {
> -                               pr_warn("extern weak function %s is unsupported\n",
> -                                       ext->name);
> -                               return -ENOTSUP;
> -                       }
>                         ksym_sec = sec;
>                         ext->type = EXT_KSYM;
>                         skip_mods_and_typedefs(obj->btf, t->type,
> @@ -5366,8 +5361,12 @@ bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
>                 case RELO_EXTERN_FUNC:
>                         ext = &obj->externs[relo->sym_off];
>                         insn[0].src_reg = BPF_PSEUDO_KFUNC_CALL;
> -                       insn[0].imm = ext->ksym.kernel_btf_id;
> -                       insn[0].off = ext->ksym.offset;
> +                       if (ext->is_set) {
> +                               insn[0].imm = ext->ksym.kernel_btf_id;
> +                               insn[0].off = ext->ksym.offset;
> +                       } else { /* unresolved weak kfunc */
> +                               insn[0].imm = insn[0].off = 0;

it's a bit too easy to miss this, please write as two separate statements
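
i.e.:

        insn[0].imm = 0; /* unresolved weak kfunc */
        insn[0].off = 0;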

> +                       }
>                         break;
>                 case RELO_SUBPROG_ADDR:
>                         if (insn[0].src_reg != BPF_PSEUDO_FUNC) {
> @@ -6768,8 +6767,10 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
>
>         kfunc_id = find_ksym_btf_id(obj, ext->name, BTF_KIND_FUNC,
>                                     &kern_btf, &kern_btf_fd);
> -       if (kfunc_id < 0) {
> -               pr_warn("extern (func ksym) '%s': not found in kernel BTF\n",
> +       if (kfunc_id == -ESRCH && ext->is_weak) {
> +               return 0;
> +       } else if (kfunc_id < 0) {

nit: there is a return above, no need for the "else if"; please drop that
part. It would be nice if you could clean this up in
bpf_object__resolve_ksym_var_btf_id() as well. Thanks!

> +               pr_warn("extern (func ksym) '%s': not found in kernel or module BTFs\n",
>                         ext->name);
>                 return kfunc_id;
>         }
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support
  2021-09-20 14:15 ` [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support Kumar Kartikeya Dwivedi
@ 2021-09-21 23:00   ` Andrii Nakryiko
  2021-09-21 23:13     ` Kumar Kartikeya Dwivedi
  0 siblings, 1 reply; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 23:00 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Mon, Sep 20, 2021 at 7:16 AM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> This adds selftests that test the success and failure paths for module
> kfuncs (in the presence of invalid kfunc calls) for both libbpf and
> gen_loader. It also adds a prog_test kfunc_btf_id_list so that we can
> add a module BTF ID set from bpf_testmod.
>
> Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> ---
>  include/linux/btf.h                           |  2 +
>  kernel/bpf/btf.c                              |  2 +
>  net/bpf/test_run.c                            |  5 +-
>  tools/testing/selftests/bpf/Makefile          |  5 +-
>  .../selftests/bpf/bpf_testmod/bpf_testmod.c   | 26 ++++++-
>  .../selftests/bpf/prog_tests/ksyms_module.c   | 52 ++++++++++----
>  .../bpf/prog_tests/ksyms_module_libbpf.c      | 44 ++++++++++++
>  .../selftests/bpf/progs/test_ksyms_module.c   | 41 ++++++++---
>  .../bpf/progs/test_ksyms_module_fail.c        | 29 ++++++++
>  .../progs/test_ksyms_module_fail_toomany.c    | 19 +++++
>  .../bpf/progs/test_ksyms_module_libbpf.c      | 71 +++++++++++++++++++
>  .../bpf/progs/test_ksyms_module_util.h        | 48 +++++++++++++
>  12 files changed, 317 insertions(+), 27 deletions(-)
>  create mode 100644 tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
>  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
>

[...]

> @@ -243,7 +244,9 @@ BTF_SET_END(test_sk_kfunc_ids)
>
>  bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner)
>  {
> -       return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id);
> +       if (btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id))
> +               return true;
> +       return __bpf_check_prog_test_kfunc_call(kfunc_id, owner);
>  }
>
>  static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
> diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> index 326ea75ce99e..d20ff0563120 100644
> --- a/tools/testing/selftests/bpf/Makefile
> +++ b/tools/testing/selftests/bpf/Makefile
> @@ -174,6 +174,7 @@ $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_tes
>         $(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation
>         $(Q)$(MAKE) $(submake_extras) -C bpf_testmod
>         $(Q)cp bpf_testmod/bpf_testmod.ko $@
> +       $(Q)$(RESOLVE_BTFIDS) -s ../../../../vmlinux bpf_testmod.ko

$(VMLINUX_BTF) instead of "../../../../vmlinux", otherwise it will break
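
i.e.:

        $(Q)$(RESOLVE_BTFIDS) -s $(VMLINUX_BTF) bpf_testmod.ko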

>
>  $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
>         $(call msg,CC,,$@)
> @@ -315,8 +316,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h               \
>                 linked_vars.skel.h linked_maps.skel.h
>
>  LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
> -       test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
> -       trace_vprintk.c
> +       test_ksyms_module.c test_ksyms_module_fail.c test_ksyms_module_fail_toomany.c \
> +       test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c
>  SKEL_BLACKLIST += $$(LSKELS)
>

[...]

> diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> new file mode 100644
> index 000000000000..3afa74841ae0
> --- /dev/null
> +++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> @@ -0,0 +1,48 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#ifndef __KSYMS_MODULE_UTIL_H__
> +#define __KSYMS_MODULE_UTIL_H__
> +
> +#define __KFUNC_NR_EXP(Y)                                                      \
> +Y(0) Y(1) Y(2) Y(3) Y(4) Y(5) Y(6) Y(7) Y(8) Y(9) Y(10) Y(11) Y(12)            \
> +Y(13) Y(14) Y(15) Y(16) Y(17) Y(18) Y(19) Y(20) Y(21) Y(22) Y(23)              \
> +Y(24) Y(25) Y(26) Y(27) Y(28) Y(29) Y(30) Y(31) Y(32) Y(33) Y(34)              \
> +Y(35) Y(36) Y(37) Y(38) Y(39) Y(40) Y(41) Y(42) Y(43) Y(44) Y(45)              \
> +Y(46) Y(47) Y(48) Y(49) Y(50) Y(51) Y(52) Y(53) Y(54) Y(55) Y(56)              \
> +Y(57) Y(58) Y(59) Y(60) Y(61) Y(62) Y(63) Y(64) Y(65) Y(66) Y(67)              \
> +Y(68) Y(69) Y(70) Y(71) Y(72) Y(73) Y(74) Y(75) Y(76) Y(77) Y(78)              \
> +Y(79) Y(80) Y(81) Y(82) Y(83) Y(84) Y(85) Y(86) Y(87) Y(88) Y(89)              \
> +Y(90) Y(91) Y(92) Y(93) Y(94) Y(95) Y(96) Y(97) Y(98) Y(99) Y(100)             \
> +Y(101) Y(102) Y(103) Y(104) Y(105) Y(106) Y(107) Y(108) Y(109) Y(110)          \
> +Y(111) Y(112) Y(113) Y(114) Y(115) Y(116) Y(117) Y(118) Y(119) Y(120)          \
> +Y(121) Y(122) Y(123) Y(124) Y(125) Y(126) Y(127) Y(128) Y(129) Y(130)          \
> +Y(131) Y(132) Y(133) Y(134) Y(135) Y(136) Y(137) Y(138) Y(139) Y(140)          \
> +Y(141) Y(142) Y(143) Y(144) Y(145) Y(146) Y(147) Y(148) Y(149) Y(150)          \
> +Y(151) Y(152) Y(153) Y(154) Y(155) Y(156) Y(157) Y(158) Y(159) Y(160)          \
> +Y(161) Y(162) Y(163) Y(164) Y(165) Y(166) Y(167) Y(168) Y(169) Y(170)          \
> +Y(171) Y(172) Y(173) Y(174) Y(175) Y(176) Y(177) Y(178) Y(179) Y(180)          \
> +Y(181) Y(182) Y(183) Y(184) Y(185) Y(186) Y(187) Y(188) Y(189) Y(190)          \
> +Y(191) Y(192) Y(193) Y(194) Y(195) Y(196) Y(197) Y(198) Y(199) Y(200)          \
> +Y(201) Y(202) Y(203) Y(204) Y(205) Y(206) Y(207) Y(208) Y(209) Y(210)          \
> +Y(211) Y(212) Y(213) Y(214) Y(215) Y(216) Y(217) Y(218) Y(219) Y(220)          \
> +Y(221) Y(222) Y(223) Y(224) Y(225) Y(226) Y(227) Y(228) Y(229) Y(230)          \
> +Y(231) Y(232) Y(233) Y(234) Y(235) Y(236) Y(237) Y(238) Y(239) Y(240)          \
> +Y(241) Y(242) Y(243) Y(244) Y(245) Y(246) Y(247) Y(248) Y(249) Y(250)          \
> +Y(251) Y(252) Y(253) Y(254) Y(255)
> +
> +#define __KFUNC_A(nr) bpf_testmod_test_mod_kfunc_##nr();
> +#define KFUNC_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_A)
> +
> +#define __KFUNC_B(nr) extern void bpf_testmod_test_mod_kfunc_##nr(void) __ksym;
> +#define KFUNC_KSYM_DECLARE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_B)
> +
> +#define __KFUNC_C(nr) noinline void bpf_testmod_test_mod_kfunc_##nr(void) {};
> +#define KFUNC_DEFINE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_C)
> +
> +#define __KFUNC_D(nr) BTF_ID(func, bpf_testmod_test_mod_kfunc_##nr)
> +#define KFUNC_BTF_ID_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_D)
> +
> +#define __KFUNC_E(nr) bpf_testmod_test_mod_kfunc(nr);
> +#define KFUNC_VALID_SAME_ONE __KFUNC_E(0)
> +#define KFUNC_VALID_SAME_256 __KFUNC_NR_EXP(__KFUNC_E)
> +

This is pretty horrible... Wouldn't it be better to test limits like
this using the test_verifier approach, where we can craft a *short*
sequence of instructions that will test all these limits?...


> +#endif
> --
> 2.33.0
>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 05/11] bpf: Enable TCP congestion control kfunc from modules
  2021-09-21 22:23     ` Kumar Kartikeya Dwivedi
@ 2021-09-21 23:02       ` Andrii Nakryiko
  0 siblings, 0 replies; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-21 23:02 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking, Jiri Olsa

On Tue, Sep 21, 2021 at 3:23 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Wed, Sep 22, 2021 at 03:48:55AM IST, Andrii Nakryiko wrote:
> > On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > This commit moves BTF ID lookup into the newly added registration
> > > helper, in such a way that the bbr, cubic, and dctcp implementations set up
> > > their sets in the bpf_tcp_ca kfunc_btf_set list, while the ones not
> > > dependent on modules are looked up from the wrapper function.
> > >
> > > This lifts the restriction that they be compiled as built-in objects,
> > > so they can be loaded as modules if required. Also modify Makefile.modfinal
> > > to run resolve_btfids on TCP congestion control modules if the config option
> > > is set, using the base BTF support added in the previous commit.
> > >
> > > See following commits for background on use of:
> > >
> > >  CONFIG_X86 ifdef:
> > >  569c484f9995 (bpf: Limit static tcp-cc functions in the .BTF_ids list to x86)
> > >
> > >  CONFIG_DYNAMIC_FTRACE ifdef:
> > >  7aae231ac93b (bpf: tcp: Limit calling some tcp cc functions to CONFIG_DYNAMIC_FTRACE)
> > >
> > > [ resolve_btfids uses --no-fail because some crypto kernel modules
> > >   under arch/x86/crypto generated from ASM do not have the .BTF sections ]
> > >
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > > ---
> > >  include/linux/btf.h       |  4 ++++
> > >  kernel/bpf/btf.c          |  3 +++
> > >  net/ipv4/bpf_tcp_ca.c     | 34 +++-------------------------------
> > >  net/ipv4/tcp_bbr.c        | 28 +++++++++++++++++++++++++++-
> > >  net/ipv4/tcp_cubic.c      | 26 +++++++++++++++++++++++++-
> > >  net/ipv4/tcp_dctcp.c      | 26 +++++++++++++++++++++++++-
> > >  scripts/Makefile.modfinal |  1 +
> > >  7 files changed, 88 insertions(+), 34 deletions(-)
> > >
> >
> > [...]
> >
> > > diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
> > > index ff805777431c..b4f83533eda6 100644
> > > --- a/scripts/Makefile.modfinal
> > > +++ b/scripts/Makefile.modfinal
> > > @@ -41,6 +41,7 @@ quiet_cmd_btf_ko = BTF [M] $@
> > >        cmd_btf_ko =                                                     \
> > >         if [ -f vmlinux ]; then                                         \
> > >                 LLVM_OBJCOPY="$(OBJCOPY)" $(PAHOLE) -J --btf_base vmlinux $@; \
> > > +               $(RESOLVE_BTFIDS) --no-fail -s vmlinux $@;              \
> >
> > I think I've asked that before, but I don't remember this being
> > answered. Why is this --no-fail?
> >
>
> Sorry, I missed that mail the first time, and by the time I noticed it was too
> late, so I decided to put the reason in the commit message above.
>
> > > [ resolve_btfids uses --no-fail because some crypto kernel modules
> > >   under arch/x86/crypto generated from ASM do not have the .BTF sections ]
>
> I could add a mode that only fails when processing a .BTF section that is
> present in the object fails. Would that be better?

Oh, I missed the [ ] part in the commit message. But yeah, it feels like it
shouldn't be an error if the module legitimately doesn't have a .BTF
section. Is it an error right now? cc Jiri, maybe that was intentional.

>
> --
> Kartikeya

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support
  2021-09-21 23:00   ` Andrii Nakryiko
@ 2021-09-21 23:13     ` Kumar Kartikeya Dwivedi
  2021-09-21 23:28       ` Kumar Kartikeya Dwivedi
  2021-09-22  0:03       ` Andrii Nakryiko
  0 siblings, 2 replies; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-21 23:13 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Wed, Sep 22, 2021 at 04:30:32AM IST, Andrii Nakryiko wrote:
> On Mon, Sep 20, 2021 at 7:16 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> >
> > This adds selftests that test the success and failure paths for module
> > kfuncs (in the presence of invalid kfunc calls) for both libbpf and
> > gen_loader. It also adds a prog_test kfunc_btf_id_list so that we can
> > add a module BTF ID set from bpf_testmod.
> >
> > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > ---
> >  include/linux/btf.h                           |  2 +
> >  kernel/bpf/btf.c                              |  2 +
> >  net/bpf/test_run.c                            |  5 +-
> >  tools/testing/selftests/bpf/Makefile          |  5 +-
> >  .../selftests/bpf/bpf_testmod/bpf_testmod.c   | 26 ++++++-
> >  .../selftests/bpf/prog_tests/ksyms_module.c   | 52 ++++++++++----
> >  .../bpf/prog_tests/ksyms_module_libbpf.c      | 44 ++++++++++++
> >  .../selftests/bpf/progs/test_ksyms_module.c   | 41 ++++++++---
> >  .../bpf/progs/test_ksyms_module_fail.c        | 29 ++++++++
> >  .../progs/test_ksyms_module_fail_toomany.c    | 19 +++++
> >  .../bpf/progs/test_ksyms_module_libbpf.c      | 71 +++++++++++++++++++
> >  .../bpf/progs/test_ksyms_module_util.h        | 48 +++++++++++++
> >  12 files changed, 317 insertions(+), 27 deletions(-)
> >  create mode 100644 tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
> >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> >
>
> [...]
>
> > @@ -243,7 +244,9 @@ BTF_SET_END(test_sk_kfunc_ids)
> >
> >  bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner)
> >  {
> > -       return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id);
> > +       if (btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id))
> > +               return true;
> > +       return __bpf_check_prog_test_kfunc_call(kfunc_id, owner);
> >  }
> >
> >  static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
> > diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> > index 326ea75ce99e..d20ff0563120 100644
> > --- a/tools/testing/selftests/bpf/Makefile
> > +++ b/tools/testing/selftests/bpf/Makefile
> > @@ -174,6 +174,7 @@ $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_tes
> >         $(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation
> >         $(Q)$(MAKE) $(submake_extras) -C bpf_testmod
> >         $(Q)cp bpf_testmod/bpf_testmod.ko $@
> > +       $(Q)$(RESOLVE_BTFIDS) -s ../../../../vmlinux bpf_testmod.ko
>
> $(VMLINUX_BTF) instead of "../../../../vmlinux", otherwise it will break
>
> >
> >  $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
> >         $(call msg,CC,,$@)
> > @@ -315,8 +316,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h               \
> >                 linked_vars.skel.h linked_maps.skel.h
> >
> >  LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
> > -       test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
> > -       trace_vprintk.c
> > +       test_ksyms_module.c test_ksyms_module_fail.c test_ksyms_module_fail_toomany.c \
> > +       test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c
> >  SKEL_BLACKLIST += $$(LSKELS)
> >
>
> [...]
>
> > diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > new file mode 100644
> > index 000000000000..3afa74841ae0
> > --- /dev/null
> > +++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > @@ -0,0 +1,48 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#ifndef __KSYMS_MODULE_UTIL_H__
> > +#define __KSYMS_MODULE_UTIL_H__
> > +
> > +#define __KFUNC_NR_EXP(Y)                                                      \
> > +Y(0) Y(1) Y(2) Y(3) Y(4) Y(5) Y(6) Y(7) Y(8) Y(9) Y(10) Y(11) Y(12)            \
> > +Y(13) Y(14) Y(15) Y(16) Y(17) Y(18) Y(19) Y(20) Y(21) Y(22) Y(23)              \
> > +Y(24) Y(25) Y(26) Y(27) Y(28) Y(29) Y(30) Y(31) Y(32) Y(33) Y(34)              \
> > +Y(35) Y(36) Y(37) Y(38) Y(39) Y(40) Y(41) Y(42) Y(43) Y(44) Y(45)              \
> > +Y(46) Y(47) Y(48) Y(49) Y(50) Y(51) Y(52) Y(53) Y(54) Y(55) Y(56)              \
> > +Y(57) Y(58) Y(59) Y(60) Y(61) Y(62) Y(63) Y(64) Y(65) Y(66) Y(67)              \
> > +Y(68) Y(69) Y(70) Y(71) Y(72) Y(73) Y(74) Y(75) Y(76) Y(77) Y(78)              \
> > +Y(79) Y(80) Y(81) Y(82) Y(83) Y(84) Y(85) Y(86) Y(87) Y(88) Y(89)              \
> > +Y(90) Y(91) Y(92) Y(93) Y(94) Y(95) Y(96) Y(97) Y(98) Y(99) Y(100)             \
> > +Y(101) Y(102) Y(103) Y(104) Y(105) Y(106) Y(107) Y(108) Y(109) Y(110)          \
> > +Y(111) Y(112) Y(113) Y(114) Y(115) Y(116) Y(117) Y(118) Y(119) Y(120)          \
> > +Y(121) Y(122) Y(123) Y(124) Y(125) Y(126) Y(127) Y(128) Y(129) Y(130)          \
> > +Y(131) Y(132) Y(133) Y(134) Y(135) Y(136) Y(137) Y(138) Y(139) Y(140)          \
> > +Y(141) Y(142) Y(143) Y(144) Y(145) Y(146) Y(147) Y(148) Y(149) Y(150)          \
> > +Y(151) Y(152) Y(153) Y(154) Y(155) Y(156) Y(157) Y(158) Y(159) Y(160)          \
> > +Y(161) Y(162) Y(163) Y(164) Y(165) Y(166) Y(167) Y(168) Y(169) Y(170)          \
> > +Y(171) Y(172) Y(173) Y(174) Y(175) Y(176) Y(177) Y(178) Y(179) Y(180)          \
> > +Y(181) Y(182) Y(183) Y(184) Y(185) Y(186) Y(187) Y(188) Y(189) Y(190)          \
> > +Y(191) Y(192) Y(193) Y(194) Y(195) Y(196) Y(197) Y(198) Y(199) Y(200)          \
> > +Y(201) Y(202) Y(203) Y(204) Y(205) Y(206) Y(207) Y(208) Y(209) Y(210)          \
> > +Y(211) Y(212) Y(213) Y(214) Y(215) Y(216) Y(217) Y(218) Y(219) Y(220)          \
> > +Y(221) Y(222) Y(223) Y(224) Y(225) Y(226) Y(227) Y(228) Y(229) Y(230)          \
> > +Y(231) Y(232) Y(233) Y(234) Y(235) Y(236) Y(237) Y(238) Y(239) Y(240)          \
> > +Y(241) Y(242) Y(243) Y(244) Y(245) Y(246) Y(247) Y(248) Y(249) Y(250)          \
> > +Y(251) Y(252) Y(253) Y(254) Y(255)
> > +
> > +#define __KFUNC_A(nr) bpf_testmod_test_mod_kfunc_##nr();
> > +#define KFUNC_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_A)
> > +
> > +#define __KFUNC_B(nr) extern void bpf_testmod_test_mod_kfunc_##nr(void) __ksym;
> > +#define KFUNC_KSYM_DECLARE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_B)
> > +
> > +#define __KFUNC_C(nr) noinline void bpf_testmod_test_mod_kfunc_##nr(void) {};
> > +#define KFUNC_DEFINE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_C)
> > +
> > +#define __KFUNC_D(nr) BTF_ID(func, bpf_testmod_test_mod_kfunc_##nr)
> > +#define KFUNC_BTF_ID_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_D)
> > +
> > +#define __KFUNC_E(nr) bpf_testmod_test_mod_kfunc(nr);
> > +#define KFUNC_VALID_SAME_ONE __KFUNC_E(0)
> > +#define KFUNC_VALID_SAME_256 __KFUNC_NR_EXP(__KFUNC_E)
> > +
>
> This is pretty horrible... Wouldn't it be better to test limits like

Yeah, I actually thought about this a bit yesterday; we could also do
(untested):

#define X_0(x)
#define X_1(x) x X_0(x)
#define X_2(x) x X_1(x)
#define X_3(x) x X_2(x)
#define X_4(x) x X_3(x)
#define X_5(x) x X_4(x)
#define X_6(x) x X_5(x)
#define X_7(x) x X_6(x)
#define X_8(x) x X_7(x)
#define X_9(x) x X_8(x)
#define X_10(x) x X_9(x)

Then, for generating 256 items (2*10*10 + 5*10 + 6 = 256):

X_2(X_10(X_10(foo))) X_5(X_10(foo)) X_6(foo)

... which looks much better.

> this using the test_verifier approach, where we can craft a *short*
> sequence of instructions that will test all these limits?...
>

Hmm, good idea. I'd just need to fill in the BTF id dynamically at runtime,
but that should be possible.

Though we still need to craft distinct calls (I am trying to test the limit
where insn->off is different for each case). Since we try to reuse the index in
both gen_loader and libbpf, just generating the same call 256 times would not
be enough.
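
For the test_verifier route, something like this is what I imagine (a rough
sketch; the fixup mechanism for patching the module BTF id and off at load
time is assumed here, it does not exist in test_verifier yet):

        {
                "kfunc call into module BTF",
                .insns = {
                /* off picks the module BTF out of fd_array, imm is the kfunc
                 * BTF id; both would be patched in at runtime by the
                 * (assumed) fixup
                 */
                BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 1, 0),
                BPF_MOV64_IMM(BPF_REG_0, 0),
                BPF_EXIT_INSN(),
                },
                .prog_type = BPF_PROG_TYPE_SCHED_CLS,
                .result = ACCEPT,
        },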

Let me know which one of the two you prefer.

>
> > +#endif
> > --
> > 2.33.0
> >

--
Kartikeya

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support
  2021-09-21 23:13     ` Kumar Kartikeya Dwivedi
@ 2021-09-21 23:28       ` Kumar Kartikeya Dwivedi
  2021-09-22  0:03       ` Andrii Nakryiko
  1 sibling, 0 replies; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-21 23:28 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Wed, Sep 22, 2021 at 04:43:20AM IST, Kumar Kartikeya Dwivedi wrote:
> On Wed, Sep 22, 2021 at 04:30:32AM IST, Andrii Nakryiko wrote:
> > On Mon, Sep 20, 2021 at 7:16 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > This adds selftests that test the success and failure paths for module
> > > kfuncs (in the presence of invalid kfunc calls) for both libbpf and
> > > gen_loader. It also adds a prog_test kfunc_btf_id_list so that we can
> > > add a module BTF ID set from bpf_testmod.
> > >
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > > ---
> > >  include/linux/btf.h                           |  2 +
> > >  kernel/bpf/btf.c                              |  2 +
> > >  net/bpf/test_run.c                            |  5 +-
> > >  tools/testing/selftests/bpf/Makefile          |  5 +-
> > >  .../selftests/bpf/bpf_testmod/bpf_testmod.c   | 26 ++++++-
> > >  .../selftests/bpf/prog_tests/ksyms_module.c   | 52 ++++++++++----
> > >  .../bpf/prog_tests/ksyms_module_libbpf.c      | 44 ++++++++++++
> > >  .../selftests/bpf/progs/test_ksyms_module.c   | 41 ++++++++---
> > >  .../bpf/progs/test_ksyms_module_fail.c        | 29 ++++++++
> > >  .../progs/test_ksyms_module_fail_toomany.c    | 19 +++++
> > >  .../bpf/progs/test_ksyms_module_libbpf.c      | 71 +++++++++++++++++++
> > >  .../bpf/progs/test_ksyms_module_util.h        | 48 +++++++++++++
> > >  12 files changed, 317 insertions(+), 27 deletions(-)
> > >  create mode 100644 tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > >
> >
> > [...]
> >
> > > @@ -243,7 +244,9 @@ BTF_SET_END(test_sk_kfunc_ids)
> > >
> > >  bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner)
> > >  {
> > > -       return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id);
> > > +       if (btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id))
> > > +               return true;
> > > +       return __bpf_check_prog_test_kfunc_call(kfunc_id, owner);
> > >  }
> > >
> > >  static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
> > > diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> > > index 326ea75ce99e..d20ff0563120 100644
> > > --- a/tools/testing/selftests/bpf/Makefile
> > > +++ b/tools/testing/selftests/bpf/Makefile
> > > @@ -174,6 +174,7 @@ $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_tes
> > >         $(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation
> > >         $(Q)$(MAKE) $(submake_extras) -C bpf_testmod
> > >         $(Q)cp bpf_testmod/bpf_testmod.ko $@
> > > +       $(Q)$(RESOLVE_BTFIDS) -s ../../../../vmlinux bpf_testmod.ko
> >
> > $(VMLINUX_BTF) instead of "../../../../vmlinux", otherwise it will break
> >
> > >
> > >  $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
> > >         $(call msg,CC,,$@)
> > > @@ -315,8 +316,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h               \
> > >                 linked_vars.skel.h linked_maps.skel.h
> > >
> > >  LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
> > > -       test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
> > > -       trace_vprintk.c
> > > +       test_ksyms_module.c test_ksyms_module_fail.c test_ksyms_module_fail_toomany.c \
> > > +       test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c
> > >  SKEL_BLACKLIST += $$(LSKELS)
> > >
> >
> > [...]
> >
> > > diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > > new file mode 100644
> > > index 000000000000..3afa74841ae0
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > > @@ -0,0 +1,48 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +#ifndef __KSYMS_MODULE_UTIL_H__
> > > +#define __KSYMS_MODULE_UTIL_H__
> > > +
> > > +#define __KFUNC_NR_EXP(Y)                                                      \
> > > +Y(0) Y(1) Y(2) Y(3) Y(4) Y(5) Y(6) Y(7) Y(8) Y(9) Y(10) Y(11) Y(12)            \
> > > +Y(13) Y(14) Y(15) Y(16) Y(17) Y(18) Y(19) Y(20) Y(21) Y(22) Y(23)              \
> > > +Y(24) Y(25) Y(26) Y(27) Y(28) Y(29) Y(30) Y(31) Y(32) Y(33) Y(34)              \
> > > +Y(35) Y(36) Y(37) Y(38) Y(39) Y(40) Y(41) Y(42) Y(43) Y(44) Y(45)              \
> > > +Y(46) Y(47) Y(48) Y(49) Y(50) Y(51) Y(52) Y(53) Y(54) Y(55) Y(56)              \
> > > +Y(57) Y(58) Y(59) Y(60) Y(61) Y(62) Y(63) Y(64) Y(65) Y(66) Y(67)              \
> > > +Y(68) Y(69) Y(70) Y(71) Y(72) Y(73) Y(74) Y(75) Y(76) Y(77) Y(78)              \
> > > +Y(79) Y(80) Y(81) Y(82) Y(83) Y(84) Y(85) Y(86) Y(87) Y(88) Y(89)              \
> > > +Y(90) Y(91) Y(92) Y(93) Y(94) Y(95) Y(96) Y(97) Y(98) Y(99) Y(100)             \
> > > +Y(101) Y(102) Y(103) Y(104) Y(105) Y(106) Y(107) Y(108) Y(109) Y(110)          \
> > > +Y(111) Y(112) Y(113) Y(114) Y(115) Y(116) Y(117) Y(118) Y(119) Y(120)          \
> > > +Y(121) Y(122) Y(123) Y(124) Y(125) Y(126) Y(127) Y(128) Y(129) Y(130)          \
> > > +Y(131) Y(132) Y(133) Y(134) Y(135) Y(136) Y(137) Y(138) Y(139) Y(140)          \
> > > +Y(141) Y(142) Y(143) Y(144) Y(145) Y(146) Y(147) Y(148) Y(149) Y(150)          \
> > > +Y(151) Y(152) Y(153) Y(154) Y(155) Y(156) Y(157) Y(158) Y(159) Y(160)          \
> > > +Y(161) Y(162) Y(163) Y(164) Y(165) Y(166) Y(167) Y(168) Y(169) Y(170)          \
> > > +Y(171) Y(172) Y(173) Y(174) Y(175) Y(176) Y(177) Y(178) Y(179) Y(180)          \
> > > +Y(181) Y(182) Y(183) Y(184) Y(185) Y(186) Y(187) Y(188) Y(189) Y(190)          \
> > > +Y(191) Y(192) Y(193) Y(194) Y(195) Y(196) Y(197) Y(198) Y(199) Y(200)          \
> > > +Y(201) Y(202) Y(203) Y(204) Y(205) Y(206) Y(207) Y(208) Y(209) Y(210)          \
> > > +Y(211) Y(212) Y(213) Y(214) Y(215) Y(216) Y(217) Y(218) Y(219) Y(220)          \
> > > +Y(221) Y(222) Y(223) Y(224) Y(225) Y(226) Y(227) Y(228) Y(229) Y(230)          \
> > > +Y(231) Y(232) Y(233) Y(234) Y(235) Y(236) Y(237) Y(238) Y(239) Y(240)          \
> > > +Y(241) Y(242) Y(243) Y(244) Y(245) Y(246) Y(247) Y(248) Y(249) Y(250)          \
> > > +Y(251) Y(252) Y(253) Y(254) Y(255)
> > > +
> > > +#define __KFUNC_A(nr) bpf_testmod_test_mod_kfunc_##nr();
> > > +#define KFUNC_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_A)
> > > +
> > > +#define __KFUNC_B(nr) extern void bpf_testmod_test_mod_kfunc_##nr(void) __ksym;
> > > +#define KFUNC_KSYM_DECLARE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_B)
> > > +
> > > +#define __KFUNC_C(nr) noinline void bpf_testmod_test_mod_kfunc_##nr(void) {};
> > > +#define KFUNC_DEFINE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_C)
> > > +
> > > +#define __KFUNC_D(nr) BTF_ID(func, bpf_testmod_test_mod_kfunc_##nr)
> > > +#define KFUNC_BTF_ID_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_D)
> > > +
> > > +#define __KFUNC_E(nr) bpf_testmod_test_mod_kfunc(nr);
> > > +#define KFUNC_VALID_SAME_ONE __KFUNC_E(0)
> > > +#define KFUNC_VALID_SAME_256 __KFUNC_NR_EXP(__KFUNC_E)
> > > +
> >
> > This is pretty horrible... Wouldn't it be better to test limits like
>
> Yeah, I actually thought about this a bit yesterday, we could also do:
> (untested)
>
> #define X_0(x)
> #define X_1(x) x X_0(x)
> #define X_2(x) x X_1(x)
> #define X_3(x) x X_2(x)
> #define X_4(x) x X_3(x)
> #define X_5(x) x X_4(x)
> #define X_6(x) x X_5(x)
> #define X_7(x) x X_6(x)
> #define X_8(x) x X_7(x)
> #define X_9(x) x X_8(x)
> #define X_10(x) x X_9(x)
>
> Then, for generating 256 items
>
> X_2(X_10(X_10(foo))) X_5(X_10(foo)) X_6(foo)
>

Hmm, one problem with this is that I cannot find a good way to attach an
incrementing token to paste into each expansion so that they become foo0,
foo1, ...; it only works well for the simple case.

Unless someone knows a trick to make that work somehow, I guess test_verifier is
the only option.
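
One possibility I can think of (untested) is hierarchical digit pasting, where
each macro level pastes one more digit onto the prefix before handing it down,
so every expansion ends in a distinct token. It would generate zero-padded
names like bpf_testmod_test_mod_kfunc_000, so the testmod definitions and the
BTF ID entries would have to use the same spelling:

#define __EXP10(Y, p) Y(p##0) Y(p##1) Y(p##2) Y(p##3) Y(p##4) \
                      Y(p##5) Y(p##6) Y(p##7) Y(p##8) Y(p##9)
#define __EXP100(Y, p) __EXP10(Y, p##0) __EXP10(Y, p##1) __EXP10(Y, p##2) \
                       __EXP10(Y, p##3) __EXP10(Y, p##4) __EXP10(Y, p##5) \
                       __EXP10(Y, p##6) __EXP10(Y, p##7) __EXP10(Y, p##8) \
                       __EXP10(Y, p##9)
/* _000 .. _255: two full hundreds, five tens, six singles */
#define __KFUNC_NR_EXP(Y) __EXP100(Y, _0) __EXP100(Y, _1) \
        __EXP10(Y, _20) __EXP10(Y, _21) __EXP10(Y, _22) __EXP10(Y, _23) \
        __EXP10(Y, _24) Y(_250) Y(_251) Y(_252) Y(_253) Y(_254) Y(_255)

#define __KFUNC_A(nr) bpf_testmod_test_mod_kfunc##nr();

Since p##N pastes before the rescan, by the time __KFUNC_A expands, nr is
already a complete token like _042.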

> ... which looks much better.
>
> > this using the test_verifier approach, where we can craft a *short*
> > sequence of instructions that will test all these limits?...
> >
>
> Hmm, good idea, I'd just need to fill in the BTF id dynamically at runtime,
> but that should be possible.
>
> Though we still need to craft distinct calls (I am trying to test the limit
> where insn->off is different for each case). Since we try to reuse the index
> in both gen_loader and libbpf, just generating the same call 256 times would
> not be enough.
>
> Let me know which one of the two you prefer.
>
> >
> > > +#endif
> > > --
> > > 2.33.0
> > >
>
> --
> Kartikeya

--
Kartikeya


* Re: [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support
  2021-09-21 23:13     ` Kumar Kartikeya Dwivedi
  2021-09-21 23:28       ` Kumar Kartikeya Dwivedi
@ 2021-09-22  0:03       ` Andrii Nakryiko
  2021-09-22  6:06         ` Kumar Kartikeya Dwivedi
  1 sibling, 1 reply; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-22  0:03 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Tue, Sep 21, 2021 at 4:13 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Wed, Sep 22, 2021 at 04:30:32AM IST, Andrii Nakryiko wrote:
> > On Mon, Sep 20, 2021 at 7:16 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > >
> > > This adds selftests that tests the success and failure path for modules
> > > kfuncs (in presence of invalid kfunc calls) for both libbpf and
> > > gen_loader. It also adds a prog_test kfunc_btf_id_list so that we can
> > > add module BTF ID set from bpf_testmod.
> > >
> > > Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
> > > ---
> > >  include/linux/btf.h                           |  2 +
> > >  kernel/bpf/btf.c                              |  2 +
> > >  net/bpf/test_run.c                            |  5 +-
> > >  tools/testing/selftests/bpf/Makefile          |  5 +-
> > >  .../selftests/bpf/bpf_testmod/bpf_testmod.c   | 26 ++++++-
> > >  .../selftests/bpf/prog_tests/ksyms_module.c   | 52 ++++++++++----
> > >  .../bpf/prog_tests/ksyms_module_libbpf.c      | 44 ++++++++++++
> > >  .../selftests/bpf/progs/test_ksyms_module.c   | 41 ++++++++---
> > >  .../bpf/progs/test_ksyms_module_fail.c        | 29 ++++++++
> > >  .../progs/test_ksyms_module_fail_toomany.c    | 19 +++++
> > >  .../bpf/progs/test_ksyms_module_libbpf.c      | 71 +++++++++++++++++++
> > >  .../bpf/progs/test_ksyms_module_util.h        | 48 +++++++++++++
> > >  12 files changed, 317 insertions(+), 27 deletions(-)
> > >  create mode 100644 tools/testing/selftests/bpf/prog_tests/ksyms_module_libbpf.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_fail_toomany.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_libbpf.c
> > >  create mode 100644 tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > >
> >
> > [...]
> >
> > > @@ -243,7 +244,9 @@ BTF_SET_END(test_sk_kfunc_ids)
> > >
> > >  bool bpf_prog_test_check_kfunc_call(u32 kfunc_id, struct module *owner)
> > >  {
> > > -       return btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id);
> > > +       if (btf_id_set_contains(&test_sk_kfunc_ids, kfunc_id))
> > > +               return true;
> > > +       return __bpf_check_prog_test_kfunc_call(kfunc_id, owner);
> > >  }
> > >
> > >  static void *bpf_test_init(const union bpf_attr *kattr, u32 size,
> > > diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
> > > index 326ea75ce99e..d20ff0563120 100644
> > > --- a/tools/testing/selftests/bpf/Makefile
> > > +++ b/tools/testing/selftests/bpf/Makefile
> > > @@ -174,6 +174,7 @@ $(OUTPUT)/bpf_testmod.ko: $(VMLINUX_BTF) $(wildcard bpf_testmod/Makefile bpf_tes
> > >         $(Q)$(RM) bpf_testmod/bpf_testmod.ko # force re-compilation
> > >         $(Q)$(MAKE) $(submake_extras) -C bpf_testmod
> > >         $(Q)cp bpf_testmod/bpf_testmod.ko $@
> > > +       $(Q)$(RESOLVE_BTFIDS) -s ../../../../vmlinux bpf_testmod.ko
> >
> > Use $(VMLINUX_BTF) instead of "../../../../vmlinux"; the hard-coded
> > relative path will break otherwise.
> >
> > >
> > >  $(OUTPUT)/test_stub.o: test_stub.c $(BPFOBJ)
> > >         $(call msg,CC,,$@)
> > > @@ -315,8 +316,8 @@ LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h               \
> > >                 linked_vars.skel.h linked_maps.skel.h
> > >
> > >  LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
> > > -       test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c \
> > > -       trace_vprintk.c
> > > +       test_ksyms_module.c test_ksyms_module_fail.c test_ksyms_module_fail_toomany.c \
> > > +       test_ringbuf.c atomics.c trace_printk.c trace_vprintk.c
> > >  SKEL_BLACKLIST += $$(LSKELS)
> > >
> >
> > [...]
> >
> > > diff --git a/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > > new file mode 100644
> > > index 000000000000..3afa74841ae0
> > > --- /dev/null
> > > +++ b/tools/testing/selftests/bpf/progs/test_ksyms_module_util.h
> > > @@ -0,0 +1,48 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +#ifndef __KSYMS_MODULE_UTIL_H__
> > > +#define __KSYMS_MODULE_UTIL_H__
> > > +
> > > +#define __KFUNC_NR_EXP(Y)                                                      \
> > > +Y(0) Y(1) Y(2) Y(3) Y(4) Y(5) Y(6) Y(7) Y(8) Y(9) Y(10) Y(11) Y(12)            \
> > > +Y(13) Y(14) Y(15) Y(16) Y(17) Y(18) Y(19) Y(20) Y(21) Y(22) Y(23)              \
> > > +Y(24) Y(25) Y(26) Y(27) Y(28) Y(29) Y(30) Y(31) Y(32) Y(33) Y(34)              \
> > > +Y(35) Y(36) Y(37) Y(38) Y(39) Y(40) Y(41) Y(42) Y(43) Y(44) Y(45)              \
> > > +Y(46) Y(47) Y(48) Y(49) Y(50) Y(51) Y(52) Y(53) Y(54) Y(55) Y(56)              \
> > > +Y(57) Y(58) Y(59) Y(60) Y(61) Y(62) Y(63) Y(64) Y(65) Y(66) Y(67)              \
> > > +Y(68) Y(69) Y(70) Y(71) Y(72) Y(73) Y(74) Y(75) Y(76) Y(77) Y(78)              \
> > > +Y(79) Y(80) Y(81) Y(82) Y(83) Y(84) Y(85) Y(86) Y(87) Y(88) Y(89)              \
> > > +Y(90) Y(91) Y(92) Y(93) Y(94) Y(95) Y(96) Y(97) Y(98) Y(99) Y(100)             \
> > > +Y(101) Y(102) Y(103) Y(104) Y(105) Y(106) Y(107) Y(108) Y(109) Y(110)          \
> > > +Y(111) Y(112) Y(113) Y(114) Y(115) Y(116) Y(117) Y(118) Y(119) Y(120)          \
> > > +Y(121) Y(122) Y(123) Y(124) Y(125) Y(126) Y(127) Y(128) Y(129) Y(130)          \
> > > +Y(131) Y(132) Y(133) Y(134) Y(135) Y(136) Y(137) Y(138) Y(139) Y(140)          \
> > > +Y(141) Y(142) Y(143) Y(144) Y(145) Y(146) Y(147) Y(148) Y(149) Y(150)          \
> > > +Y(151) Y(152) Y(153) Y(154) Y(155) Y(156) Y(157) Y(158) Y(159) Y(160)          \
> > > +Y(161) Y(162) Y(163) Y(164) Y(165) Y(166) Y(167) Y(168) Y(169) Y(170)          \
> > > +Y(171) Y(172) Y(173) Y(174) Y(175) Y(176) Y(177) Y(178) Y(179) Y(180)          \
> > > +Y(181) Y(182) Y(183) Y(184) Y(185) Y(186) Y(187) Y(188) Y(189) Y(190)          \
> > > +Y(191) Y(192) Y(193) Y(194) Y(195) Y(196) Y(197) Y(198) Y(199) Y(200)          \
> > > +Y(201) Y(202) Y(203) Y(204) Y(205) Y(206) Y(207) Y(208) Y(209) Y(210)          \
> > > +Y(211) Y(212) Y(213) Y(214) Y(215) Y(216) Y(217) Y(218) Y(219) Y(220)          \
> > > +Y(221) Y(222) Y(223) Y(224) Y(225) Y(226) Y(227) Y(228) Y(229) Y(230)          \
> > > +Y(231) Y(232) Y(233) Y(234) Y(235) Y(236) Y(237) Y(238) Y(239) Y(240)          \
> > > +Y(241) Y(242) Y(243) Y(244) Y(245) Y(246) Y(247) Y(248) Y(249) Y(250)          \
> > > +Y(251) Y(252) Y(253) Y(254) Y(255)
> > > +
> > > +#define __KFUNC_A(nr) bpf_testmod_test_mod_kfunc_##nr();
> > > +#define KFUNC_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_A)
> > > +
> > > +#define __KFUNC_B(nr) extern void bpf_testmod_test_mod_kfunc_##nr(void) __ksym;
> > > +#define KFUNC_KSYM_DECLARE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_B)
> > > +
> > > +#define __KFUNC_C(nr) noinline void bpf_testmod_test_mod_kfunc_##nr(void) {};
> > > +#define KFUNC_DEFINE_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_C)
> > > +
> > > +#define __KFUNC_D(nr) BTF_ID(func, bpf_testmod_test_mod_kfunc_##nr)
> > > +#define KFUNC_BTF_ID_VALID_DISTINCT_256 __KFUNC_NR_EXP(__KFUNC_D)
> > > +
> > > +#define __KFUNC_E(nr) bpf_testmod_test_mod_kfunc(nr);
> > > +#define KFUNC_VALID_SAME_ONE __KFUNC_E(0)
> > > +#define KFUNC_VALID_SAME_256 __KFUNC_NR_EXP(__KFUNC_E)
> > > +
> >
> > This is pretty horrible... Wouldn't it be better to test limits like
>
> Yeah, I actually thought about this a bit yesterday, we could also do:
> (untested)
>
> #define X_0(x)
> #define X_1(x) x X_0(x)
> #define X_2(x) x X_1(x)
> #define X_3(x) x X_2(x)
> #define X_4(x) x X_3(x)
> #define X_5(x) x X_4(x)
> #define X_6(x) x X_5(x)
> #define X_7(x) x X_6(x)
> #define X_8(x) x X_7(x)
> #define X_9(x) x X_8(x)
> #define X_10(x) x X_9(x)
>
> Then, for generating 256 items
>
> X_2(X_10(X_10(foo))) X_5(X_10(foo)) X_6(foo)
>
> ... which looks much better.
>
> > this using the test_verifier approach, where we can craft a *short*
> > sequence of instructions that will test all these limits?...
> >
>
> Hmm, good idea, I'd just need to fill in the BTF id dynamically at runtime,
> but that should be possible.
>
> Though we still need to craft distinct calls (I am trying to test the limit
> where insn->off is different for each case). Since we try to reuse the index
> in both gen_loader and libbpf, just generating the same call 256 times would
> not be enough.

You just need to generate one instruction with offset = 257 to test
this. And separately one call with fd_array that has module BTF fd at
fd_array[256] (to check that 256 is ok). Or am I missing something?

>
> Let me know which one of the two you prefer.
>
> >
> > > +#endif
> > > --
> > > 2.33.0
> > >
>
> --
> Kartikeya


* Re: [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support
  2021-09-22  0:03       ` Andrii Nakryiko
@ 2021-09-22  6:06         ` Kumar Kartikeya Dwivedi
  2021-09-22 18:24           ` Alexei Starovoitov
  0 siblings, 1 reply; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-22  6:06 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Wed, Sep 22, 2021 at 05:33:26AM IST, Andrii Nakryiko wrote:
> > [...]
> > Hmm, good idea, I'd just need to fill in the BTF id dynamically at runtime,
> > but that should be possible.
> >
> > Though we still need to craft distinct calls (I am trying to test the limit
> > where insn->off is different for each case). Since we try to reuse the index
> > in both gen_loader and libbpf, just generating the same call 256 times would
> > not be enough.
>
> You just need to generate one instruction with offset = 257 to test
> this. And separately one call with fd_array that has module BTF fd at
> fd_array[256] (to check that 256 is ok). Or am I missing something?
>

That won't be enough: if I just pass insn->imm = id, insn->off = 257, it becomes
the first descriptor in kfunc_tab and kfunc_btf_tab. The total limit is 256, and
entries are kept in sorted order based on both id and off for the first table,
and on off for the second. So 256 different offs are needed (imm may actually be
the same) so that both tables fill up.
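
To illustrate, a rough sketch of how a test could fill both tables (untested;
it assumes the encoding from this series, where insn->off indexes the module
BTF fd in fd_array and insn->imm is the kfunc BTF id patched in at runtime):

	struct bpf_insn insns[258];
	int i;

	/* 256 calls with the same imm but distinct offs fill kfunc_btf_tab
	 * (and kfunc_tab) to the 256-entry limit; one more distinct off
	 * would then be rejected.
	 */
	for (i = 0; i < 256; i++)
		insns[i] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0,
					BPF_PSEUDO_KFUNC_CALL, i + 1, btf_id);
	insns[256] = BPF_MOV64_IMM(BPF_REG_0, 0);
	insns[257] = BPF_EXIT_INSN();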

--
Kartikeya


* Re: [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support
  2021-09-22  6:06         ` Kumar Kartikeya Dwivedi
@ 2021-09-22 18:24           ` Alexei Starovoitov
  0 siblings, 0 replies; 31+ messages in thread
From: Alexei Starovoitov @ 2021-09-22 18:24 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: Andrii Nakryiko, bpf, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Tue, Sep 21, 2021 at 11:06 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Wed, Sep 22, 2021 at 05:33:26AM IST, Andrii Nakryiko wrote:
> > > [...]
> > > Hmm, good idea, I'd just need to fill in the BTF id dynamically at runtime,
> > > but that should be possible.
> > >
> > > Though we still need to craft distinct calls (I am trying to test the limit
> > > where insn->off is different for each case). Since we try to reuse the index
> > > in both gen_loader and libbpf, just generating the same call 256 times would
> > > not be enough.
> >
> > You just need to generate one instruction with offset = 257 to test
> > this. And separately one call with fd_array that has module BTF fd at
> > fd_array[256] (to check that 256 is ok). Or am I missing something?
> >
>
> That won't be enough: if I just pass insn->imm = id, insn->off = 257, it becomes
> the first descriptor in kfunc_tab and kfunc_btf_tab. The total limit is 256, and
> entries are kept in sorted order based on both id and off for the first table,
> and on off for the second. So 256 different offs are needed (imm may actually be
> the same) so that both tables fill up.

Just to test the 256 limit? I don't think it's necessary.
afaik there is no test that exercises the 64 map limit.


* Re: [PATCH bpf-next v4 06/11] libbpf: Support kernel module function calls
  2021-09-21 22:41   ` Andrii Nakryiko
@ 2021-09-24 23:54     ` Kumar Kartikeya Dwivedi
  2021-09-25  0:30       ` Andrii Nakryiko
  0 siblings, 1 reply; 31+ messages in thread
From: Kumar Kartikeya Dwivedi @ 2021-09-24 23:54 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Wed, Sep 22, 2021 at 04:11:13AM IST, Andrii Nakryiko wrote:
> On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> <memxor@gmail.com> wrote:
> > [...]
> > +                       return -E2BIG;
> > +               }
> > +               ext->ksym.offset = index;
>
> > +       } else {
> > +               ext->ksym.offset = 0;
> >         }
>
> I think it will be cleaner if you move the entire offset determination
> logic after all the other checks are performed and ext is mostly
> populated. That will also make the logic shorter and simpler because
> if you find a kern_btf_fd match, you can exit early (or probably rather

Ack to everything else (including the other mail), but...

> goto to report the match and exit). Otherwise
>

This sentence got eaten up.

> >
> >         kern_func = btf__type_by_id(kern_btf, kfunc_id);
>
> this is actually extremely wasteful for module BTFs. Let's add
> internal (at least for now) helper that will search only for "own" BTF
> types in the BTF, skipping types in base BTF. Something like
> btf_type_by_id_own()?
>

Just to make sure I am not misunderstanding: I don't see where this is wasteful.
btf_type_by_id doesn't seem to search anything; it just returns a pointer into
the base BTF if kfunc_id < btf->start_id, otherwise into the module BTF.
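
FWIW, the split-BTF lookup is roughly just this (a simplified paraphrase of
the libbpf internals, omitting the type_id == 0 / void case):

	struct btf_type *btf_type_by_id(struct btf *btf, __u32 type_id)
	{
		if (type_id < btf->start_id)
			return btf_type_by_id(btf->base_btf, type_id);
		return btf->types_data + btf->type_offs[type_id - btf->start_id];
	}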

What am I missing? I guess the 'kern_btf' name was the source of confusion? If
so, I'll rename it.

Thanks.

--
Kartikeya


* Re: [PATCH bpf-next v4 06/11] libbpf: Support kernel module function calls
  2021-09-24 23:54     ` Kumar Kartikeya Dwivedi
@ 2021-09-25  0:30       ` Andrii Nakryiko
  0 siblings, 0 replies; 31+ messages in thread
From: Andrii Nakryiko @ 2021-09-25  0:30 UTC (permalink / raw)
  To: Kumar Kartikeya Dwivedi
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song,
	Jesper Dangaard Brouer, Toke Høiland-Jørgensen,
	Networking

On Fri, Sep 24, 2021 at 4:54 PM Kumar Kartikeya Dwivedi
<memxor@gmail.com> wrote:
>
> On Wed, Sep 22, 2021 at 04:11:13AM IST, Andrii Nakryiko wrote:
> > On Mon, Sep 20, 2021 at 7:15 AM Kumar Kartikeya Dwivedi
> > <memxor@gmail.com> wrote:
> > > [...]
> > > +                       return -E2BIG;
> > > +               }
> > > +               ext->ksym.offset = index;
> >
> > > +       } else {
> > > +               ext->ksym.offset = 0;
> > >         }
> >
> > I think it will be cleaner if you move the entire offset determination
> > logic after all the other checks are performed and ext is mostly
> > populated. That will also make the logic shorter and simpler because
> > > if you find a kern_btf_fd match, you can exit early (or probably rather
>
> Ack to everything else (including the other mail), but...
>
> > goto to report the match and exit). Otherwise
> >
>
> This sentence got eaten up.

No idea what I was going to say here, sorry... Sometimes Gmail UI
glitches with undo/redo, maybe that's what happened here. Doesn't
matter, ignore the "Otherwise" part.

>
> > >
> > >         kern_func = btf__type_by_id(kern_btf, kfunc_id);
> >
> > this is actually extremely wasteful for module BTFs. Let's add
> > internal (at least for now) helper that will search only for "own" BTF
> > types in the BTF, skipping types in base BTF. Something like
> > btf_type_by_id_own()?
> >
>
> Just to make sure I am not misunderstanding: I don't see where this is wasteful.
> btf_type_by_id doesn't seem to search anything; it just returns a pointer into
> the base BTF if kfunc_id < btf->start_id, otherwise into the module BTF.
>

Hm, sorry... Right sentiment and thought, but wrong piece of code to
quote it on.

I had in mind the btf__find_by_name_kind() use in find_ksym_btf_id().
Once we start going over each module, we shouldn't be re-checking
vmlinux BTF when doing btf__find_by_name_kind. It should only check
the types that each module BTF adds on top of vmlinux BTF. That's what
would be good to optimize, especially as more complicated BPF programs
will start using more ksym vars and funcs.
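
Something along these lines is what I had in mind (a hypothetical sketch only,
with the name and details made up; it would have to live in btf.c since it
needs the internal struct btf):

static __s32 btf_find_by_name_kind_own(const struct btf *btf,
				       const char *name, __u32 kind)
{
	__u32 i, nr_types = btf__get_nr_types(btf);
	const struct btf_type *t;
	const char *tname;

	/* start at btf->start_id to skip all base (vmlinux) BTF types */
	for (i = btf->start_id; i <= nr_types; i++) {
		t = btf__type_by_id(btf, i);
		if (btf_kind(t) != kind)
			continue;
		tname = btf__name_by_offset(btf, t->name_off);
		if (tname && strcmp(tname, name) == 0)
			return i;
	}
	return -ENOENT;
}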


> What am I missing? I guess the 'kern_btf' name was the source of confusion? If
> so, I'll rename it.
>
> Thanks.
>
> --
> Kartikeya


end of thread

Thread overview: 31+ messages
2021-09-20 14:15 [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 01/11] bpf: Introduce BPF support for kernel module function calls Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 02/11] bpf: Be conservative while processing invalid kfunc calls Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 03/11] bpf: btf: Introduce helpers for dynamic BTF set registration Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 04/11] tools: Allow specifying base BTF file in resolve_btfids Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 05/11] bpf: Enable TCP congestion control kfunc from modules Kumar Kartikeya Dwivedi
2021-09-21 22:18   ` Andrii Nakryiko
2021-09-21 22:23     ` Kumar Kartikeya Dwivedi
2021-09-21 23:02       ` Andrii Nakryiko
2021-09-20 14:15 ` [PATCH bpf-next v4 06/11] libbpf: Support kernel module function calls Kumar Kartikeya Dwivedi
2021-09-21 22:41   ` Andrii Nakryiko
2021-09-24 23:54     ` Kumar Kartikeya Dwivedi
2021-09-25  0:30       ` Andrii Nakryiko
2021-09-20 14:15 ` [PATCH bpf-next v4 07/11] libbpf: Resolve invalid weak kfunc calls with imm = 0, off = 0 Kumar Kartikeya Dwivedi
2021-09-21 22:47   ` Andrii Nakryiko
2021-09-20 14:15 ` [PATCH bpf-next v4 08/11] libbpf: Update gen_loader to emit BTF_KIND_FUNC relocations Kumar Kartikeya Dwivedi
2021-09-21  0:57   ` Alexei Starovoitov
2021-09-21  4:50     ` Kumar Kartikeya Dwivedi
2021-09-21 19:04       ` Alexei Starovoitov
2021-09-21 22:25         ` Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 09/11] tools: bpftool: Add separate fd_array map support for light skeleton Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 10/11] libbpf: Fix skel_internal.h to set errno on loader retval < 0 Kumar Kartikeya Dwivedi
2021-09-20 14:15 ` [PATCH bpf-next v4 11/11] bpf: selftests: Add selftests for module kfunc support Kumar Kartikeya Dwivedi
2021-09-21 23:00   ` Andrii Nakryiko
2021-09-21 23:13     ` Kumar Kartikeya Dwivedi
2021-09-21 23:28       ` Kumar Kartikeya Dwivedi
2021-09-22  0:03       ` Andrii Nakryiko
2021-09-22  6:06         ` Kumar Kartikeya Dwivedi
2021-09-22 18:24           ` Alexei Starovoitov
2021-09-21  0:29 ` [PATCH bpf-next v4 00/11] Support kernel module function calls from eBPF Andrii Nakryiko
2021-09-21  4:55   ` Kumar Kartikeya Dwivedi
