* [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team

Support __arg_ctx global function argument tag semantics even on older kernels
that don't natively support it through btf_decl_tag("arg:ctx").
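
For reference, a minimal sketch of what this looks like at the source
level (program and function names are illustrative; __arg_ctx itself comes
from bpf_helpers.h and expands to __attribute__((btf_decl_tag("arg:ctx")))):

  __noinline int my_global_subprog(void *ctx __arg_ctx)
  {
  	/* verified as PTR_TO_CTX despite the void * declaration */
  	return 0;
  }

  SEC("raw_tp")
  int my_main_prog(void *ctx)
  {
  	return my_global_subprog(ctx);
  }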

Patch #1 does a bunch of renaming to make internal function naming
consistent. We were doing it lazily up until now, but mixing single- and
double-underscored names is confusing, so let's bite the bullet and get
the rename over the finish line in one go.

Patches #3-#7 are preparatory work that allows postponing loading BTF into
the kernel until after all the BPF program relocations (including appending
global subprogs to main programs) are done. Patch #5 is perhaps the most
important: it establishes pre-created stable placeholder FDs, so that
relocations can embed valid map FDs into ldimm64 instructions.
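
The gist of the placeholder FD trick, as a standalone hypothetical
illustration (regular files stand in for BPF maps here; libbpf's actual
implementation differs):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
  	/* reserve a stable FD number before the real object exists */
  	int placeholder_fd = open("/dev/null", O_RDONLY | O_CLOEXEC);

  	/* "relocation": embed the FD before the real object is created */
  	int insn_imm = placeholder_fd;

  	/* "map creation": dup3() atomically retargets placeholder_fd at
  	 * the real object, so the embedded FD stays valid with no
  	 * re-patching of instructions
  	 */
  	int real_fd = open("/etc/hostname", O_RDONLY | O_CLOEXEC);
  	dup3(real_fd, placeholder_fd, O_CLOEXEC);
  	close(real_fd);

  	char buf[64];
  	ssize_t n = read(insn_imm, buf, sizeof(buf) - 1);
  	if (n > 0)
  		printf("%.*s", (int)n, buf);
  	close(placeholder_fd);
  	return 0;
  }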

Once BTF is loaded after relocation, what's left is to adjust the BTF
information so that each main program's copy of each used global subprog
points to its own adjusted FUNC -> FUNC_PROTO type chain (if it uses
__arg_ctx), in such a way as to satisfy the BPF verifier's type
expectations for the PTR_TO_CTX argument definition. See patch #8 for
details.
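
Conceptually, the fallback rewrites BTF along these lines (a hedged sketch
using public btf__*() APIs; the xdp_md context type, int_id, and the
function name are for illustration only, not patch #8's actual code; error
handling omitted):

  /* point the tagged argument at the context type the verifier expects
   * for the main program's type (say, struct xdp_md for XDP)
   */
  int ctx_id = btf__find_by_name_kind(btf, "xdp_md", BTF_KIND_STRUCT);
  int ptr_id = btf__add_ptr(btf, ctx_id);

  /* fresh FUNC_PROTO with the adjusted argument type; int_id is assumed
   * to be the BTF type ID of 'int'
   */
  int proto_id = btf__add_func_proto(btf, int_id);
  btf__add_func_param(btf, "ctx", ptr_id);

  /* fresh FUNC that this main program's func_info can point at */
  int func_id = btf__add_func(btf, "my_global_subprog", BTF_FUNC_GLOBAL, proto_id);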

Patch #9 adds a few more __arg_ctx use cases (edge cases like multiple
arguments having __arg_ctx, etc.) to test_global_func_ctx_args.c, to make
it easy to validate that this logic indeed works on old kernels. It does.
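
For instance, one edge-case shape being validated looks roughly like this
(hypothetical spelling; see the selftest for the actual cases):

  /* global subprog where more than one argument carries __arg_ctx */
  __weak int subprog_multi_ctx(void *ctx1 __arg_ctx, void *ctx2 __arg_ctx)
  {
  	return ctx1 == ctx2 ? 1 : 0;
  }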

Andrii Nakryiko (9):
  libbpf: name internal functions consistently
  libbpf: make uniform use of btf__fd() accessor inside libbpf
  libbpf: use explicit map reuse flag to skip map creation steps
  libbpf: don't rely on map->fd as an indicator of map being created
  libbpf: use stable map placeholder FDs
  libbpf: move exception callbacks assignment logic into relocation step
  libbpf: move BTF loading step after relocation step
  libbpf: implement __arg_ctx fallback logic
  selftests/bpf: add arg:ctx cases to test_global_funcs tests

 tools/lib/bpf/libbpf.c                        | 1055 ++++++++++-------
 tools/lib/bpf/libbpf_internal.h               |   24 +
 .../bpf/progs/test_global_func_ctx_args.c     |   49 +
 3 files changed, 725 insertions(+), 403 deletions(-)

-- 
2.34.1



* [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team

For a while now, all new internal libbpf functions have stopped using the
<obj>__<method>() naming, which was historically used both for public APIs
and for all the helper functions that can be thought of as "methods" of
libbpf "objects" (bpf_object, bpf_map, bpf_program, etc). This convention
turned out to be confusing, because it raised "could this be a public
API?" concerns and required double-checking whether a given function needs
special treatment or not (special error return handling, for example).

We've been converting pre-existing code to the new naming lazily as we
touched the relevant functions, but a bunch of functions still use the old
double-underscore naming.

To remove all the confusion and inconsistent naming, complete the rename
to keep double-underscore naming only for public APIs.

There are some notable exceptions, though. Libbpf has a bunch of APIs
that are internal to libbpf, but are still used as API boundaries. For
example, bpf_gen__xxx() is designed to be decoupled from libbpf.c's logic.
Similarly, we have the hashmap and strset data structures with their own
internal APIs (some of which are actually used by bpftool as well, so they
are kind-of-internal). For those internal APIs we still keep API-like
naming with double underscores.
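
To make the resulting convention concrete (names taken from this patch and
from libbpf's headers):

  /* internal helper: single underscores only */
  static struct bpf_map *bpf_object_add_map(struct bpf_object *obj);

  /* public API and API-like internal boundaries: double underscore stays */
  int bpf_object__load(struct bpf_object *obj);
  void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps);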

No functional changes.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 504 +++++++++++++++++++----------------------
 1 file changed, 238 insertions(+), 266 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index ebcfb2147fbd..8e7a50c1ce89 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -68,7 +68,7 @@
 
 #define __printf(a, b)	__attribute__((format(printf, a, b)))
 
-static struct bpf_map *bpf_object__add_map(struct bpf_object *obj);
+static struct bpf_map *bpf_object_add_map(struct bpf_object *obj);
 static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog);
 
 static const char * const attach_type_name[] = {
@@ -479,7 +479,7 @@ struct bpf_struct_ops {
 	 *	struct tcp_congestion_ops data;
 	 * }
 	 * kern_vdata-size == sizeof(struct bpf_struct_ops_tcp_congestion_ops)
-	 * bpf_map__init_kern_struct_ops() will populate the "kern_vdata"
+	 * bpf_map_init_kern_struct_ops() will populate the "kern_vdata"
 	 * from "data".
 	 */
 	void *kern_vdata;
@@ -717,7 +717,7 @@ void bpf_program__unload(struct bpf_program *prog)
 	zfree(&prog->line_info);
 }
 
-static void bpf_program__exit(struct bpf_program *prog)
+static void bpf_program_exit(struct bpf_program *prog)
 {
 	if (!prog)
 		return;
@@ -753,10 +753,9 @@ static bool insn_is_pseudo_func(struct bpf_insn *insn)
 	return is_ldimm64_insn(insn) && insn->src_reg == BPF_PSEUDO_FUNC;
 }
 
-static int
-bpf_object__init_prog(struct bpf_object *obj, struct bpf_program *prog,
-		      const char *name, size_t sec_idx, const char *sec_name,
-		      size_t sec_off, void *insn_data, size_t insn_data_sz)
+static int bpf_object_init_prog(struct bpf_object *obj, struct bpf_program *prog,
+				const char *name, size_t sec_idx, const char *sec_name,
+				size_t sec_off, void *insn_data, size_t insn_data_sz)
 {
 	if (insn_data_sz == 0 || insn_data_sz % BPF_INSN_SZ || sec_off % BPF_INSN_SZ) {
 		pr_warn("sec '%s': corrupted program '%s', offset %zu, size %zu\n",
@@ -810,13 +809,12 @@ bpf_object__init_prog(struct bpf_object *obj, struct bpf_program *prog,
 	return 0;
 errout:
 	pr_warn("sec '%s': failed to allocate memory for prog '%s'\n", sec_name, name);
-	bpf_program__exit(prog);
+	bpf_program_exit(prog);
 	return -ENOMEM;
 }
 
-static int
-bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
-			 const char *sec_name, int sec_idx)
+static int bpf_object_add_programs(struct bpf_object *obj, Elf_Data *sec_data,
+				   const char *sec_name, int sec_idx)
 {
 	Elf_Data *symbols = obj->efile.symbols;
 	struct bpf_program *prog, *progs;
@@ -877,8 +875,8 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
 
 		prog = &progs[nr_progs];
 
-		err = bpf_object__init_prog(obj, prog, name, sec_idx, sec_name,
-					    sec_off, data + sec_off, prog_sz);
+		err = bpf_object_init_prog(obj, prog, name, sec_idx, sec_name,
+					   sec_off, data + sec_off, prog_sz);
 		if (err)
 			return err;
 
@@ -993,15 +991,15 @@ find_struct_ops_kern_types(const struct btf *btf, const char *tname,
 	return 0;
 }
 
-static bool bpf_map__is_struct_ops(const struct bpf_map *map)
+static bool bpf_map_is_struct_ops(const struct bpf_map *map)
 {
 	return map->def.type == BPF_MAP_TYPE_STRUCT_OPS;
 }
 
 /* Init the map's fields that depend on kern_btf */
-static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
-					 const struct btf *btf,
-					 const struct btf *kern_btf)
+static int bpf_map_init_kern_struct_ops(struct bpf_map *map,
+					const struct btf *btf,
+					const struct btf *kern_btf)
 {
 	const struct btf_member *member, *kern_member, *kern_data_member;
 	const struct btf_type *type, *kern_type, *kern_vtype;
@@ -1090,7 +1088,7 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
 							    &kern_mtype_id);
 
 			/* mtype->type must be a func_proto which was
-			 * guaranteed in bpf_object__collect_st_ops_relos(),
+			 * guaranteed in bpf_object_collect_st_ops_relos(),
 			 * so only check kern_mtype for func_proto here.
 			 */
 			if (!btf_is_func_proto(kern_mtype)) {
@@ -1129,7 +1127,7 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map,
 	return 0;
 }
 
-static int bpf_object__init_kern_struct_ops_maps(struct bpf_object *obj)
+static int bpf_object_init_kern_struct_ops_maps(struct bpf_object *obj)
 {
 	struct bpf_map *map;
 	size_t i;
@@ -1138,11 +1136,10 @@ static int bpf_object__init_kern_struct_ops_maps(struct bpf_object *obj)
 	for (i = 0; i < obj->nr_maps; i++) {
 		map = &obj->maps[i];
 
-		if (!bpf_map__is_struct_ops(map))
+		if (!bpf_map_is_struct_ops(map))
 			continue;
 
-		err = bpf_map__init_kern_struct_ops(map, obj->btf,
-						    obj->btf_vmlinux);
+		err = bpf_map_init_kern_struct_ops(map, obj->btf, obj->btf_vmlinux);
 		if (err)
 			return err;
 	}
@@ -1198,7 +1195,7 @@ static int init_struct_ops_maps(struct bpf_object *obj, const char *sec_name,
 			return -EINVAL;
 		}
 
-		map = bpf_object__add_map(obj);
+		map = bpf_object_add_map(obj);
 		if (IS_ERR(map))
 			return PTR_ERR(map);
 
@@ -1258,10 +1255,10 @@ static int bpf_object_init_struct_ops(struct bpf_object *obj)
 	return err;
 }
 
-static struct bpf_object *bpf_object__new(const char *path,
-					  const void *obj_buf,
-					  size_t obj_buf_sz,
-					  const char *obj_name)
+static struct bpf_object *bpf_object_new(const char *path,
+					 const void *obj_buf,
+					 size_t obj_buf_sz,
+					 const char *obj_name)
 {
 	struct bpf_object *obj;
 	char *end;
@@ -1286,7 +1283,7 @@ static struct bpf_object *bpf_object__new(const char *path,
 	obj->efile.fd = -1;
 	/*
 	 * Caller of this function should also call
-	 * bpf_object__elf_finish() after data collection to return
+	 * bpf_object_elf_finish() after data collection to return
 	 * obj_buf to user. If not, we should duplicate the buffer to
 	 * avoid user freeing them before elf finish.
 	 */
@@ -1303,7 +1300,7 @@ static struct bpf_object *bpf_object__new(const char *path,
 	return obj;
 }
 
-static void bpf_object__elf_finish(struct bpf_object *obj)
+static void bpf_object_elf_finish(struct bpf_object *obj)
 {
 	if (!obj->efile.elf)
 		return;
@@ -1321,7 +1318,7 @@ static void bpf_object__elf_finish(struct bpf_object *obj)
 	obj->efile.obj_buf_sz = 0;
 }
 
-static int bpf_object__elf_init(struct bpf_object *obj)
+static int bpf_object_elf_init(struct bpf_object *obj)
 {
 	Elf64_Ehdr *ehdr;
 	int err = 0;
@@ -1400,11 +1397,11 @@ static int bpf_object__elf_init(struct bpf_object *obj)
 
 	return 0;
 errout:
-	bpf_object__elf_finish(obj);
+	bpf_object_elf_finish(obj);
 	return err;
 }
 
-static int bpf_object__check_endianness(struct bpf_object *obj)
+static int bpf_object_check_endianness(struct bpf_object *obj)
 {
 #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
 	if (obj->efile.ehdr->e_ident[EI_DATA] == ELFDATA2LSB)
@@ -1419,8 +1416,7 @@ static int bpf_object__check_endianness(struct bpf_object *obj)
 	return -LIBBPF_ERRNO__ENDIAN;
 }
 
-static int
-bpf_object__init_license(struct bpf_object *obj, void *data, size_t size)
+static int bpf_object_init_license(struct bpf_object *obj, void *data, size_t size)
 {
 	if (!data) {
 		pr_warn("invalid license section in %s\n", obj->path);
@@ -1434,8 +1430,7 @@ bpf_object__init_license(struct bpf_object *obj, void *data, size_t size)
 	return 0;
 }
 
-static int
-bpf_object__init_kversion(struct bpf_object *obj, void *data, size_t size)
+static int bpf_object_init_kversion(struct bpf_object *obj, void *data, size_t size)
 {
 	__u32 kver;
 
@@ -1449,7 +1444,7 @@ bpf_object__init_kversion(struct bpf_object *obj, void *data, size_t size)
 	return 0;
 }
 
-static bool bpf_map_type__is_map_in_map(enum bpf_map_type type)
+static bool bpf_map_type_is_map_in_map(enum bpf_map_type type)
 {
 	if (type == BPF_MAP_TYPE_ARRAY_OF_MAPS ||
 	    type == BPF_MAP_TYPE_HASH_OF_MAPS)
@@ -1503,7 +1498,7 @@ static Elf64_Sym *find_elf_var_sym(const struct bpf_object *obj, const char *nam
 	return ERR_PTR(-ENOENT);
 }
 
-static struct bpf_map *bpf_object__add_map(struct bpf_object *obj)
+static struct bpf_map *bpf_object_add_map(struct bpf_object *obj)
 {
 	struct bpf_map *map;
 	int err;
@@ -1645,15 +1640,15 @@ static bool map_is_mmapable(struct bpf_object *obj, struct bpf_map *map)
 }
 
 static int
-bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type,
-			      const char *real_name, int sec_idx, void *data, size_t data_sz)
+bpf_object_init_internal_map(struct bpf_object *obj, enum libbpf_map_type type,
+			     const char *real_name, int sec_idx, void *data, size_t data_sz)
 {
 	struct bpf_map_def *def;
 	struct bpf_map *map;
 	size_t mmap_sz;
 	int err;
 
-	map = bpf_object__add_map(obj);
+	map = bpf_object_add_map(obj);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 
@@ -1705,7 +1700,7 @@ bpf_object__init_internal_map(struct bpf_object *obj, enum libbpf_map_type type,
 	return 0;
 }
 
-static int bpf_object__init_global_data_maps(struct bpf_object *obj)
+static int bpf_object_init_global_data_maps(struct bpf_object *obj)
 {
 	struct elf_sec_desc *sec_desc;
 	const char *sec_name;
@@ -1724,25 +1719,25 @@ static int bpf_object__init_global_data_maps(struct bpf_object *obj)
 		switch (sec_desc->sec_type) {
 		case SEC_DATA:
 			sec_name = elf_sec_name(obj, elf_sec_by_idx(obj, sec_idx));
-			err = bpf_object__init_internal_map(obj, LIBBPF_MAP_DATA,
-							    sec_name, sec_idx,
-							    sec_desc->data->d_buf,
-							    sec_desc->data->d_size);
+			err = bpf_object_init_internal_map(obj, LIBBPF_MAP_DATA,
+							   sec_name, sec_idx,
+							   sec_desc->data->d_buf,
+							   sec_desc->data->d_size);
 			break;
 		case SEC_RODATA:
 			obj->has_rodata = true;
 			sec_name = elf_sec_name(obj, elf_sec_by_idx(obj, sec_idx));
-			err = bpf_object__init_internal_map(obj, LIBBPF_MAP_RODATA,
-							    sec_name, sec_idx,
-							    sec_desc->data->d_buf,
-							    sec_desc->data->d_size);
+			err = bpf_object_init_internal_map(obj, LIBBPF_MAP_RODATA,
+							   sec_name, sec_idx,
+							   sec_desc->data->d_buf,
+							   sec_desc->data->d_size);
 			break;
 		case SEC_BSS:
 			sec_name = elf_sec_name(obj, elf_sec_by_idx(obj, sec_idx));
-			err = bpf_object__init_internal_map(obj, LIBBPF_MAP_BSS,
-							    sec_name, sec_idx,
-							    NULL,
-							    sec_desc->data->d_size);
+			err = bpf_object_init_internal_map(obj, LIBBPF_MAP_BSS,
+							   sec_name, sec_idx,
+							   NULL,
+							   sec_desc->data->d_size);
 			break;
 		default:
 			/* skip */
@@ -1917,8 +1912,7 @@ static int set_kcfg_value_num(struct extern_desc *ext, void *ext_val,
 	return 0;
 }
 
-static int bpf_object__process_kconfig_line(struct bpf_object *obj,
-					    char *buf, void *data)
+static int bpf_object_process_kconfig_line(struct bpf_object *obj, char *buf, void *data)
 {
 	struct extern_desc *ext;
 	char *sep, *value;
@@ -1981,7 +1975,7 @@ static int bpf_object__process_kconfig_line(struct bpf_object *obj,
 	return 0;
 }
 
-static int bpf_object__read_kconfig_file(struct bpf_object *obj, void *data)
+static int bpf_object_read_kconfig_file(struct bpf_object *obj, void *data)
 {
 	char buf[PATH_MAX];
 	struct utsname uts;
@@ -2006,7 +2000,7 @@ static int bpf_object__read_kconfig_file(struct bpf_object *obj, void *data)
 	}
 
 	while (gzgets(file, buf, sizeof(buf))) {
-		err = bpf_object__process_kconfig_line(obj, buf, data);
+		err = bpf_object_process_kconfig_line(obj, buf, data);
 		if (err) {
 			pr_warn("error parsing system Kconfig line '%s': %d\n",
 				buf, err);
@@ -2019,8 +2013,7 @@ static int bpf_object__read_kconfig_file(struct bpf_object *obj, void *data)
 	return err;
 }
 
-static int bpf_object__read_kconfig_mem(struct bpf_object *obj,
-					const char *config, void *data)
+static int bpf_object_read_kconfig_mem(struct bpf_object *obj, const char *config, void *data)
 {
 	char buf[PATH_MAX];
 	int err = 0;
@@ -2034,7 +2027,7 @@ static int bpf_object__read_kconfig_mem(struct bpf_object *obj,
 	}
 
 	while (fgets(buf, sizeof(buf), file)) {
-		err = bpf_object__process_kconfig_line(obj, buf, data);
+		err = bpf_object_process_kconfig_line(obj, buf, data);
 		if (err) {
 			pr_warn("error parsing in-memory Kconfig line '%s': %d\n",
 				buf, err);
@@ -2046,7 +2039,7 @@ static int bpf_object__read_kconfig_mem(struct bpf_object *obj,
 	return err;
 }
 
-static int bpf_object__init_kconfig_map(struct bpf_object *obj)
+static int bpf_object_init_kconfig_map(struct bpf_object *obj)
 {
 	struct extern_desc *last_ext = NULL, *ext;
 	size_t map_sz;
@@ -2062,9 +2055,9 @@ static int bpf_object__init_kconfig_map(struct bpf_object *obj)
 		return 0;
 
 	map_sz = last_ext->kcfg.data_off + last_ext->kcfg.sz;
-	err = bpf_object__init_internal_map(obj, LIBBPF_MAP_KCONFIG,
-					    ".kconfig", obj->efile.symbols_shndx,
-					    NULL, map_sz);
+	err = bpf_object_init_internal_map(obj, LIBBPF_MAP_KCONFIG,
+					   ".kconfig", obj->efile.symbols_shndx,
+					   NULL, map_sz);
 	if (err)
 		return err;
 
@@ -2324,7 +2317,7 @@ int parse_btf_map_def(const char *map_name, struct btf *btf,
 			map_def->parts |= MAP_DEF_VALUE_SIZE | MAP_DEF_VALUE_TYPE;
 		}
 		else if (strcmp(name, "values") == 0) {
-			bool is_map_in_map = bpf_map_type__is_map_in_map(map_def->map_type);
+			bool is_map_in_map = bpf_map_type_is_map_in_map(map_def->map_type);
 			bool is_prog_array = map_def->map_type == BPF_MAP_TYPE_PROG_ARRAY;
 			const char *desc = is_map_in_map ? "map-in-map inner" : "prog-array value";
 			char inner_map_name[128];
@@ -2524,11 +2517,11 @@ static const char *btf_var_linkage_str(__u32 linkage)
 	}
 }
 
-static int bpf_object__init_user_btf_map(struct bpf_object *obj,
-					 const struct btf_type *sec,
-					 int var_idx, int sec_idx,
-					 const Elf_Data *data, bool strict,
-					 const char *pin_root_path)
+static int bpf_object_init_user_btf_map(struct bpf_object *obj,
+					const struct btf_type *sec,
+					int var_idx, int sec_idx,
+					const Elf_Data *data, bool strict,
+					const char *pin_root_path)
 {
 	struct btf_map_def map_def = {}, inner_def = {};
 	const struct btf_type *var, *def;
@@ -2573,7 +2566,7 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
 		return -EINVAL;
 	}
 
-	map = bpf_object__add_map(obj);
+	map = bpf_object_add_map(obj);
 	if (IS_ERR(map))
 		return PTR_ERR(map);
 	map->name = strdup(map_name);
@@ -2624,8 +2617,8 @@ static int bpf_object__init_user_btf_map(struct bpf_object *obj,
 	return 0;
 }
 
-static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
-					  const char *pin_root_path)
+static int bpf_object_init_user_btf_maps(struct bpf_object *obj, bool strict,
+					 const char *pin_root_path)
 {
 	const struct btf_type *sec = NULL;
 	int nr_types, i, vlen, err;
@@ -2665,10 +2658,10 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
 
 	vlen = btf_vlen(sec);
 	for (i = 0; i < vlen; i++) {
-		err = bpf_object__init_user_btf_map(obj, sec, i,
-						    obj->efile.btf_maps_shndx,
-						    data, strict,
-						    pin_root_path);
+		err = bpf_object_init_user_btf_map(obj, sec, i,
+						   obj->efile.btf_maps_shndx,
+						   data, strict,
+						   pin_root_path);
 		if (err)
 			return err;
 	}
@@ -2676,8 +2669,7 @@ static int bpf_object__init_user_btf_maps(struct bpf_object *obj, bool strict,
 	return 0;
 }
 
-static int bpf_object__init_maps(struct bpf_object *obj,
-				 const struct bpf_object_open_opts *opts)
+static int bpf_object_init_maps(struct bpf_object *obj, const struct bpf_object_open_opts *opts)
 {
 	const char *pin_root_path;
 	bool strict;
@@ -2686,9 +2678,9 @@ static int bpf_object__init_maps(struct bpf_object *obj,
 	strict = !OPTS_GET(opts, relaxed_maps, false);
 	pin_root_path = OPTS_GET(opts, pin_root_path, NULL);
 
-	err = bpf_object__init_user_btf_maps(obj, strict, pin_root_path);
-	err = err ?: bpf_object__init_global_data_maps(obj);
-	err = err ?: bpf_object__init_kconfig_map(obj);
+	err = bpf_object_init_user_btf_maps(obj, strict, pin_root_path);
+	err = err ?: bpf_object_init_global_data_maps(obj);
+	err = err ?: bpf_object_init_kconfig_map(obj);
 	err = err ?: bpf_object_init_struct_ops(obj);
 
 	return err;
@@ -2719,7 +2711,7 @@ static bool btf_needs_sanitization(struct bpf_object *obj)
 	       !has_decl_tag || !has_type_tag || !has_enum64;
 }
 
-static int bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
+static int bpf_object_sanitize_btf(struct bpf_object *obj, struct btf *btf)
 {
 	bool has_func_global = kernel_supports(obj, FEAT_BTF_GLOBAL_FUNC);
 	bool has_datasec = kernel_supports(obj, FEAT_BTF_DATASEC);
@@ -2832,9 +2824,7 @@ static bool kernel_needs_btf(const struct bpf_object *obj)
 	return obj->efile.st_ops_shndx >= 0 || obj->efile.st_ops_link_shndx >= 0;
 }
 
-static int bpf_object__init_btf(struct bpf_object *obj,
-				Elf_Data *btf_data,
-				Elf_Data *btf_ext_data)
+static int bpf_object_init_btf(struct bpf_object *obj, Elf_Data *btf_data, Elf_Data *btf_ext_data)
 {
 	int err = -ENOENT;
 
@@ -3056,7 +3046,7 @@ static bool prog_needs_vmlinux_btf(struct bpf_program *prog)
 
 static bool map_needs_vmlinux_btf(struct bpf_map *map)
 {
-	return bpf_map__is_struct_ops(map);
+	return bpf_map_is_struct_ops(map);
 }
 
 static bool obj_needs_vmlinux_btf(const struct bpf_object *obj)
@@ -3095,7 +3085,7 @@ static bool obj_needs_vmlinux_btf(const struct bpf_object *obj)
 	return false;
 }
 
-static int bpf_object__load_vmlinux_btf(struct bpf_object *obj, bool force)
+static int bpf_object_load_vmlinux_btf(struct bpf_object *obj, bool force)
 {
 	int err;
 
@@ -3116,7 +3106,7 @@ static int bpf_object__load_vmlinux_btf(struct bpf_object *obj, bool force)
 	return 0;
 }
 
-static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
+static int bpf_object_sanitize_and_load_btf(struct bpf_object *obj)
 {
 	struct btf *kern_btf = obj->btf;
 	bool btf_mandatory, sanitize;
@@ -3260,7 +3250,7 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
 
 		/* enforce 8-byte pointers for BPF-targeted BTFs */
 		btf__set_pointer_size(obj->btf, 8);
-		err = bpf_object__sanitize_btf(obj, kern_btf);
+		err = bpf_object_sanitize_btf(obj, kern_btf);
 		if (err)
 			return err;
 	}
@@ -3486,7 +3476,7 @@ static int cmp_progs(const void *_a, const void *_b)
 	return a->sec_insn_off < b->sec_insn_off ? -1 : 1;
 }
 
-static int bpf_object__elf_collect(struct bpf_object *obj)
+static int bpf_object_elf_collect(struct bpf_object *obj)
 {
 	struct elf_sec_desc *sec_desc;
 	Elf *elf = obj->efile.elf;
@@ -3571,11 +3561,11 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
 			 (int)sh->sh_type);
 
 		if (strcmp(name, "license") == 0) {
-			err = bpf_object__init_license(obj, data->d_buf, data->d_size);
+			err = bpf_object_init_license(obj, data->d_buf, data->d_size);
 			if (err)
 				return err;
 		} else if (strcmp(name, "version") == 0) {
-			err = bpf_object__init_kversion(obj, data->d_buf, data->d_size);
+			err = bpf_object_init_kversion(obj, data->d_buf, data->d_size);
 			if (err)
 				return err;
 		} else if (strcmp(name, "maps") == 0) {
@@ -3597,7 +3587,7 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
 			if (sh->sh_flags & SHF_EXECINSTR) {
 				if (strcmp(name, ".text") == 0)
 					obj->efile.text_shndx = idx;
-				err = bpf_object__add_programs(obj, data, name, idx);
+				err = bpf_object_add_programs(obj, data, name, idx);
 				if (err)
 					return err;
 			} else if (strcmp(name, DATA_SEC) == 0 ||
@@ -3663,7 +3653,7 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
 	if (obj->nr_programs)
 		qsort(obj->programs, obj->nr_programs, sizeof(*obj->programs), cmp_progs);
 
-	return bpf_object__init_btf(obj, btf_data, btf_ext_data);
+	return bpf_object_init_btf(obj, btf_data, btf_ext_data);
 }
 
 static bool sym_is_extern(const Elf64_Sym *sym)
@@ -3872,7 +3862,7 @@ static int add_dummy_ksym_var(struct btf *btf)
 	return dummy_var_btf_id;
 }
 
-static int bpf_object__collect_externs(struct bpf_object *obj)
+static int bpf_object_collect_externs(struct bpf_object *obj)
 {
 	struct btf_type *sec, *kcfg_sec = NULL, *ksym_sec = NULL;
 	const struct btf_type *t;
@@ -4111,8 +4101,7 @@ bpf_object__find_program_by_name(const struct bpf_object *obj,
 	return errno = ENOENT, NULL;
 }
 
-static bool bpf_object__shndx_is_data(const struct bpf_object *obj,
-				      int shndx)
+static bool bpf_object_shndx_is_data(const struct bpf_object *obj, int shndx)
 {
 	switch (obj->efile.secs[shndx].sec_type) {
 	case SEC_BSS:
@@ -4124,14 +4113,13 @@ static bool bpf_object__shndx_is_data(const struct bpf_object *obj,
 	}
 }
 
-static bool bpf_object__shndx_is_maps(const struct bpf_object *obj,
+static bool bpf_object_shndx_is_maps(const struct bpf_object *obj,
 				      int shndx)
 {
 	return shndx == obj->efile.btf_maps_shndx;
 }
 
-static enum libbpf_map_type
-bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
+static enum libbpf_map_type map_section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
 {
 	if (shndx == obj->efile.symbols_shndx)
 		return LIBBPF_MAP_KCONFIG;
@@ -4148,10 +4136,10 @@ bpf_object__section_to_libbpf_map_type(const struct bpf_object *obj, int shndx)
 	}
 }
 
-static int bpf_program__record_reloc(struct bpf_program *prog,
-				     struct reloc_desc *reloc_desc,
-				     __u32 insn_idx, const char *sym_name,
-				     const Elf64_Sym *sym, const Elf64_Rel *rel)
+static int bpf_program_record_reloc(struct bpf_program *prog,
+				    struct reloc_desc *reloc_desc,
+				    __u32 insn_idx, const char *sym_name,
+				    const Elf64_Sym *sym, const Elf64_Rel *rel)
 {
 	struct bpf_insn *insn = &prog->insns[insn_idx];
 	size_t map_idx, nr_maps = prog->obj->nr_maps;
@@ -4240,12 +4228,12 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
 		return 0;
 	}
 
-	type = bpf_object__section_to_libbpf_map_type(obj, shdr_idx);
+	type = map_section_to_libbpf_map_type(obj, shdr_idx);
 	sym_sec_name = elf_sec_name(obj, elf_sec_by_idx(obj, shdr_idx));
 
 	/* generic map reference relocation */
 	if (type == LIBBPF_MAP_UNSPEC) {
-		if (!bpf_object__shndx_is_maps(obj, shdr_idx)) {
+		if (!bpf_object_shndx_is_maps(obj, shdr_idx)) {
 			pr_warn("prog '%s': bad map relo against '%s' in section '%s'\n",
 				prog->name, sym_name, sym_sec_name);
 			return -LIBBPF_ERRNO__RELOC;
@@ -4274,7 +4262,7 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
 	}
 
 	/* global data map relocation */
-	if (!bpf_object__shndx_is_data(obj, shdr_idx)) {
+	if (!bpf_object_shndx_is_data(obj, shdr_idx)) {
 		pr_warn("prog '%s': bad data relo against section '%s'\n",
 			prog->name, sym_sec_name);
 		return -LIBBPF_ERRNO__RELOC;
@@ -4335,8 +4323,7 @@ static struct bpf_program *find_prog_by_sec_insn(const struct bpf_object *obj,
 	return NULL;
 }
 
-static int
-bpf_object__collect_prog_relos(struct bpf_object *obj, Elf64_Shdr *shdr, Elf_Data *data)
+static int bpf_object_collect_prog_relos(struct bpf_object *obj, Elf64_Shdr *shdr, Elf_Data *data)
 {
 	const char *relo_sec_name, *sec_name;
 	size_t sec_idx = shdr->sh_info, sym_idx;
@@ -4425,8 +4412,8 @@ bpf_object__collect_prog_relos(struct bpf_object *obj, Elf64_Shdr *shdr, Elf_Dat
 
 		/* adjust insn_idx to local BPF program frame of reference */
 		insn_idx -= prog->sec_insn_off;
-		err = bpf_program__record_reloc(prog, &relos[prog->nr_reloc],
-						insn_idx, sym_name, sym, rel);
+		err = bpf_program_record_reloc(prog, &relos[prog->nr_reloc],
+					       insn_idx, sym_name, sym, rel);
 		if (err)
 			return err;
 
@@ -4446,7 +4433,7 @@ static int map_fill_btf_type_info(struct bpf_object *obj, struct bpf_map *map)
 	 * For struct_ops map, it does not need btf_key_type_id and
 	 * btf_value_type_id.
 	 */
-	if (map->sec_idx == obj->efile.btf_maps_shndx || bpf_map__is_struct_ops(map))
+	if (map->sec_idx == obj->efile.btf_maps_shndx || bpf_map_is_struct_ops(map))
 		return 0;
 
 	/*
@@ -4584,7 +4571,7 @@ __u32 bpf_map__max_entries(const struct bpf_map *map)
 
 struct bpf_map *bpf_map__inner_map(struct bpf_map *map)
 {
-	if (!bpf_map_type__is_map_in_map(map->def.type))
+	if (!bpf_map_type_is_map_in_map(map->def.type))
 		return errno = EINVAL, NULL;
 
 	return map->inner_map;
@@ -4604,8 +4591,7 @@ int bpf_map__set_max_entries(struct bpf_map *map, __u32 max_entries)
 	return 0;
 }
 
-static int
-bpf_object__probe_loading(struct bpf_object *obj)
+static int bpf_object_probe_loading(struct bpf_object *obj)
 {
 	char *cp, errmsg[STRERR_BUFSIZE];
 	struct bpf_insn insns[] = {
@@ -5122,8 +5108,7 @@ static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
 		map_info.map_extra == map->map_extra);
 }
 
-static int
-bpf_object__reuse_map(struct bpf_map *map)
+static int bpf_object_reuse_map(struct bpf_map *map)
 {
 	char *cp, errmsg[STRERR_BUFSIZE];
 	int err, pin_fd;
@@ -5161,8 +5146,7 @@ bpf_object__reuse_map(struct bpf_map *map)
 	return 0;
 }
 
-static int
-bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+static int bpf_object_populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
 {
 	enum libbpf_map_type map_type = map->libbpf_type;
 	char *cp, errmsg[STRERR_BUFSIZE];
@@ -5198,9 +5182,9 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
 	return 0;
 }
 
-static void bpf_map__destroy(struct bpf_map *map);
+static void bpf_map_destroy(struct bpf_map *map);
 
-static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
+static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
 {
 	LIBBPF_OPTS(bpf_map_create_opts, create_attr);
 	struct bpf_map_def *def = &map->def;
@@ -5214,7 +5198,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
 	create_attr.numa_node = map->numa_node;
 	create_attr.map_extra = map->map_extra;
 
-	if (bpf_map__is_struct_ops(map))
+	if (bpf_map_is_struct_ops(map))
 		create_attr.btf_vmlinux_value_type_id = map->btf_vmlinux_value_type_id;
 
 	if (obj->btf && btf__fd(obj->btf) >= 0) {
@@ -5223,9 +5207,9 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
 		create_attr.btf_value_type_id = map->btf_value_type_id;
 	}
 
-	if (bpf_map_type__is_map_in_map(def->type)) {
+	if (bpf_map_type_is_map_in_map(def->type)) {
 		if (map->inner_map) {
-			err = bpf_object__create_map(obj, map->inner_map, true);
+			err = bpf_object_create_map(obj, map->inner_map, true);
 			if (err) {
 				pr_warn("map '%s': failed to create inner map: %d\n",
 					map->name, err);
@@ -5293,10 +5277,10 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
 
 	err = map->fd < 0 ? -errno : 0;
 
-	if (bpf_map_type__is_map_in_map(def->type) && map->inner_map) {
+	if (bpf_map_type_is_map_in_map(def->type) && map->inner_map) {
 		if (obj->gen_loader)
 			map->inner_map->fd = -1;
-		bpf_map__destroy(map->inner_map);
+		bpf_map_destroy(map->inner_map);
 		zfree(&map->inner_map);
 	}
 
@@ -5410,8 +5394,7 @@ static int map_set_def_max_entries(struct bpf_map *map)
 	return 0;
 }
 
-static int
-bpf_object__create_maps(struct bpf_object *obj)
+static int bpf_object_create_maps(struct bpf_object *obj)
 {
 	struct bpf_map *map;
 	char *cp, errmsg[STRERR_BUFSIZE];
@@ -5451,7 +5434,7 @@ bpf_object__create_maps(struct bpf_object *obj)
 		retried = false;
 retry:
 		if (map->pin_path) {
-			err = bpf_object__reuse_map(map);
+			err = bpf_object_reuse_map(map);
 			if (err) {
 				pr_warn("map '%s': error reusing pinned map\n",
 					map->name);
@@ -5469,7 +5452,7 @@ bpf_object__create_maps(struct bpf_object *obj)
 			pr_debug("map '%s': skipping creation (preset fd=%d)\n",
 				 map->name, map->fd);
 		} else {
-			err = bpf_object__create_map(obj, map, false);
+			err = bpf_object_create_map(obj, map, false);
 			if (err)
 				goto err_out;
 
@@ -5477,7 +5460,7 @@ bpf_object__create_maps(struct bpf_object *obj)
 				 map->name, map->fd);
 
 			if (bpf_map__is_internal(map)) {
-				err = bpf_object__populate_internal_map(obj, map);
+				err = bpf_object_populate_internal_map(obj, map);
 				if (err < 0) {
 					zclose(map->fd);
 					goto err_out;
@@ -5879,8 +5862,7 @@ static int bpf_core_resolve_relo(struct bpf_program *prog,
 				       targ_res);
 }
 
-static int
-bpf_object__relocate_core(struct bpf_object *obj, const char *targ_btf_path)
+static int bpf_object_relocate_core(struct bpf_object *obj, const char *targ_btf_path)
 {
 	const struct btf_ext_info_sec *sec;
 	struct bpf_core_relo_res targ_res;
@@ -6057,8 +6039,7 @@ static void poison_kfunc_call(struct bpf_program *prog, int relo_idx,
  *  - global variable references;
  *  - extern references.
  */
-static int
-bpf_object__relocate_data(struct bpf_object *obj, struct bpf_program *prog)
+static int bpf_object_relocate_data(struct bpf_object *obj, struct bpf_program *prog)
 {
 	int i;
 
@@ -6340,9 +6321,8 @@ static int append_subprog_relos(struct bpf_program *main_prog, struct bpf_progra
 	return 0;
 }
 
-static int
-bpf_object__append_subprog_code(struct bpf_object *obj, struct bpf_program *main_prog,
-				struct bpf_program *subprog)
+static int bpf_object_append_subprog_code(struct bpf_object *obj, struct bpf_program *main_prog,
+					  struct bpf_program *subprog)
 {
        struct bpf_insn *insns;
        size_t new_cnt;
@@ -6372,9 +6352,8 @@ bpf_object__append_subprog_code(struct bpf_object *obj, struct bpf_program *main
        return 0;
 }
 
-static int
-bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
-		       struct bpf_program *prog)
+static int bpf_object_reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
+				 struct bpf_program *prog)
 {
 	size_t sub_insn_idx, insn_idx;
 	struct bpf_program *subprog;
@@ -6394,7 +6373,7 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
 		relo = find_prog_insn_relo(prog, insn_idx);
 		if (relo && relo->type == RELO_EXTERN_CALL)
 			/* kfunc relocations will be handled later
-			 * in bpf_object__relocate_data()
+			 * in bpf_object_relocate_data()
 			 */
 			continue;
 		if (relo && relo->type != RELO_CALL && relo->type != RELO_SUBPROG_ADDR) {
@@ -6454,10 +6433,10 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
 		 *   and relocate.
 		 */
 		if (subprog->sub_insn_off == 0) {
-			err = bpf_object__append_subprog_code(obj, main_prog, subprog);
+			err = bpf_object_append_subprog_code(obj, main_prog, subprog);
 			if (err)
 				return err;
-			err = bpf_object__reloc_code(obj, main_prog, subprog);
+			err = bpf_object_reloc_code(obj, main_prog, subprog);
 			if (err)
 				return err;
 		}
@@ -6562,8 +6541,7 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
  *
  * At this point we unwind recursion, relocate calls in subC, then in mainB.
  */
-static int
-bpf_object__relocate_calls(struct bpf_object *obj, struct bpf_program *prog)
+static int bpf_object_relocate_calls(struct bpf_object *obj, struct bpf_program *prog)
 {
 	struct bpf_program *subprog;
 	int i, err;
@@ -6579,15 +6557,14 @@ bpf_object__relocate_calls(struct bpf_object *obj, struct bpf_program *prog)
 		subprog->sub_insn_off = 0;
 	}
 
-	err = bpf_object__reloc_code(obj, prog, prog);
+	err = bpf_object_reloc_code(obj, prog, prog);
 	if (err)
 		return err;
 
 	return 0;
 }
 
-static void
-bpf_object__free_relocs(struct bpf_object *obj)
+static void bpf_object_free_relocs(struct bpf_object *obj)
 {
 	struct bpf_program *prog;
 	int i;
@@ -6615,7 +6592,7 @@ static int cmp_relocs(const void *_a, const void *_b)
 	return 0;
 }
 
-static void bpf_object__sort_relos(struct bpf_object *obj)
+static void bpf_object_sort_relos(struct bpf_object *obj)
 {
 	int i;
 
@@ -6630,25 +6607,25 @@ static void bpf_object__sort_relos(struct bpf_object *obj)
 }
 
 static int
-bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
+bpf_object_relocate(struct bpf_object *obj, const char *targ_btf_path)
 {
 	struct bpf_program *prog;
 	size_t i, j;
 	int err;
 
 	if (obj->btf_ext) {
-		err = bpf_object__relocate_core(obj, targ_btf_path);
+		err = bpf_object_relocate_core(obj, targ_btf_path);
 		if (err) {
 			pr_warn("failed to perform CO-RE relocations: %d\n",
 				err);
 			return err;
 		}
-		bpf_object__sort_relos(obj);
+		bpf_object_sort_relos(obj);
 	}
 
 	/* Before relocating calls pre-process relocations and mark
 	 * few ld_imm64 instructions that points to subprogs.
-	 * Otherwise bpf_object__reloc_code() later would have to consider
+	 * Otherwise bpf_object_reloc_code() later would have to consider
 	 * all ld_imm64 insns as relocation candidates. That would
 	 * reduce relocation speed, since amount of find_prog_insn_relo()
 	 * would increase and most of them will fail to find a relo.
@@ -6682,7 +6659,7 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
 		if (!prog->autoload)
 			continue;
 
-		err = bpf_object__relocate_calls(obj, prog);
+		err = bpf_object_relocate_calls(obj, prog);
 		if (err) {
 			pr_warn("prog '%s': failed to relocate calls: %d\n",
 				prog->name, err);
@@ -6699,10 +6676,10 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
 			 * have to append exception callback now.
 			 */
 			if (subprog->sub_insn_off == 0) {
-				err = bpf_object__append_subprog_code(obj, prog, subprog);
+				err = bpf_object_append_subprog_code(obj, prog, subprog);
 				if (err)
 					return err;
-				err = bpf_object__reloc_code(obj, prog, subprog);
+				err = bpf_object_reloc_code(obj, prog, subprog);
 				if (err)
 					return err;
 			}
@@ -6715,7 +6692,7 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
 			continue;
 		if (!prog->autoload)
 			continue;
-		err = bpf_object__relocate_data(obj, prog);
+		err = bpf_object_relocate_data(obj, prog);
 		if (err) {
 			pr_warn("prog '%s': failed to relocate data references: %d\n",
 				prog->name, err);
@@ -6726,11 +6703,11 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
 	return 0;
 }
 
-static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
-					    Elf64_Shdr *shdr, Elf_Data *data);
+static int bpf_object_collect_st_ops_relos(struct bpf_object *obj,
+					   Elf64_Shdr *shdr, Elf_Data *data);
 
-static int bpf_object__collect_map_relos(struct bpf_object *obj,
-					 Elf64_Shdr *shdr, Elf_Data *data)
+static int bpf_object_collect_map_relos(struct bpf_object *obj,
+					Elf64_Shdr *shdr, Elf_Data *data)
 {
 	const int bpf_ptr_sz = 8, host_ptr_sz = sizeof(void *);
 	int i, j, nrels, new_sz;
@@ -6788,7 +6765,7 @@ static int bpf_object__collect_map_relos(struct bpf_object *obj,
 			return -EINVAL;
 		}
 
-		is_map_in_map = bpf_map_type__is_map_in_map(map->def.type);
+		is_map_in_map = bpf_map_type_is_map_in_map(map->def.type);
 		is_prog_array = map->def.type == BPF_MAP_TYPE_PROG_ARRAY;
 		type = is_map_in_map ? "map" : "prog";
 		if (is_map_in_map) {
@@ -6866,7 +6843,7 @@ static int bpf_object__collect_map_relos(struct bpf_object *obj,
 	return 0;
 }
 
-static int bpf_object__collect_relos(struct bpf_object *obj)
+static int bpf_object_collect_relos(struct bpf_object *obj)
 {
 	int i, err;
 
@@ -6889,16 +6866,16 @@ static int bpf_object__collect_relos(struct bpf_object *obj)
 		}
 
 		if (idx == obj->efile.st_ops_shndx || idx == obj->efile.st_ops_link_shndx)
-			err = bpf_object__collect_st_ops_relos(obj, shdr, data);
+			err = bpf_object_collect_st_ops_relos(obj, shdr, data);
 		else if (idx == obj->efile.btf_maps_shndx)
-			err = bpf_object__collect_map_relos(obj, shdr, data);
+			err = bpf_object_collect_map_relos(obj, shdr, data);
 		else
-			err = bpf_object__collect_prog_relos(obj, shdr, data);
+			err = bpf_object_collect_prog_relos(obj, shdr, data);
 		if (err)
 			return err;
 	}
 
-	bpf_object__sort_relos(obj);
+	bpf_object_sort_relos(obj);
 	return 0;
 }
 
@@ -6915,7 +6892,7 @@ static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id
 	return false;
 }
 
-static int bpf_object__sanitize_prog(struct bpf_object *obj, struct bpf_program *prog)
+static int bpf_object_sanitize_prog(struct bpf_object *obj, struct bpf_program *prog)
 {
 	struct bpf_insn *insn = prog->insns;
 	enum bpf_func_id func_id;
@@ -7433,7 +7410,7 @@ static int bpf_program_record_relos(struct bpf_program *prog)
 }
 
 static int
-bpf_object__load_progs(struct bpf_object *obj, int log_level)
+bpf_object_load_progs(struct bpf_object *obj, int log_level)
 {
 	struct bpf_program *prog;
 	size_t i;
@@ -7441,7 +7418,7 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
 
 	for (i = 0; i < obj->nr_programs; i++) {
 		prog = &obj->programs[i];
-		err = bpf_object__sanitize_prog(obj, prog);
+		err = bpf_object_sanitize_prog(obj, prog);
 		if (err)
 			return err;
 	}
@@ -7467,7 +7444,7 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
 		}
 	}
 
-	bpf_object__free_relocs(obj);
+	bpf_object_free_relocs(obj);
 	return 0;
 }
 
@@ -7546,7 +7523,7 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
 	if (log_size && !log_buf)
 		return ERR_PTR(-EINVAL);
 
-	obj = bpf_object__new(path, obj_buf, obj_buf_sz, obj_name);
+	obj = bpf_object_new(path, obj_buf, obj_buf_sz, obj_name);
 	if (IS_ERR(obj))
 		return obj;
 
@@ -7576,18 +7553,18 @@ static struct bpf_object *bpf_object_open(const char *path, const void *obj_buf,
 		}
 	}
 
-	err = bpf_object__elf_init(obj);
-	err = err ? : bpf_object__check_endianness(obj);
-	err = err ? : bpf_object__elf_collect(obj);
-	err = err ? : bpf_object__collect_externs(obj);
+	err = bpf_object_elf_init(obj);
+	err = err ? : bpf_object_check_endianness(obj);
+	err = err ? : bpf_object_elf_collect(obj);
+	err = err ? : bpf_object_collect_externs(obj);
 	err = err ? : bpf_object_fixup_btf(obj);
-	err = err ? : bpf_object__init_maps(obj, opts);
+	err = err ? : bpf_object_init_maps(obj, opts);
 	err = err ? : bpf_object_init_progs(obj, opts);
-	err = err ? : bpf_object__collect_relos(obj);
+	err = err ? : bpf_object_collect_relos(obj);
 	if (err)
 		goto out;
 
-	bpf_object__elf_finish(obj);
+	bpf_object_elf_finish(obj);
 
 	return obj;
 out:
@@ -7640,7 +7617,7 @@ static int bpf_object_unload(struct bpf_object *obj)
 	return 0;
 }
 
-static int bpf_object__sanitize_maps(struct bpf_object *obj)
+static int bpf_object_sanitize_maps(struct bpf_object *obj)
 {
 	struct bpf_map *m;
 
@@ -7716,7 +7693,7 @@ static int kallsyms_cb(unsigned long long sym_addr, char sym_type,
 	return 0;
 }
 
-static int bpf_object__read_kallsyms_file(struct bpf_object *obj)
+static int bpf_object_read_kallsyms_file(struct bpf_object *obj)
 {
 	return libbpf_kallsyms_parse(kallsyms_cb, obj);
 }
@@ -7755,8 +7732,7 @@ static int find_ksym_btf_id(struct bpf_object *obj, const char *ksym_name,
 	return id;
 }
 
-static int bpf_object__resolve_ksym_var_btf_id(struct bpf_object *obj,
-					       struct extern_desc *ext)
+static int bpf_object_resolve_ksym_var_btf_id(struct bpf_object *obj, struct extern_desc *ext)
 {
 	const struct btf_type *targ_var, *targ_type;
 	__u32 targ_type_id, local_type_id;
@@ -7808,8 +7784,7 @@ static int bpf_object__resolve_ksym_var_btf_id(struct bpf_object *obj,
 	return 0;
 }
 
-static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
-						struct extern_desc *ext)
+static int bpf_object_resolve_ksym_func_btf_id(struct bpf_object *obj, struct extern_desc *ext)
 {
 	int local_func_proto_id, kfunc_proto_id, kfunc_id;
 	struct module_btf *mod_btf = NULL;
@@ -7868,7 +7843,7 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
 	ext->is_set = true;
 	ext->ksym.kernel_btf_id = kfunc_id;
 	ext->ksym.btf_fd_idx = mod_btf ? mod_btf->fd_array_idx : 0;
-	/* Also set kernel_btf_obj_fd to make sure that bpf_object__relocate_data()
+	/* Also set kernel_btf_obj_fd to make sure that bpf_object_relocate_data()
 	 * populates FD into ld_imm64 insn when it's used to point to kfunc.
 	 * {kernel_btf_id, btf_fd_idx} -> fixup bpf_call.
 	 * {kernel_btf_id, kernel_btf_obj_fd} -> fixup ld_imm64.
@@ -7880,7 +7855,7 @@ static int bpf_object__resolve_ksym_func_btf_id(struct bpf_object *obj,
 	return 0;
 }
 
-static int bpf_object__resolve_ksyms_btf_id(struct bpf_object *obj)
+static int bpf_object_resolve_ksyms_btf_id(struct bpf_object *obj)
 {
 	const struct btf_type *t;
 	struct extern_desc *ext;
@@ -7899,17 +7874,16 @@ static int bpf_object__resolve_ksyms_btf_id(struct bpf_object *obj)
 		}
 		t = btf__type_by_id(obj->btf, ext->btf_id);
 		if (btf_is_var(t))
-			err = bpf_object__resolve_ksym_var_btf_id(obj, ext);
+			err = bpf_object_resolve_ksym_var_btf_id(obj, ext);
 		else
-			err = bpf_object__resolve_ksym_func_btf_id(obj, ext);
+			err = bpf_object_resolve_ksym_func_btf_id(obj, ext);
 		if (err)
 			return err;
 	}
 	return 0;
 }
 
-static int bpf_object__resolve_externs(struct bpf_object *obj,
-				       const char *extra_kconfig)
+static int bpf_object_resolve_externs(struct bpf_object *obj, const char *extra_kconfig)
 {
 	bool need_config = false, need_kallsyms = false;
 	bool need_vmlinux_btf = false;
@@ -7976,7 +7950,7 @@ static int bpf_object__resolve_externs(struct bpf_object *obj,
 		}
 	}
 	if (need_config && extra_kconfig) {
-		err = bpf_object__read_kconfig_mem(obj, extra_kconfig, kcfg_data);
+		err = bpf_object_read_kconfig_mem(obj, extra_kconfig, kcfg_data);
 		if (err)
 			return -EINVAL;
 		need_config = false;
@@ -7989,17 +7963,17 @@ static int bpf_object__resolve_externs(struct bpf_object *obj,
 		}
 	}
 	if (need_config) {
-		err = bpf_object__read_kconfig_file(obj, kcfg_data);
+		err = bpf_object_read_kconfig_file(obj, kcfg_data);
 		if (err)
 			return -EINVAL;
 	}
 	if (need_kallsyms) {
-		err = bpf_object__read_kallsyms_file(obj);
+		err = bpf_object_read_kallsyms_file(obj);
 		if (err)
 			return -EINVAL;
 	}
 	if (need_vmlinux_btf) {
-		err = bpf_object__resolve_ksyms_btf_id(obj);
+		err = bpf_object_resolve_ksyms_btf_id(obj);
 		if (err)
 			return -EINVAL;
 	}
@@ -8043,7 +8017,7 @@ static int bpf_object_prepare_struct_ops(struct bpf_object *obj)
 	int i;
 
 	for (i = 0; i < obj->nr_maps; i++)
-		if (bpf_map__is_struct_ops(&obj->maps[i]))
+		if (bpf_map_is_struct_ops(&obj->maps[i]))
 			bpf_map_prepare_vdata(&obj->maps[i]);
 
 	return 0;
@@ -8064,15 +8038,15 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 	if (obj->gen_loader)
 		bpf_gen__init(obj->gen_loader, extra_log_level, obj->nr_programs, obj->nr_maps);
 
-	err = bpf_object__probe_loading(obj);
-	err = err ? : bpf_object__load_vmlinux_btf(obj, false);
-	err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
-	err = err ? : bpf_object__sanitize_and_load_btf(obj);
-	err = err ? : bpf_object__sanitize_maps(obj);
-	err = err ? : bpf_object__init_kern_struct_ops_maps(obj);
-	err = err ? : bpf_object__create_maps(obj);
-	err = err ? : bpf_object__relocate(obj, obj->btf_custom_path ? : target_btf_path);
-	err = err ? : bpf_object__load_progs(obj, extra_log_level);
+	err = bpf_object_probe_loading(obj);
+	err = err ? : bpf_object_load_vmlinux_btf(obj, false);
+	err = err ? : bpf_object_resolve_externs(obj, obj->kconfig);
+	err = err ? : bpf_object_sanitize_and_load_btf(obj);
+	err = err ? : bpf_object_sanitize_maps(obj);
+	err = err ? : bpf_object_init_kern_struct_ops_maps(obj);
+	err = err ? : bpf_object_create_maps(obj);
+	err = err ? : bpf_object_relocate(obj, obj->btf_custom_path ? : target_btf_path);
+	err = err ? : bpf_object_load_progs(obj, extra_log_level);
 	err = err ? : bpf_object_init_prog_arrays(obj);
 	err = err ? : bpf_object_prepare_struct_ops(obj);
 
@@ -8530,10 +8504,10 @@ int bpf_object__unpin(struct bpf_object *obj, const char *path)
 	return 0;
 }
 
-static void bpf_map__destroy(struct bpf_map *map)
+static void bpf_map_destroy(struct bpf_map *map)
 {
 	if (map->inner_map) {
-		bpf_map__destroy(map->inner_map);
+		bpf_map_destroy(map->inner_map);
 		zfree(&map->inner_map);
 	}
 
@@ -8574,14 +8548,14 @@ void bpf_object__close(struct bpf_object *obj)
 	obj->usdt_man = NULL;
 
 	bpf_gen__free(obj->gen_loader);
-	bpf_object__elf_finish(obj);
+	bpf_object_elf_finish(obj);
 	bpf_object_unload(obj);
 	btf__free(obj->btf);
 	btf__free(obj->btf_vmlinux);
 	btf_ext__free(obj->btf_ext);
 
 	for (i = 0; i < obj->nr_maps; i++)
-		bpf_map__destroy(&obj->maps[i]);
+		bpf_map_destroy(&obj->maps[i]);
 
 	zfree(&obj->btf_custom_path);
 	zfree(&obj->kconfig);
@@ -8597,7 +8571,7 @@ void bpf_object__close(struct bpf_object *obj)
 
 	if (obj->programs && obj->nr_programs) {
 		for (i = 0; i < obj->nr_programs; i++)
-			bpf_program__exit(&obj->programs[i]);
+			bpf_program_exit(&obj->programs[i]);
 	}
 	zfree(&obj->programs);
 
@@ -8651,8 +8625,8 @@ int bpf_object__gen_loader(struct bpf_object *obj, struct gen_loader_opts *opts)
 }
 
 static struct bpf_program *
-__bpf_program__iter(const struct bpf_program *p, const struct bpf_object *obj,
-		    bool forward)
+bpf_object_prog_iter(const struct bpf_program *p, const struct bpf_object *obj,
+		     bool forward)
 {
 	size_t nr_programs = obj->nr_programs;
 	ssize_t idx;
@@ -8682,7 +8656,7 @@ bpf_object__next_program(const struct bpf_object *obj, struct bpf_program *prev)
 	struct bpf_program *prog = prev;
 
 	do {
-		prog = __bpf_program__iter(prog, obj, true);
+		prog = bpf_object_prog_iter(prog, obj, true);
 	} while (prog && prog_is_subprog(obj, prog));
 
 	return prog;
@@ -8694,7 +8668,7 @@ bpf_object__prev_program(const struct bpf_object *obj, struct bpf_program *next)
 	struct bpf_program *prog = next;
 
 	do {
-		prog = __bpf_program__iter(prog, obj, false);
+		prog = bpf_object_prog_iter(prog, obj, false);
 	} while (prog && prog_is_subprog(obj, prog));
 
 	return prog;
@@ -9250,7 +9224,7 @@ static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
 
 	for (i = 0; i < obj->nr_maps; i++) {
 		map = &obj->maps[i];
-		if (!bpf_map__is_struct_ops(map))
+		if (!bpf_map_is_struct_ops(map))
 			continue;
 		if (map->sec_idx == sec_idx &&
 		    map->sec_offset <= offset &&
@@ -9262,8 +9236,8 @@ static struct bpf_map *find_struct_ops_map_by_offset(struct bpf_object *obj,
 }
 
 /* Collect the reloc from ELF and populate the st_ops->progs[] */
-static int bpf_object__collect_st_ops_relos(struct bpf_object *obj,
-					    Elf64_Shdr *shdr, Elf_Data *data)
+static int bpf_object_collect_st_ops_relos(struct bpf_object *obj,
+					   Elf64_Shdr *shdr, Elf_Data *data)
 {
 	const struct btf_member *member;
 	struct bpf_struct_ops *st_ops;
@@ -9850,7 +9824,7 @@ int bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex)
 
 int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
 {
-	if (!bpf_map_type__is_map_in_map(map->def.type)) {
+	if (!bpf_map_type_is_map_in_map(map->def.type)) {
 		pr_warn("error: unsupported map type\n");
 		return libbpf_err(-EINVAL);
 	}
@@ -9859,7 +9833,7 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
 		return libbpf_err(-EINVAL);
 	}
 	if (map->inner_map) {
-		bpf_map__destroy(map->inner_map);
+		bpf_map_destroy(map->inner_map);
 		zfree(&map->inner_map);
 	}
 	map->inner_map_fd = fd;
@@ -9867,7 +9841,7 @@ int bpf_map__set_inner_map_fd(struct bpf_map *map, int fd)
 }
 
 static struct bpf_map *
-__bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
+bpf_object_iter_map(const struct bpf_map *m, const struct bpf_object *obj, int i)
 {
 	ssize_t idx;
 	struct bpf_map *s, *e;
@@ -9896,7 +9870,7 @@ bpf_object__next_map(const struct bpf_object *obj, const struct bpf_map *prev)
 	if (prev == NULL)
 		return obj->maps;
 
-	return __bpf_map__iter(prev, obj, 1);
+	return bpf_object_iter_map(prev, obj, 1);
 }
 
 struct bpf_map *
@@ -9908,7 +9882,7 @@ bpf_object__prev_map(const struct bpf_object *obj, const struct bpf_map *next)
 		return obj->maps + obj->nr_maps - 1;
 	}
 
-	return __bpf_map__iter(next, obj, -1);
+	return bpf_object_iter_map(next, obj, -1);
 }
 
 struct bpf_map *
@@ -10117,7 +10091,7 @@ const char *bpf_link__pin_path(const struct bpf_link *link)
 	return link->pin_path;
 }
 
-static int bpf_link__detach_fd(struct bpf_link *link)
+static int bpf_link_detach_fd(struct bpf_link *link)
 {
 	return libbpf_err_errno(close(link->fd));
 }
@@ -10139,7 +10113,7 @@ struct bpf_link *bpf_link__open(const char *path)
 		close(fd);
 		return libbpf_err_ptr(-ENOMEM);
 	}
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 	link->fd = fd;
 
 	link->pin_path = strdup(path);
@@ -11025,7 +10999,7 @@ bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
 		err = -ENOMEM;
 		goto error;
 	}
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 
 	prog_fd = bpf_program__fd(prog);
 	link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_KPROBE_MULTI, &lopts);
@@ -11483,7 +11457,7 @@ bpf_program__attach_uprobe_multi(const struct bpf_program *prog,
 		err = -ENOMEM;
 		goto error;
 	}
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 
 	prog_fd = bpf_program__fd(prog);
 	link_fd = bpf_link_create(prog_fd, 0, BPF_TRACE_UPROBE_MULTI, &lopts);
@@ -11936,7 +11910,7 @@ struct bpf_link *bpf_program__attach_raw_tracepoint(const struct bpf_program *pr
 	link = calloc(1, sizeof(*link));
 	if (!link)
 		return libbpf_err_ptr(-ENOMEM);
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 
 	pfd = bpf_raw_tracepoint_open(tp_name, prog_fd);
 	if (pfd < 0) {
@@ -11992,8 +11966,8 @@ static int attach_raw_tp(const struct bpf_program *prog, long cookie, struct bpf
 }
 
 /* Common logic for all BPF program types that attach to a btf_id */
-static struct bpf_link *bpf_program__attach_btf_id(const struct bpf_program *prog,
-						   const struct bpf_trace_opts *opts)
+static struct bpf_link *bpf_program_attach_btf_id(const struct bpf_program *prog,
+						  const struct bpf_trace_opts *opts)
 {
 	LIBBPF_OPTS(bpf_link_create_opts, link_opts);
 	char errmsg[STRERR_BUFSIZE];
@@ -12012,7 +11986,7 @@ static struct bpf_link *bpf_program__attach_btf_id(const struct bpf_program *pro
 	link = calloc(1, sizeof(*link));
 	if (!link)
 		return libbpf_err_ptr(-ENOMEM);
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 
 	/* libbpf is smart enough to redirect to BPF_RAW_TRACEPOINT_OPEN on old kernels */
 	link_opts.tracing.cookie = OPTS_GET(opts, cookie, 0);
@@ -12030,18 +12004,18 @@ static struct bpf_link *bpf_program__attach_btf_id(const struct bpf_program *pro
 
 struct bpf_link *bpf_program__attach_trace(const struct bpf_program *prog)
 {
-	return bpf_program__attach_btf_id(prog, NULL);
+	return bpf_program_attach_btf_id(prog, NULL);
 }
 
 struct bpf_link *bpf_program__attach_trace_opts(const struct bpf_program *prog,
 						const struct bpf_trace_opts *opts)
 {
-	return bpf_program__attach_btf_id(prog, opts);
+	return bpf_program_attach_btf_id(prog, opts);
 }
 
 struct bpf_link *bpf_program__attach_lsm(const struct bpf_program *prog)
 {
-	return bpf_program__attach_btf_id(prog, NULL);
+	return bpf_program_attach_btf_id(prog, NULL);
 }
 
 static int attach_trace(const struct bpf_program *prog, long cookie, struct bpf_link **link)
@@ -12075,7 +12049,7 @@ bpf_program_attach_fd(const struct bpf_program *prog,
 	link = calloc(1, sizeof(*link));
 	if (!link)
 		return libbpf_err_ptr(-ENOMEM);
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 
 	attach_type = bpf_program__expected_attach_type(prog);
 	link_fd = bpf_link_create(prog_fd, target_fd, attach_type, opts);
@@ -12240,7 +12214,7 @@ bpf_program__attach_iter(const struct bpf_program *prog,
 	link = calloc(1, sizeof(*link));
 	if (!link)
 		return libbpf_err_ptr(-ENOMEM);
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 
 	link_fd = bpf_link_create(prog_fd, target_fd, BPF_TRACE_ITER,
 				  &link_create_opts);
@@ -12281,7 +12255,7 @@ struct bpf_link *bpf_program__attach_netfilter(const struct bpf_program *prog,
 	if (!link)
 		return libbpf_err_ptr(-ENOMEM);
 
-	link->detach = &bpf_link__detach_fd;
+	link->detach = &bpf_link_detach_fd;
 
 	lopts.netfilter.pf = OPTS_GET(opts, pf, 0);
 	lopts.netfilter.hooknum = OPTS_GET(opts, hooknum, 0);
@@ -12331,7 +12305,7 @@ struct bpf_link_struct_ops {
 	int map_fd;
 };
 
-static int bpf_link__detach_struct_ops(struct bpf_link *link)
+static int bpf_link_detach_struct_ops(struct bpf_link *link)
 {
 	struct bpf_link_struct_ops *st_link;
 	__u32 zero = 0;
@@ -12351,7 +12325,7 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
 	__u32 zero = 0;
 	int err, fd;
 
-	if (!bpf_map__is_struct_ops(map) || map->fd == -1)
+	if (!bpf_map_is_struct_ops(map) || map->fd == -1)
 		return libbpf_err_ptr(-EINVAL);
 
 	link = calloc(1, sizeof(*link));
@@ -12370,7 +12344,7 @@ struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map)
 		return libbpf_err_ptr(err);
 	}
 
-	link->link.detach = bpf_link__detach_struct_ops;
+	link->link.detach = bpf_link_detach_struct_ops;
 
 	if (!(map->def.map_flags & BPF_F_LINK)) {
 		/* w/o a real link */
@@ -12400,7 +12374,7 @@ int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map)
 	__u32 zero = 0;
 	int err;
 
-	if (!bpf_map__is_struct_ops(map) || map->fd < 0)
+	if (!bpf_map_is_struct_ops(map) || map->fd < 0)
 		return -EINVAL;
 
 	st_ops_link = container_of(link, struct bpf_link_struct_ops, link);
@@ -12517,8 +12491,7 @@ struct perf_buffer {
 	int map_fd; /* BPF_MAP_TYPE_PERF_EVENT_ARRAY BPF map FD */
 };
 
-static void perf_buffer__free_cpu_buf(struct perf_buffer *pb,
-				      struct perf_cpu_buf *cpu_buf)
+static void perf_buffer_free_cpu_buf(struct perf_buffer *pb, struct perf_cpu_buf *cpu_buf)
 {
 	if (!cpu_buf)
 		return;
@@ -12547,7 +12520,7 @@ void perf_buffer__free(struct perf_buffer *pb)
 				continue;
 
 			bpf_map_delete_elem(pb->map_fd, &cpu_buf->map_key);
-			perf_buffer__free_cpu_buf(pb, cpu_buf);
+			perf_buffer_free_cpu_buf(pb, cpu_buf);
 		}
 		free(pb->cpu_bufs);
 	}
@@ -12558,8 +12531,8 @@ void perf_buffer__free(struct perf_buffer *pb)
 }
 
 static struct perf_cpu_buf *
-perf_buffer__open_cpu_buf(struct perf_buffer *pb, struct perf_event_attr *attr,
-			  int cpu, int map_key)
+perf_buffer_open_cpu_buf(struct perf_buffer *pb, struct perf_event_attr *attr,
+			 int cpu, int map_key)
 {
 	struct perf_cpu_buf *cpu_buf;
 	char msg[STRERR_BUFSIZE];
@@ -12603,12 +12576,12 @@ perf_buffer__open_cpu_buf(struct perf_buffer *pb, struct perf_event_attr *attr,
 	return cpu_buf;
 
 error:
-	perf_buffer__free_cpu_buf(pb, cpu_buf);
+	perf_buffer_free_cpu_buf(pb, cpu_buf);
 	return (struct perf_cpu_buf *)ERR_PTR(err);
 }
 
-static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
-					      struct perf_buffer_params *p);
+static struct perf_buffer *perf_buffer_new(int map_fd, size_t page_cnt,
+					   struct perf_buffer_params *p);
 
 struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt,
 				     perf_buffer_sample_fn sample_cb,
@@ -12641,7 +12614,7 @@ struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt,
 	p.lost_cb = lost_cb;
 	p.ctx = ctx;
 
-	return libbpf_ptr(__perf_buffer__new(map_fd, page_cnt, &p));
+	return libbpf_ptr(perf_buffer_new(map_fd, page_cnt, &p));
 }
 
 struct perf_buffer *perf_buffer__new_raw(int map_fd, size_t page_cnt,
@@ -12664,11 +12637,11 @@ struct perf_buffer *perf_buffer__new_raw(int map_fd, size_t page_cnt,
 	p.cpus = OPTS_GET(opts, cpus, NULL);
 	p.map_keys = OPTS_GET(opts, map_keys, NULL);
 
-	return libbpf_ptr(__perf_buffer__new(map_fd, page_cnt, &p));
+	return libbpf_ptr(perf_buffer_new(map_fd, page_cnt, &p));
 }
 
-static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
-					      struct perf_buffer_params *p)
+static struct perf_buffer *perf_buffer_new(int map_fd, size_t page_cnt,
+					   struct perf_buffer_params *p)
 {
 	const char *online_cpus_file = "/sys/devices/system/cpu/online";
 	struct bpf_map_info map;
@@ -12773,7 +12746,7 @@ static struct perf_buffer *__perf_buffer__new(int map_fd, size_t page_cnt,
 		if (p->cpu_cnt <= 0 && (cpu >= n || !online[cpu]))
 			continue;
 
-		cpu_buf = perf_buffer__open_cpu_buf(pb, p->attr, cpu, map_key);
+		cpu_buf = perf_buffer_open_cpu_buf(pb, p->attr, cpu, map_key);
 		if (IS_ERR(cpu_buf)) {
 			err = PTR_ERR(cpu_buf);
 			goto error;
@@ -12828,8 +12801,7 @@ struct perf_sample_lost {
 	uint64_t sample_id;
 };
 
-static enum bpf_perf_event_ret
-perf_buffer__process_record(struct perf_event_header *e, void *ctx)
+static enum bpf_perf_event_ret perf_buffer_process_record(struct perf_event_header *e, void *ctx)
 {
 	struct perf_cpu_buf *cpu_buf = ctx;
 	struct perf_buffer *pb = cpu_buf->pb;
@@ -12861,15 +12833,15 @@ perf_buffer__process_record(struct perf_event_header *e, void *ctx)
 	return LIBBPF_PERF_EVENT_CONT;
 }
 
-static int perf_buffer__process_records(struct perf_buffer *pb,
-					struct perf_cpu_buf *cpu_buf)
+static int perf_buffer_process_records(struct perf_buffer *pb,
+				       struct perf_cpu_buf *cpu_buf)
 {
 	enum bpf_perf_event_ret ret;
 
 	ret = perf_event_read_simple(cpu_buf->base, pb->mmap_size,
 				     pb->page_size, &cpu_buf->buf,
 				     &cpu_buf->buf_size,
-				     perf_buffer__process_record, cpu_buf);
+				     perf_buffer_process_record, cpu_buf);
 	if (ret != LIBBPF_PERF_EVENT_CONT)
 		return ret;
 	return 0;
@@ -12891,7 +12863,7 @@ int perf_buffer__poll(struct perf_buffer *pb, int timeout_ms)
 	for (i = 0; i < cnt; i++) {
 		struct perf_cpu_buf *cpu_buf = pb->events[i].data.ptr;
 
-		err = perf_buffer__process_records(pb, cpu_buf);
+		err = perf_buffer_process_records(pb, cpu_buf);
 		if (err) {
 			pr_warn("error while processing records: %d\n", err);
 			return libbpf_err(err);
@@ -12962,7 +12934,7 @@ int perf_buffer__consume_buffer(struct perf_buffer *pb, size_t buf_idx)
 	if (!cpu_buf)
 		return libbpf_err(-ENOENT);
 
-	return perf_buffer__process_records(pb, cpu_buf);
+	return perf_buffer_process_records(pb, cpu_buf);
 }
 
 int perf_buffer__consume(struct perf_buffer *pb)
@@ -12975,7 +12947,7 @@ int perf_buffer__consume(struct perf_buffer *pb)
 		if (!cpu_buf)
 			continue;
 
-		err = perf_buffer__process_records(pb, cpu_buf);
+		err = perf_buffer_process_records(pb, cpu_buf);
 		if (err) {
 			pr_warn("perf_buffer: failed to process records in buffer #%d: %d\n", i, err);
 			return libbpf_err(err);
@@ -12997,7 +12969,7 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
 		return libbpf_err(-EINVAL);
 
 	if (attach_prog_fd && !attach_func_name) {
-		/* remember attach_prog_fd and let bpf_program__load() find
+		/* remember attach_prog_fd and let bpf_object_load_prog() find
 		 * BTF ID during the program load
 		 */
 		prog->attach_prog_fd = attach_prog_fd;
@@ -13014,7 +12986,7 @@ int bpf_program__set_attach_target(struct bpf_program *prog,
 			return libbpf_err(-EINVAL);
 
 		/* load btf_vmlinux, if not yet */
-		err = bpf_object__load_vmlinux_btf(prog->obj, true);
+		err = bpf_object_load_vmlinux_btf(prog->obj, true);
 		if (err)
 			return libbpf_err(err);
 		err = find_kernel_btf_id(prog->obj, attach_func_name,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 2/9] libbpf: make uniform use of btf__fd() accessor inside libbpf
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 3/9] libbpf: use explicit map reuse flag to skip map creation steps Andrii Nakryiko
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

It makes future grepping and code analysis a bit easier.

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 8e7a50c1ce89..4d4951933836 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -7027,7 +7027,7 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
 	load_attr.prog_ifindex = prog->prog_ifindex;
 
 	/* specify func_info/line_info only if kernel supports them */
-	btf_fd = bpf_object__btf_fd(obj);
+	btf_fd = btf__fd(obj->btf);
 	if (btf_fd >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) {
 		load_attr.prog_btf_fd = btf_fd;
 		load_attr.func_info = prog->func_info;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 3/9] libbpf: use explicit map reuse flag to skip map creation steps
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 2/9] libbpf: make uniform use of btf__fd() accessor inside libbpf Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 4/9] libbpf: don't rely on map->fd as an indicator of map being created Andrii Nakryiko
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

Instead of inferring whether a map already points to a previously
created/pinned BPF map (which the user can specify through the
bpf_map__reuse_fd() API), use the explicit map->reused flag that is set
in such a case.
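
For context, a minimal sketch of the user-side flow that leads to
map->reused being set (the map name and pin path below are hypothetical):

  struct bpf_map *map = bpf_object__find_map_by_name(obj, "my_map");
  int pinned_fd = bpf_obj_get("/sys/fs/bpf/my_map"); /* hypothetical pin path */
  int err = bpf_map__reuse_fd(map, pinned_fd); /* sets map->reused on success */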

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 4d4951933836..a0cf57b59b6a 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5448,7 +5448,7 @@ static int bpf_object_create_maps(struct bpf_object *obj)
 			}
 		}
 
-		if (map->fd >= 0) {
+		if (map->reused) {
 			pr_debug("map '%s': skipping creation (preset fd=%d)\n",
 				 map->name, map->fd);
 		} else {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 4/9] libbpf: don't rely on map->fd as an indicator of map being created
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2024-01-02 19:00 ` [PATCH v2 bpf-next 3/9] libbpf: use explicit map reuse flag to skip map creation steps Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 5/9] libbpf: use stable map placeholder FDs Andrii Nakryiko
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

With the upcoming switch to preallocated placeholder FDs for maps,
switch various getters/setters away from checking map->fd. Use the
map_is_created() helper, which detects whether a BPF map can be modified
based on the map->obj->loaded state, with a special provision for maps
set up with bpf_map__reuse_fd().

For backwards compatibility, we take map_is_created() into account in
the bpf_map__fd() getter as well. This way, before the bpf_object__load()
phase bpf_map__fd() will always return -1, just as before the changes in
subsequent patches that add stable map->fd placeholders.

We also get rid of all internal uses of the bpf_map__fd() getter, as it's
oriented more towards uses external to libbpf. The above map_is_created()
check actually interferes with some of the internal uses if the map FD is
fetched through bpf_map__fd().
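
To illustrate the intended getter behavior (a sketch, not code from this
patch; "my_map" is a placeholder name):

  struct bpf_map *map = bpf_object__find_map_by_name(obj, "my_map");

  bpf_map__fd(map);       /* -1: object not loaded yet, map not reused */
  bpf_object__load(obj);
  bpf_map__fd(map);       /* >= 0: map has been created in the kernel */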

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 42 +++++++++++++++++++++++++++---------------
 1 file changed, 27 insertions(+), 15 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index a0cf57b59b6a..f29cfb344f80 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -5184,6 +5184,11 @@ static int bpf_object_populate_internal_map(struct bpf_object *obj, struct bpf_m
 
 static void bpf_map_destroy(struct bpf_map *map);
 
+static bool map_is_created(const struct bpf_map *map)
+{
+	return map->obj->loaded || map->reused;
+}
+
 static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bool is_inner)
 {
 	LIBBPF_OPTS(bpf_map_create_opts, create_attr);
@@ -5215,7 +5220,7 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
 					map->name, err);
 				return err;
 			}
-			map->inner_map_fd = bpf_map__fd(map->inner_map);
+			map->inner_map_fd = map->inner_map->fd;
 		}
 		if (map->inner_map_fd >= 0)
 			create_attr.inner_map_fd = map->inner_map_fd;
@@ -5298,7 +5303,7 @@ static int init_map_in_map_slots(struct bpf_object *obj, struct bpf_map *map)
 			continue;
 
 		targ_map = map->init_slots[i];
-		fd = bpf_map__fd(targ_map);
+		fd = targ_map->fd;
 
 		if (obj->gen_loader) {
 			bpf_gen__populate_outer_map(obj->gen_loader,
@@ -7112,7 +7117,7 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
 				if (map->libbpf_type != LIBBPF_MAP_RODATA)
 					continue;
 
-				if (bpf_prog_bind_map(ret, bpf_map__fd(map), NULL)) {
+				if (bpf_prog_bind_map(ret, map->fd, NULL)) {
 					cp = libbpf_strerror_r(errno, errmsg, sizeof(errmsg));
 					pr_warn("prog '%s': failed to bind map '%s': %s\n",
 						prog->name, map->real_name, cp);
@@ -9575,7 +9580,11 @@ int libbpf_attach_type_by_name(const char *name,
 
 int bpf_map__fd(const struct bpf_map *map)
 {
-	return map ? map->fd : libbpf_err(-EINVAL);
+	if (!map)
+		return libbpf_err(-EINVAL);
+	if (!map_is_created(map))
+		return -1;
+	return map->fd;
 }
 
 static bool map_uses_real_name(const struct bpf_map *map)
@@ -9611,7 +9620,7 @@ enum bpf_map_type bpf_map__type(const struct bpf_map *map)
 
 int bpf_map__set_type(struct bpf_map *map, enum bpf_map_type type)
 {
-	if (map->fd >= 0)
+	if (map_is_created(map))
 		return libbpf_err(-EBUSY);
 	map->def.type = type;
 	return 0;
@@ -9624,7 +9633,7 @@ __u32 bpf_map__map_flags(const struct bpf_map *map)
 
 int bpf_map__set_map_flags(struct bpf_map *map, __u32 flags)
 {
-	if (map->fd >= 0)
+	if (map_is_created(map))
 		return libbpf_err(-EBUSY);
 	map->def.map_flags = flags;
 	return 0;
@@ -9637,7 +9646,7 @@ __u64 bpf_map__map_extra(const struct bpf_map *map)
 
 int bpf_map__set_map_extra(struct bpf_map *map, __u64 map_extra)
 {
-	if (map->fd >= 0)
+	if (map_is_created(map))
 		return libbpf_err(-EBUSY);
 	map->map_extra = map_extra;
 	return 0;
@@ -9650,7 +9659,7 @@ __u32 bpf_map__numa_node(const struct bpf_map *map)
 
 int bpf_map__set_numa_node(struct bpf_map *map, __u32 numa_node)
 {
-	if (map->fd >= 0)
+	if (map_is_created(map))
 		return libbpf_err(-EBUSY);
 	map->numa_node = numa_node;
 	return 0;
@@ -9663,7 +9672,7 @@ __u32 bpf_map__key_size(const struct bpf_map *map)
 
 int bpf_map__set_key_size(struct bpf_map *map, __u32 size)
 {
-	if (map->fd >= 0)
+	if (map_is_created(map))
 		return libbpf_err(-EBUSY);
 	map->def.key_size = size;
 	return 0;
@@ -9747,7 +9756,7 @@ static int map_btf_datasec_resize(struct bpf_map *map, __u32 size)
 
 int bpf_map__set_value_size(struct bpf_map *map, __u32 size)
 {
-	if (map->fd >= 0)
+	if (map->obj->loaded || map->reused)
 		return libbpf_err(-EBUSY);
 
 	if (map->mmaped) {
@@ -9788,8 +9797,11 @@ __u32 bpf_map__btf_value_type_id(const struct bpf_map *map)
 int bpf_map__set_initial_value(struct bpf_map *map,
 			       const void *data, size_t size)
 {
+	if (map->obj->loaded || map->reused)
+		return libbpf_err(-EBUSY);
+
 	if (!map->mmaped || map->libbpf_type == LIBBPF_MAP_KCONFIG ||
-	    size != map->def.value_size || map->fd >= 0)
+	    size != map->def.value_size)
 		return libbpf_err(-EINVAL);
 
 	memcpy(map->mmaped, data, size);
@@ -9816,7 +9828,7 @@ __u32 bpf_map__ifindex(const struct bpf_map *map)
 
 int bpf_map__set_ifindex(struct bpf_map *map, __u32 ifindex)
 {
-	if (map->fd >= 0)
+	if (map_is_created(map))
 		return libbpf_err(-EBUSY);
 	map->map_ifindex = ifindex;
 	return 0;
@@ -9921,7 +9933,7 @@ bpf_object__find_map_fd_by_name(const struct bpf_object *obj, const char *name)
 static int validate_map_op(const struct bpf_map *map, size_t key_sz,
 			   size_t value_sz, bool check_value_sz)
 {
-	if (map->fd <= 0)
+	if (!map_is_created(map)) /* map is not yet created */
 		return -ENOENT;
 
 	if (map->def.key_size != key_sz) {
@@ -12374,7 +12386,7 @@ int bpf_link__update_map(struct bpf_link *link, const struct bpf_map *map)
 	__u32 zero = 0;
 	int err;
 
-	if (!bpf_map_is_struct_ops(map) || map->fd < 0)
+	if (!bpf_map_is_struct_ops(map) || !map_is_created(map))
 		return -EINVAL;
 
 	st_ops_link = container_of(link, struct bpf_link_struct_ops, link);
@@ -13276,7 +13288,7 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s)
 	for (i = 0; i < s->map_cnt; i++) {
 		struct bpf_map *map = *s->maps[i].map;
 		size_t mmap_sz = bpf_map_mmap_sz(map->def.value_size, map->def.max_entries);
-		int prot, map_fd = bpf_map__fd(map);
+		int prot, map_fd = map->fd;
 		void **mmaped = s->maps[i].mmaped;
 
 		if (!mmaped)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 5/9] libbpf: use stable map placeholder FDs
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (3 preceding siblings ...)
  2024-01-02 19:00 ` [PATCH v2 bpf-next 4/9] libbpf: don't rely on map->fd as an indicator of map being created Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-03 20:57   ` Eduard Zingerman
  2024-01-02 19:00 ` [PATCH v2 bpf-next 6/9] libbpf: move exception callbacks assignment logic into relocation step Andrii Nakryiko
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

Move map creation to later in the BPF object loading phase by
pre-creating stable placeholder FDs (initially pointing to /dev/null).
Then use the dup2() syscall to atomically make those placeholder FDs
point to real kernel BPF map objects.
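
The underlying FD trick, as a minimal standalone sketch (error handling
omitted; the map parameters are arbitrary):

  int placeholder = open("/dev/null", O_WRONLY | O_CLOEXEC); /* stable FD value */

  /* ... relocations can already embed `placeholder` into ldimm64 ... */

  int real_fd = bpf_map_create(BPF_MAP_TYPE_ARRAY, "m", 4, 4, 1, NULL);
  dup2(real_fd, placeholder); /* placeholder now refers to the real map */
  close(real_fd);             /* the stable FD value remains valid */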

This change allows delaying BPF map creation until after all the BPF
program relocations. That, in turn, allows delaying BTF finalization and
loading into the kernel until after all the relocations as well. We'll
take advantage of the latter in subsequent patches to allow libbpf to
adjust BTF in a way that helps with BPF global function usage.

Clean up a few places where we close map->fd, which now shouldn't
happen, because map->fd should be a valid FD regardless of whether the
map was created or not. Surprisingly and nicely, this simplifies a bunch
of error handling code. If this change doesn't backfire, I'm tempted to
pre-create such stable FDs for other entities (progs, maybe even BTF).
We previously did some manipulations to make gen_loader work with fake
map FDs; with stable map FDs this hack is not necessary for maps (we
still have it for BTF, but I left that as is for now).

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c          | 91 +++++++++++++++++++--------------
 tools/lib/bpf/libbpf_internal.h | 24 +++++++++
 2 files changed, 77 insertions(+), 38 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index f29cfb344f80..e0085aef17d7 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -1510,7 +1510,21 @@ static struct bpf_map *bpf_object_add_map(struct bpf_object *obj)
 
 	map = &obj->maps[obj->nr_maps++];
 	map->obj = obj;
-	map->fd = -1;
+	/* Preallocate map FD without actually creating BPF map just yet.
+	 * These map FD "placeholders" will be reused later without changing
+	 * FD value when map is actually created in the kernel.
+	 *
+	 * This is useful to be able to perform BPF program relocations
+	 * without having to create BPF maps before that step. This allows us
+	 * to finalize and load BTF very late in BPF object's loading phase,
+	 * right before BPF maps have to be created and BPF programs have to
+	 * be loaded. By having these map FD placeholders we can perform all
+	 * the sanitizations, relocations, and any other adjustments before we
+	 * start creating actual BPF kernel objects (BTF, maps, progs).
+	 */
+	map->fd = create_placeholder_fd();
+	if (map->fd < 0)
+		return ERR_PTR(map->fd);
 	map->inner_map_fd = -1;
 	map->autocreate = true;
 
@@ -2600,7 +2614,9 @@ static int bpf_object_init_user_btf_map(struct bpf_object *obj,
 		map->inner_map = calloc(1, sizeof(*map->inner_map));
 		if (!map->inner_map)
 			return -ENOMEM;
-		map->inner_map->fd = -1;
+		map->inner_map->fd = create_placeholder_fd();
+		if (map->inner_map->fd < 0)
+			return map->inner_map->fd;
 		map->inner_map->sec_idx = sec_idx;
 		map->inner_map->name = malloc(strlen(map_name) + sizeof(".inner") + 1);
 		if (!map->inner_map->name)
@@ -4536,14 +4552,12 @@ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
 		goto err_free_new_name;
 	}
 
-	err = zclose(map->fd);
-	if (err) {
-		err = -errno;
-		goto err_close_new_fd;
-	}
+	err = reuse_fd(map->fd, new_fd);
+	if (err)
+		goto err_free_new_name;
+
 	free(map->name);
 
-	map->fd = new_fd;
 	map->name = new_name;
 	map->def.type = info.type;
 	map->def.key_size = info.key_size;
@@ -4557,8 +4571,6 @@ int bpf_map__reuse_fd(struct bpf_map *map, int fd)
 
 	return 0;
 
-err_close_new_fd:
-	close(new_fd);
 err_free_new_name:
 	free(new_name);
 	return libbpf_err(err);
@@ -5194,7 +5206,7 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
 	LIBBPF_OPTS(bpf_map_create_opts, create_attr);
 	struct bpf_map_def *def = &map->def;
 	const char *map_name = NULL;
-	int err = 0;
+	int err = 0, map_fd;
 
 	if (kernel_supports(obj, FEAT_PROG_NAME))
 		map_name = map->name;
@@ -5253,17 +5265,19 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
 		bpf_gen__map_create(obj->gen_loader, def->type, map_name,
 				    def->key_size, def->value_size, def->max_entries,
 				    &create_attr, is_inner ? -1 : map - obj->maps);
-		/* Pretend to have valid FD to pass various fd >= 0 checks.
-		 * This fd == 0 will not be used with any syscall and will be reset to -1 eventually.
+		/* We keep pretending we have a valid FD to pass various fd >= 0
+		 * checks by just keeping original placeholder FDs in place.
+		 * See bpf_object_prepare_maps() comments.
+		 * This placeholder fd will not be used with any syscall and
+		 * will be reset to -1 eventually.
 		 */
-		map->fd = 0;
+		map_fd = map->fd;
 	} else {
-		map->fd = bpf_map_create(def->type, map_name,
-					 def->key_size, def->value_size,
-					 def->max_entries, &create_attr);
+		map_fd = bpf_map_create(def->type, map_name,
+					def->key_size, def->value_size,
+					def->max_entries, &create_attr);
 	}
-	if (map->fd < 0 && (create_attr.btf_key_type_id ||
-			    create_attr.btf_value_type_id)) {
+	if (map_fd < 0 && (create_attr.btf_key_type_id || create_attr.btf_value_type_id)) {
 		char *cp, errmsg[STRERR_BUFSIZE];
 
 		err = -errno;
@@ -5275,13 +5289,11 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
 		create_attr.btf_value_type_id = 0;
 		map->btf_key_type_id = 0;
 		map->btf_value_type_id = 0;
-		map->fd = bpf_map_create(def->type, map_name,
-					 def->key_size, def->value_size,
-					 def->max_entries, &create_attr);
+		map_fd = bpf_map_create(def->type, map_name,
+					def->key_size, def->value_size,
+					def->max_entries, &create_attr);
 	}
 
-	err = map->fd < 0 ? -errno : 0;
-
 	if (bpf_map_type_is_map_in_map(def->type) && map->inner_map) {
 		if (obj->gen_loader)
 			map->inner_map->fd = -1;
@@ -5289,7 +5301,19 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
 		zfree(&map->inner_map);
 	}
 
-	return err;
+	if (map_fd < 0)
+		return -errno;
+
+	/* obj->gen_loader case, prevent reuse_fd() from closing map_fd */
+	if (map->fd == map_fd)
+		return 0;
+
+	/* Keep placeholder FD value but now point it to the BPF map object.
+	 * This way everything that relied on this map's FD (e.g., relocated
+	 * ldimm64 instructions) will stay valid and won't need adjustments.
+	 * map->fd stays valid but now points to what map_fd points to.
+	 */
+	return reuse_fd(map->fd, map_fd);
 }
 
 static int init_map_in_map_slots(struct bpf_object *obj, struct bpf_map *map)
@@ -5373,10 +5397,8 @@ static int bpf_object_init_prog_arrays(struct bpf_object *obj)
 			continue;
 
 		err = init_prog_array_slots(obj, map);
-		if (err < 0) {
-			zclose(map->fd);
+		if (err < 0)
 			return err;
-		}
 	}
 	return 0;
 }
@@ -5466,25 +5488,20 @@ static int bpf_object_create_maps(struct bpf_object *obj)
 
 			if (bpf_map__is_internal(map)) {
 				err = bpf_object_populate_internal_map(obj, map);
-				if (err < 0) {
-					zclose(map->fd);
+				if (err < 0)
 					goto err_out;
-				}
 			}
 
 			if (map->init_slots_sz && map->def.type != BPF_MAP_TYPE_PROG_ARRAY) {
 				err = init_map_in_map_slots(obj, map);
-				if (err < 0) {
-					zclose(map->fd);
+				if (err < 0)
 					goto err_out;
-				}
 			}
 		}
 
 		if (map->pin_path && !map->pinned) {
 			err = bpf_map__pin(map, NULL);
 			if (err) {
-				zclose(map->fd);
 				if (!retried && err == -EEXIST) {
 					retried = true;
 					goto retry;
@@ -8049,8 +8066,8 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 	err = err ? : bpf_object_sanitize_and_load_btf(obj);
 	err = err ? : bpf_object_sanitize_maps(obj);
 	err = err ? : bpf_object_init_kern_struct_ops_maps(obj);
-	err = err ? : bpf_object_create_maps(obj);
 	err = err ? : bpf_object_relocate(obj, obj->btf_custom_path ? : target_btf_path);
+	err = err ? : bpf_object_create_maps(obj);
 	err = err ? : bpf_object_load_progs(obj, extra_log_level);
 	err = err ? : bpf_object_init_prog_arrays(obj);
 	err = err ? : bpf_object_prepare_struct_ops(obj);
@@ -8059,8 +8076,6 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 		/* reset FDs */
 		if (obj->btf)
 			btf__set_fd(obj->btf, -1);
-		for (i = 0; i < obj->nr_maps; i++)
-			obj->maps[i].fd = -1;
 		if (!err)
 			err = bpf_gen__finish(obj->gen_loader, obj->nr_programs, obj->nr_maps);
 	}
diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
index b5d334754e5d..662a3df1e29f 100644
--- a/tools/lib/bpf/libbpf_internal.h
+++ b/tools/lib/bpf/libbpf_internal.h
@@ -555,6 +555,30 @@ static inline int ensure_good_fd(int fd)
 	return fd;
 }
 
+static inline int create_placeholder_fd(void)
+{
+	int fd;
+
+	fd = ensure_good_fd(open("/dev/null", O_WRONLY | O_CLOEXEC));
+	if (fd < 0)
+		return -errno;
+	return fd;
+}
+
+/* Point *fixed_fd* to the same file that *tmp_fd* points to.
+ * Regardless of success, *tmp_fd* is closed.
+ * Whatever *fixed_fd* pointed to is closed silently.
+ */
+static inline int reuse_fd(int fixed_fd, int tmp_fd)
+{
+	int err;
+
+	err = dup2(tmp_fd, fixed_fd);
+	err = err < 0 ? -errno : 0;
+	close(tmp_fd); /* clean up temporary FD */
+	return err;
+}
+
 /* The following two functions are exposed to bpftool */
 int bpf_core_add_cands(struct bpf_core_cand *local_cand,
 		       size_t local_essent_len,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 6/9] libbpf: move exception callbacks assignment logic into relocation step
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (4 preceding siblings ...)
  2024-01-02 19:00 ` [PATCH v2 bpf-next 5/9] libbpf: use stable map placeholder FDs Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 7/9] libbpf: move BTF loading step after " Andrii Nakryiko
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

Move the logic of finding and assigning exception callback indices from
the BTF sanitization step to the program relocation step, which seems
more logical and will unblock moving BTF loading to after the relocation
step.

Exception callback discovery and assignment has no dependency on BTF
being loaded into the kernel; it only uses BTF information. It does need
to happen before subprogram relocations happen, though, which is why the
split.
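
For reference, the decl tag this logic looks for is attached to the main
program roughly like so (a sketch modeled on the selftests'
__exception_cb() macro; the program and callback names are illustrative):

  #define __exception_cb(name) \
          __attribute__((btf_decl_tag("exception_callback:" #name)))

  __noinline int my_exc_cb(u64 cookie) { return 0; } /* global, non-hidden */

  SEC("tc")
  __exception_cb(my_exc_cb)
  int main_prog(struct __sk_buff *ctx) { /* ... */ return 0; }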

No functional changes.

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 165 +++++++++++++++++++++--------------------
 1 file changed, 85 insertions(+), 80 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index e0085aef17d7..d44aea1cc428 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3172,86 +3172,6 @@ static int bpf_object_sanitize_and_load_btf(struct bpf_object *obj)
 		}
 	}
 
-	if (!kernel_supports(obj, FEAT_BTF_DECL_TAG))
-		goto skip_exception_cb;
-	for (i = 0; i < obj->nr_programs; i++) {
-		struct bpf_program *prog = &obj->programs[i];
-		int j, k, n;
-
-		if (prog_is_subprog(obj, prog))
-			continue;
-		n = btf__type_cnt(obj->btf);
-		for (j = 1; j < n; j++) {
-			const char *str = "exception_callback:", *name;
-			size_t len = strlen(str);
-			struct btf_type *t;
-
-			t = btf_type_by_id(obj->btf, j);
-			if (!btf_is_decl_tag(t) || btf_decl_tag(t)->component_idx != -1)
-				continue;
-
-			name = btf__str_by_offset(obj->btf, t->name_off);
-			if (strncmp(name, str, len))
-				continue;
-
-			t = btf_type_by_id(obj->btf, t->type);
-			if (!btf_is_func(t) || btf_func_linkage(t) != BTF_FUNC_GLOBAL) {
-				pr_warn("prog '%s': exception_callback:<value> decl tag not applied to the main program\n",
-					prog->name);
-				return -EINVAL;
-			}
-			if (strcmp(prog->name, btf__str_by_offset(obj->btf, t->name_off)))
-				continue;
-			/* Multiple callbacks are specified for the same prog,
-			 * the verifier will eventually return an error for this
-			 * case, hence simply skip appending a subprog.
-			 */
-			if (prog->exception_cb_idx >= 0) {
-				prog->exception_cb_idx = -1;
-				break;
-			}
-
-			name += len;
-			if (str_is_empty(name)) {
-				pr_warn("prog '%s': exception_callback:<value> decl tag contains empty value\n",
-					prog->name);
-				return -EINVAL;
-			}
-
-			for (k = 0; k < obj->nr_programs; k++) {
-				struct bpf_program *subprog = &obj->programs[k];
-
-				if (!prog_is_subprog(obj, subprog))
-					continue;
-				if (strcmp(name, subprog->name))
-					continue;
-				/* Enforce non-hidden, as from verifier point of
-				 * view it expects global functions, whereas the
-				 * mark_btf_static fixes up linkage as static.
-				 */
-				if (!subprog->sym_global || subprog->mark_btf_static) {
-					pr_warn("prog '%s': exception callback %s must be a global non-hidden function\n",
-						prog->name, subprog->name);
-					return -EINVAL;
-				}
-				/* Let's see if we already saw a static exception callback with the same name */
-				if (prog->exception_cb_idx >= 0) {
-					pr_warn("prog '%s': multiple subprogs with same name as exception callback '%s'\n",
-					        prog->name, subprog->name);
-					return -EINVAL;
-				}
-				prog->exception_cb_idx = k;
-				break;
-			}
-
-			if (prog->exception_cb_idx >= 0)
-				continue;
-			pr_warn("prog '%s': cannot find exception callback '%s'\n", prog->name, name);
-			return -ENOENT;
-		}
-	}
-skip_exception_cb:
-
 	sanitize = btf_needs_sanitization(obj);
 	if (sanitize) {
 		const void *raw_data;
@@ -6628,6 +6548,88 @@ static void bpf_object_sort_relos(struct bpf_object *obj)
 	}
 }
 
+static int bpf_prog_assign_exc_cb(struct bpf_object *obj, struct bpf_program *prog)
+{
+	const char *str = "exception_callback:";
+	size_t pfx_len = strlen(str);
+	int i, j, n;
+
+	if (!obj->btf || !kernel_supports(obj, FEAT_BTF_DECL_TAG))
+		return 0;
+
+	n = btf__type_cnt(obj->btf);
+	for (i = 1; i < n; i++) {
+		const char *name;
+		struct btf_type *t;
+
+		t = btf_type_by_id(obj->btf, i);
+		if (!btf_is_decl_tag(t) || btf_decl_tag(t)->component_idx != -1)
+			continue;
+
+		name = btf__str_by_offset(obj->btf, t->name_off);
+		if (strncmp(name, str, pfx_len) != 0)
+			continue;
+
+		t = btf_type_by_id(obj->btf, t->type);
+		if (!btf_is_func(t) || btf_func_linkage(t) != BTF_FUNC_GLOBAL) {
+			pr_warn("prog '%s': exception_callback:<value> decl tag not applied to the main program\n",
+				prog->name);
+			return -EINVAL;
+		}
+		if (strcmp(prog->name, btf__str_by_offset(obj->btf, t->name_off)) != 0)
+			continue;
+		/* Multiple callbacks are specified for the same prog,
+		 * the verifier will eventually return an error for this
+		 * case, hence simply skip appending a subprog.
+		 */
+		if (prog->exception_cb_idx >= 0) {
+			prog->exception_cb_idx = -1;
+			break;
+		}
+
+		name += pfx_len;
+		if (str_is_empty(name)) {
+			pr_warn("prog '%s': exception_callback:<value> decl tag contains empty value\n",
+				prog->name);
+			return -EINVAL;
+		}
+
+		for (j = 0; j < obj->nr_programs; j++) {
+			struct bpf_program *subprog = &obj->programs[j];
+
+			if (!prog_is_subprog(obj, subprog))
+				continue;
+			if (strcmp(name, subprog->name) != 0)
+				continue;
+			/* Enforce non-hidden, as from verifier point of
+			 * view it expects global functions, whereas the
+			 * mark_btf_static fixes up linkage as static.
+			 */
+			if (!subprog->sym_global || subprog->mark_btf_static) {
+				pr_warn("prog '%s': exception callback %s must be a global non-hidden function\n",
+					prog->name, subprog->name);
+				return -EINVAL;
+			}
+			/* Let's see if we already saw a static exception callback with the same name */
+			if (prog->exception_cb_idx >= 0) {
+				pr_warn("prog '%s': multiple subprogs with same name as exception callback '%s'\n",
+					prog->name, subprog->name);
+				return -EINVAL;
+			}
+			prog->exception_cb_idx = j;
+			break;
+		}
+
+		if (prog->exception_cb_idx >= 0)
+			continue;
+
+		pr_warn("prog '%s': cannot find exception callback '%s'\n", prog->name, name);
+		return -ENOENT;
+	}
+
+	return 0;
+}
+
 static int
 bpf_object_relocate(struct bpf_object *obj, const char *targ_btf_path)
 {
@@ -6688,6 +6690,9 @@ bpf_object_relocate(struct bpf_object *obj, const char *targ_btf_path)
 			return err;
 		}
 
+		err = bpf_prog_assign_exc_cb(obj, prog);
+		if (err)
+			return err;
 		/* Now, also append exception callback if it has not been done already. */
 		if (prog->exception_cb_idx >= 0) {
 			struct bpf_program *subprog = &obj->programs[prog->exception_cb_idx];
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 7/9] libbpf: move BTF loading step after relocation step
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (5 preceding siblings ...)
  2024-01-02 19:00 ` [PATCH v2 bpf-next 6/9] libbpf: move exception callbacks assignment logic into relocation step Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-02 19:00 ` [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic Andrii Nakryiko
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

With all the preparations in previous patches done we are ready to
postpone BTF loading and sanitization step until after all the
relocations are performed.

While at it, simplify the step name from bpf_object_sanitize_and_load_btf
to bpf_object_load_btf. The map creation and program loading steps also
perform sanitization, but they don't explicitly reflect it in an overly
verbose function name, so keep the naming and approach consistent here.

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index d44aea1cc428..d5f5c1a8f543 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3122,7 +3122,7 @@ static int bpf_object_load_vmlinux_btf(struct bpf_object *obj, bool force)
 	return 0;
 }
 
-static int bpf_object_sanitize_and_load_btf(struct bpf_object *obj)
+static int bpf_object_load_btf(struct bpf_object *obj)
 {
 	struct btf *kern_btf = obj->btf;
 	bool btf_mandatory, sanitize;
@@ -8068,10 +8068,10 @@ static int bpf_object_load(struct bpf_object *obj, int extra_log_level, const ch
 	err = bpf_object_probe_loading(obj);
 	err = err ? : bpf_object_load_vmlinux_btf(obj, false);
 	err = err ? : bpf_object_resolve_externs(obj, obj->kconfig);
-	err = err ? : bpf_object_sanitize_and_load_btf(obj);
 	err = err ? : bpf_object_sanitize_maps(obj);
 	err = err ? : bpf_object_init_kern_struct_ops_maps(obj);
 	err = err ? : bpf_object_relocate(obj, obj->btf_custom_path ? : target_btf_path);
+	err = err ? : bpf_object_load_btf(obj);
 	err = err ? : bpf_object_create_maps(obj);
 	err = err ? : bpf_object_load_progs(obj, extra_log_level);
 	err = err ? : bpf_object_init_prog_arrays(obj);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (6 preceding siblings ...)
  2024-01-02 19:00 ` [PATCH v2 bpf-next 7/9] libbpf: move BTF loading step after " Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-03 20:57   ` Eduard Zingerman
  2024-01-02 19:00 ` [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests Andrii Nakryiko
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

Out of all the special global func arg tag annotations, __arg_ctx is
practically the most immediately useful and the most critical to have
working across a multitude of kernel versions, if possible. This would
allow end users to write much simpler code if __arg_ctx semantics worked
for older kernels that don't natively understand btf_decl_tag("arg:ctx")
in verifier logic.

Luckily, it is possible to ensure __arg_ctx works on old kernels through
a bit of extra work done by libbpf, at least in a lot of common cases.

To explain the overall idea, we need to go back to how context arguments
were supported in global funcs before __arg_ctx support was added. This
was done based on special struct name checks in the kernel. E.g., for
BPF_PROG_TYPE_PERF_EVENT the expectation is that an argument of type
`struct bpf_perf_event_data *` marks that argument as PTR_TO_CTX. This
is all good as long as a global function is used from a single BPF
program type only, which is often not the case. If the same subprog has
to be called from, say, kprobe and perf_event program types, there is no
single definition that would satisfy the BPF verifier. The subprog will
have a context argument suitable either for kprobe (if using the
bpf_user_pt_regs_t struct name) or for perf_event (with the
bpf_perf_event_data struct name), but not both (see the sketch below).
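
For illustration (a sketch; `subprog` is a placeholder name, and only
one of the two variants can exist at a time):

  /* usable only from perf_event programs: */
  __noinline int subprog(struct bpf_perf_event_data *ctx) { ... }

  /* usable only from kprobe programs: */
  __noinline int subprog(bpf_user_pt_regs_t *ctx) { ... }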

This limitation was the reason to add btf_decl_tag("arg:ctx"), making
the actual argument type unimportant, so that the user can just define
a "generic" signature:

  __noinline int global_subprog(void *ctx __arg_ctx) { ... }

I won't belabor how libbpf implements subprograms; see the huge comment
next to the bpf_object_relocate_calls() function. The idea is that each
main/entry BPF program gets its own copy of the global_subprog's code
appended.

This per-program copy of the global subprog's code *and* the associated
func_info .BTF.ext information, pointing to a FUNC -> FUNC_PROTO BTF
type chain, allows libbpf to simulate __arg_ctx behavior transparently,
even if the kernel doesn't yet support the __arg_ctx annotation natively.

The idea is straightforward: each time we append a global subprog's code
and func_info information, we adjust its FUNC -> FUNC_PROTO type
information, if necessary (that is, libbpf can detect the presence of
btf_decl_tag("arg:ctx") just like the BPF verifier would).

The rest is just mechanical and somewhat painful BTF manipulation code.
It's painful because we need to clone FUNC -> FUNC_PROTO, instead of
reusing it, as the same FUNC -> FUNC_PROTO chain might be used by
another main BPF program within the same BPF object, so we can't just
modify it in place (and cloning BTF types within the same struct btf
object is painful due to constant memory invalidation; see comments in
the code). The uploaded BPF object's BTF information has to work for all
BPF programs at the same time.

Once we have FUNC -> FUNC_PROTO clones, we make sure that instead of
using some `void *ctx` parameter definition, we have the expected
`struct bpf_perf_event_data *ctx` definition (as far as the BPF verifier
and kernel are concerned), which will mark it as context for the BPF
verifier. The same global subprog relocated and copied into another main
BPF program will get different type information according to that main
program's type. It all works out in the end in a completely transparent
way for the end user.
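
Schematically, for a perf_event entry program the adjustment looks
roughly like this (an illustrative sketch of the BTF type chains, not
actual BTF dump output):

  /* before (shared by all entry progs):
   *   FUNC(subprog) -> FUNC_PROTO(void *ctx)
   * after (per-entry-program clone):
   *   FUNC(subprog) -> FUNC_PROTO(struct bpf_perf_event_data *ctx)
   *                    ^ param now points to a PTR -> STRUCT chain
   */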

Libbpf maintains a program type -> expected context struct name mapping
internally. Note that not all BPF program types have a named context
struct, so this approach won't work for such programs (just as it didn't
before __arg_ctx). So native __arg_ctx is still important to have in the
kernel for generic context support across all BPF program types.

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 257 ++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 251 insertions(+), 6 deletions(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index d5f5c1a8f543..edd80fd2728e 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -6152,7 +6152,7 @@ reloc_prog_func_and_line_info(const struct bpf_object *obj,
 	int err;
 
 	/* no .BTF.ext relocation if .BTF.ext is missing or kernel doesn't
-	 * supprot func/line info
+	 * support func/line info
 	 */
 	if (!obj->btf_ext || !kernel_supports(obj, FEAT_BTF_FUNC))
 		return 0;
@@ -6630,8 +6630,245 @@ static int bpf_prog_assign_exc_cb(struct bpf_object *obj, struct bpf_program *pr
 	return 0;
 }
 
-static int
-bpf_object_relocate(struct bpf_object *obj, const char *targ_btf_path)
+static struct {
+	enum bpf_prog_type prog_type;
+	const char *ctx_name;
+} global_ctx_map[] = {
+	{ BPF_PROG_TYPE_CGROUP_DEVICE,           "bpf_cgroup_dev_ctx" },
+	{ BPF_PROG_TYPE_CGROUP_SKB,              "__sk_buff" },
+	{ BPF_PROG_TYPE_CGROUP_SOCK,             "bpf_sock" },
+	{ BPF_PROG_TYPE_CGROUP_SOCK_ADDR,        "bpf_sock_addr" },
+	{ BPF_PROG_TYPE_CGROUP_SOCKOPT,          "bpf_sockopt" },
+	{ BPF_PROG_TYPE_CGROUP_SYSCTL,           "bpf_sysctl" },
+	{ BPF_PROG_TYPE_FLOW_DISSECTOR,          "__sk_buff" },
+	{ BPF_PROG_TYPE_KPROBE,                  "bpf_user_pt_regs_t" },
+	{ BPF_PROG_TYPE_LWT_IN,                  "__sk_buff" },
+	{ BPF_PROG_TYPE_LWT_OUT,                 "__sk_buff" },
+	{ BPF_PROG_TYPE_LWT_SEG6LOCAL,           "__sk_buff" },
+	{ BPF_PROG_TYPE_LWT_XMIT,                "__sk_buff" },
+	{ BPF_PROG_TYPE_NETFILTER,               "bpf_nf_ctx" },
+	{ BPF_PROG_TYPE_PERF_EVENT,              "bpf_perf_event_data" },
+	{ BPF_PROG_TYPE_RAW_TRACEPOINT,          "bpf_raw_tracepoint_args" },
+	{ BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE, "bpf_raw_tracepoint_args" },
+	{ BPF_PROG_TYPE_SCHED_ACT,               "__sk_buff" },
+	{ BPF_PROG_TYPE_SCHED_CLS,               "__sk_buff" },
+	{ BPF_PROG_TYPE_SK_LOOKUP,               "bpf_sk_lookup" },
+	{ BPF_PROG_TYPE_SK_MSG,                  "sk_msg_md" },
+	{ BPF_PROG_TYPE_SK_REUSEPORT,            "sk_reuseport_md" },
+	{ BPF_PROG_TYPE_SK_SKB,                  "__sk_buff" },
+	{ BPF_PROG_TYPE_SOCK_OPS,                "bpf_sock_ops" },
+	{ BPF_PROG_TYPE_SOCKET_FILTER,           "__sk_buff" },
+	{ BPF_PROG_TYPE_XDP,                     "xdp_md" },
+	/* all other program types don't have "named" context structs */
+};
+
+static int clone_func_btf_info(struct btf *btf, int orig_fn_id, struct bpf_program *prog)
+{
+	int fn_id, fn_proto_id, ret_type_id, orig_proto_id;
+	int i, err, arg_cnt, name_off;
+	struct btf_type *fn_t, *fn_proto_t, *t;
+	struct btf_param *p;
+
+	/* caller already validated FUNC -> FUNC_PROTO validity */
+	fn_t = btf_type_by_id(btf, orig_fn_id);
+	fn_proto_t = btf_type_by_id(btf, fn_t->type);
+
+	/* Note that each btf__add_xxx() operation invalidates
+	 * all btf_type and string pointers, so we need to be
+	 * very careful when cloning BTF types. BTF type
+	 * pointers have to be always refetched. And to avoid
+	 * problems with invalidated string pointers, we
+	 * add empty strings initially, then just fix up
+	 * name_off offsets in place. Offsets are stable for
+	 * existing strings, so that works out.
+	 */
+	name_off = fn_t->name_off; /* we are about to invalidate fn_t */
+	ret_type_id = fn_proto_t->type; /* and fn_proto_t as well */
+	arg_cnt = btf_vlen(fn_proto_t);
+	orig_proto_id = fn_t->type; /* original FUNC_PROTO ID */
+
+	/* clone FUNC first, btf__add_func() enforces
+	 * non-empty name, so use entry program's name as
+	 * a placeholder, which we replace immediately
+	 */
+	fn_id = btf__add_func(btf, prog->name, btf_func_linkage(fn_t), fn_t->type);
+	if (fn_id < 0)
+		return -EINVAL;
+
+	fn_t = btf_type_by_id(btf, fn_id);
+	fn_t->name_off = name_off; /* reuse original string */
+	fn_t->type = fn_id + 1; /* we can predict FUNC_PROTO ID */
+
+	/* clone FUNC_PROTO and its params now */
+	fn_proto_id = btf__add_func_proto(btf, ret_type_id);
+	if (fn_proto_id < 0)
+		return -EINVAL;
+
+	for (i = 0; i < arg_cnt; i++) {
+		/* copy original parameter data */
+		t = btf_type_by_id(btf, orig_proto_id);
+		p = &btf_params(t)[i];
+		name_off = p->name_off;
+
+		err = btf__add_func_param(btf, "", p->type);
+		if (err)
+			return err;
+
+		fn_proto_t = btf_type_by_id(btf, fn_proto_id);
+		p = &btf_params(fn_proto_t)[i];
+		p->name_off = name_off; /* use remembered str offset */
+	}
+
+	return fn_id;
+}
+
+/* Check if main program or global subprog's function prototype has `arg:ctx`
+ * argument tags, and, if necessary, substitute correct type to match what BPF
+ * verifier would expect, taking into account specific program type. This
+ * allows supporting the __arg_ctx tag transparently on old kernels that don't
+ * yet have native support for it in the verifier, making the user's life much
+ * easier.
+ */
+static int bpf_program_fixup_func_info(struct bpf_object *obj, struct bpf_program *prog)
+{
+	const char *ctx_name = NULL, *ctx_tag = "arg:ctx";
+	struct bpf_func_info_min *func_rec;
+	struct btf_type *fn_t, *fn_proto_t;
+	struct btf *btf = obj->btf;
+	const struct btf_type *t;
+	struct btf_param *p;
+	int ptr_id = 0, struct_id, tag_id, orig_fn_id;
+	int i, n, arg_idx, arg_cnt, err, rec_idx;
+	int *orig_ids;
+
+	/* no .BTF.ext, no problem */
+	if (!obj->btf_ext || !prog->func_info)
+		return 0;
+
+	/* some BPF program types just don't have named context structs, so
+	 * this fallback mechanism doesn't work for them
+	 */
+	for (i = 0; i < ARRAY_SIZE(global_ctx_map); i++) {
+		if (global_ctx_map[i].prog_type != prog->type)
+			continue;
+		ctx_name = global_ctx_map[i].ctx_name;
+		break;
+	}
+	if (!ctx_name)
+		return 0;
+
+	/* remember original func BTF IDs to detect if we already cloned them */
+	orig_ids = calloc(prog->func_info_cnt, sizeof(*orig_ids));
+	if (!orig_ids)
+		return -ENOMEM;
+	for (i = 0; i < prog->func_info_cnt; i++) {
+		func_rec = prog->func_info + prog->func_info_rec_size * i;
+		orig_ids[i] = func_rec->type_id;
+	}
+
+	/* go through each DECL_TAG with "arg:ctx" and see if it points to one
+	 * of our subprogs; if yes and subprog is global and needs adjustment,
+	 * clone and adjust FUNC -> FUNC_PROTO combo
+	 */
+	for (i = 1, n = btf__type_cnt(btf); i < n; i++) {
+		/* only DECL_TAG with "arg:ctx" value are interesting */
+		t = btf__type_by_id(btf, i);
+		if (!btf_is_decl_tag(t))
+			continue;
+		if (strcmp(btf__str_by_offset(btf, t->name_off), ctx_tag) != 0)
+			continue;
+
+		/* only global funcs need adjustment, if at all */
+		orig_fn_id = t->type;
+		fn_t = btf_type_by_id(btf, orig_fn_id);
+		if (!btf_is_func(fn_t) || btf_func_linkage(fn_t) != BTF_FUNC_GLOBAL)
+			continue;
+
+		/* sanity check FUNC -> FUNC_PROTO chain, just in case */
+		fn_proto_t = btf_type_by_id(btf, fn_t->type);
+		if (!fn_proto_t || !btf_is_func_proto(fn_proto_t))
+			continue;
+
+		/* find corresponding func_info record */
+		func_rec = NULL;
+		for (rec_idx = 0; rec_idx < prog->func_info_cnt; rec_idx++) {
+			if (orig_ids[rec_idx] == t->type) {
+				func_rec = prog->func_info + prog->func_info_rec_size * rec_idx;
+				break;
+			}
+		}
+		/* current main program doesn't call into this subprog */
+		if (!func_rec)
+			continue;
+
+		/* some more sanity checking of DECL_TAG */
+		arg_cnt = btf_vlen(fn_proto_t);
+		arg_idx = btf_decl_tag(t)->component_idx;
+		if (arg_idx < 0 || arg_idx >= arg_cnt)
+			continue;
+
+		/* check if existing parameter already matches verifier expectations */
+		p = &btf_params(fn_proto_t)[arg_idx];
+		t = skip_mods_and_typedefs(btf, p->type, NULL);
+		if (btf_is_ptr(t) &&
+		    (t = skip_mods_and_typedefs(btf, t->type, NULL)) &&
+		    btf_is_struct(t) &&
+		    strcmp(btf__str_by_offset(btf, t->name_off), ctx_name) == 0) {
+			continue; /* no need for fix up */
+		}
+
+		/* clone fn/fn_proto, unless we already did it for another arg */
+		if (func_rec->type_id == orig_fn_id) {
+			int fn_id;
+
+			fn_id = clone_func_btf_info(btf, orig_fn_id, prog);
+			if (fn_id < 0) {
+				err = fn_id;
+				goto err_out;
+			}
+
+			/* point func_info record to a cloned FUNC type */
+			func_rec->type_id = fn_id;
+		}
+
+		/* create PTR -> STRUCT type chain to mark PTR_TO_CTX argument;
+		 * we do it just once per main BPF program, as all global
+		 * funcs share the same program type, so we need only one PTR ->
+		 * STRUCT type chain
+		 */
+		if (ptr_id == 0) {
+			struct_id = btf__add_struct(btf, ctx_name, 0);
+			ptr_id = btf__add_ptr(btf, struct_id);
+			if (ptr_id < 0 || struct_id < 0) {
+				err = -EINVAL;
+				goto err_out;
+			}
+		}
+
+		/* for completeness, clone DECL_TAG and point it to cloned param */
+		tag_id = btf__add_decl_tag(btf, ctx_tag, func_rec->type_id, arg_idx);
+		if (tag_id < 0) {
+			err = -EINVAL;
+			goto err_out;
+		}
+
+		/* all the BTF manipulations invalidated pointers, refetch them */
+		fn_t = btf_type_by_id(btf, func_rec->type_id);
+		fn_proto_t = btf_type_by_id(btf, fn_t->type);
+
+		/* fix up type ID pointed to by param */
+		p = &btf_params(fn_proto_t)[arg_idx];
+		p->type = ptr_id;
+	}
+
+	free(orig_ids);
+	return 0;
+err_out:
+	free(orig_ids);
+	return err;
+}
+
+static int bpf_object_relocate(struct bpf_object *obj, const char *targ_btf_path)
 {
 	struct bpf_program *prog;
 	size_t i, j;
@@ -6712,19 +6949,28 @@ bpf_object_relocate(struct bpf_object *obj, const char *targ_btf_path)
 			}
 		}
 	}
-	/* Process data relos for main programs */
 	for (i = 0; i < obj->nr_programs; i++) {
 		prog = &obj->programs[i];
 		if (prog_is_subprog(obj, prog))
 			continue;
 		if (!prog->autoload)
 			continue;
+
+		/* Process data relos for main programs */
 		err = bpf_object_relocate_data(obj, prog);
 		if (err) {
 			pr_warn("prog '%s': failed to relocate data references: %d\n",
 				prog->name, err);
 			return err;
 		}
+
+		/* Fix up .BTF.ext information, if necessary */
+		err = bpf_program_fixup_func_info(obj, prog);
+		if (err) {
+			pr_warn("prog '%s': failed to perform .BTF.ext fix ups: %d\n",
+				prog->name, err);
+			return err;
+		}
 	}
 
 	return 0;
@@ -7436,8 +7682,7 @@ static int bpf_program_record_relos(struct bpf_program *prog)
 	return 0;
 }
 
-static int
-bpf_object_load_progs(struct bpf_object *obj, int log_level)
+static int bpf_object_load_progs(struct bpf_object *obj, int log_level)
 {
 	struct bpf_program *prog;
 	size_t i;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (7 preceding siblings ...)
  2024-01-02 19:00 ` [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic Andrii Nakryiko
@ 2024-01-02 19:00 ` Andrii Nakryiko
  2024-01-03 20:57   ` Eduard Zingerman
  2024-01-02 19:57 ` [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
  2024-01-03 20:57 ` Eduard Zingerman
  10 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:00 UTC (permalink / raw)
  To: bpf, ast, daniel, martin.lau; +Cc: andrii, kernel-team, Jiri Olsa

Add a few extra cases of global funcs with context arguments. This time
rely on the "arg:ctx" decl_tag (the __arg_ctx macro), but put it next to
"classic" cases where the context argument has to be of the exact type
that the BPF verifier expects (e.g., bpf_user_pt_regs_t for
kprobe/uprobe).

Colocating all these cases, separately from other global func arg cases
that rely on arg:xxx decl tags (in verifier_global_subprogs.c), allows
for simpler backwards compatibility testing on old kernels. All the
cases in test_global_func_ctx_args.c are supposed to work on older
kernels, which was manually validated during development.

Acked-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 .../bpf/progs/test_global_func_ctx_args.c     | 49 +++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/test_global_func_ctx_args.c b/tools/testing/selftests/bpf/progs/test_global_func_ctx_args.c
index 7faa8eef0598..9a06e5eb1fbe 100644
--- a/tools/testing/selftests/bpf/progs/test_global_func_ctx_args.c
+++ b/tools/testing/selftests/bpf/progs/test_global_func_ctx_args.c
@@ -102,3 +102,52 @@ int perf_event_ctx(void *ctx)
 {
 	return perf_event_ctx_subprog(ctx);
 }
+
+/* this global subprog can be now called from many types of entry progs, each
+ * with different context type
+ */
+__weak int subprog_ctx_tag(void *ctx __arg_ctx)
+{
+	return bpf_get_stack(ctx, stack, sizeof(stack), 0);
+}
+
+struct my_struct { int x; };
+
+__weak int subprog_multi_ctx_tags(void *ctx1 __arg_ctx,
+				  struct my_struct *mem,
+				  void *ctx2 __arg_ctx)
+{
+	if (!mem)
+		return 0;
+
+	return bpf_get_stack(ctx1, stack, sizeof(stack), 0) +
+	       mem->x +
+	       bpf_get_stack(ctx2, stack, sizeof(stack), 0);
+}
+
+SEC("?raw_tp")
+__success __log_level(2)
+int arg_tag_ctx_raw_tp(void *ctx)
+{
+	struct my_struct x = { .x = 123 };
+
+	return subprog_ctx_tag(ctx) + subprog_multi_ctx_tags(ctx, &x, ctx);
+}
+
+SEC("?perf_event")
+__success __log_level(2)
+int arg_tag_ctx_perf(void *ctx)
+{
+	struct my_struct x = { .x = 123 };
+
+	return subprog_ctx_tag(ctx) + subprog_multi_ctx_tags(ctx, &x, ctx);
+}
+
+SEC("?kprobe")
+__success __log_level(2)
+int arg_tag_ctx_kprobe(void *ctx)
+{
+	struct my_struct x = { .x = 123 };
+
+	return subprog_ctx_tag(ctx) + subprog_multi_ctx_tags(ctx, &x, ctx);
+}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (8 preceding siblings ...)
  2024-01-02 19:00 ` [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests Andrii Nakryiko
@ 2024-01-02 19:57 ` Andrii Nakryiko
  2024-01-03 20:57 ` Eduard Zingerman
  10 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-02 19:57 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, martin.lau, kernel-team

On Tue, Jan 2, 2024 at 11:01 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Support __arg_ctx global function argument tag semantics even on older kernels
> that don't natively support it through btf_decl_tag("arg:ctx").
>
> Patch #1 does a bunch of internal renaming to make internal function naming
> consistent. We were doing it lazily up until now, but mixing single and double
> underscored names are confusing, so let's bite a bullet and get it over the
> finish line in one go.
>
> Patches #3-#7 are preparatory work to allow to postpone BTF loading into the
> kernel until after all the BPF program relocations (including global func
> appending to main programs) are done. Patch #5 is perhaps the most important
> and establishes pre-created stable placeholder FDs, so that relocations can
> embed valid map FDs into ldimm64 instructions.
>
> Once BTF is done after relocation, what's left is to adjust BTF information to
> have each main program's copy of each used global subprog to point to its own
> adjusted FUNC -> FUNC_PROTO type chain (if they use __arg_ctx) in such a way
> as to satisfy type expectations of BPF verifier regarding the PTR_TO_CTX
> argument definition. See patch #8 for details.
>
> Patch #9 adds few more __arg_ctx use cases (edge cases like multiple arguments
> having __arg_ctx, etc) to test_global_func_ctx_args.c, to make it simple to
> validate that this logic indeed works on old kernels. It does.
>

Oops, I forgot to add the version history. It's basically:

v1->v2:
  - rename internal functions in patch #1 (Alexei);
  - extract cloning of FUNC -> FUNC_PROTO information into a separate
    function (Alexei);

> Andrii Nakryiko (9):
>   libbpf: name internal functions consistently
>   libbpf: make uniform use of btf__fd() accessor inside libbpf
>   libbpf: use explicit map reuse flag to skip map creation steps
>   libbpf: don't rely on map->fd as an indicator of map being created
>   libbpf: use stable map placeholder FDs
>   libbpf: move exception callbacks assignment logic into relocation step
>   libbpf: move BTF loading step after relocation step
>   libbpf: implement __arg_ctx fallback logic
>   selftests/bpf: add arg:ctx cases to test_global_funcs tests
>
>  tools/lib/bpf/libbpf.c                        | 1055 ++++++++++-------
>  tools/lib/bpf/libbpf_internal.h               |   24 +
>  .../bpf/progs/test_global_func_ctx_args.c     |   49 +
>  3 files changed, 725 insertions(+), 403 deletions(-)
>
> --
> 2.34.1
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support
  2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
                   ` (9 preceding siblings ...)
  2024-01-02 19:57 ` [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
@ 2024-01-03 20:57 ` Eduard Zingerman
  10 siblings, 0 replies; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-03 20:57 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team

On Tue, 2024-01-02 at 11:00 -0800, Andrii Nakryiko wrote:
> Support __arg_ctx global function argument tag semantics even on older kernels
> that don't natively support it through btf_decl_tag("arg:ctx").
> 
> Patch #1 does a bunch of internal renaming to make internal function naming
> consistent. We were doing it lazily up until now, but mixing single and double
> underscored names are confusing, so let's bite a bullet and get it over the
> finish line in one go.
> 
> Patches #3-#7 are preparatory work to allow to postpone BTF loading into the
> kernel until after all the BPF program relocations (including global func
> appending to main programs) are done. Patch #5 is perhaps the most important
> and establishes pre-created stable placeholder FDs, so that relocations can
> embed valid map FDs into ldimm64 instructions.
> 
> Once BTF is done after relocation, what's left is to adjust BTF information to
> have each main program's copy of each used global subprog to point to its own
> adjusted FUNC -> FUNC_PROTO type chain (if they use __arg_ctx) in such a way
> as to satisfy type expectations of BPF verifier regarding the PTR_TO_CTX
> argument definition. See patch #8 for details.
> 
> Patch #9 adds few more __arg_ctx use cases (edge cases like multiple arguments
> having __arg_ctx, etc) to test_global_func_ctx_args.c, to make it simple to
> validate that this logic indeed works on old kernels. It does.

I've read through the patch-set and it seems to be fine,
as far as my (limited) understanding of the code base goes.
Left a few nitpicks.

[...]



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 5/9] libbpf: use stable map placeholder FDs
  2024-01-02 19:00 ` [PATCH v2 bpf-next 5/9] libbpf: use stable map placeholder FDs Andrii Nakryiko
@ 2024-01-03 20:57   ` Eduard Zingerman
  2024-01-03 22:46     ` Andrii Nakryiko
  0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-03 20:57 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team, Jiri Olsa

Tbh, it looks like calls to zclose(map->fd) were unnecessary
regardless of this patch, as all maps are closed at the end of
bpf_object_load() in case of an error.
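
Roughly (simplified, not the verbatim code; the out: label is reached
only on error):

out:
        /* unpin any maps that were auto-pinned during load */
        for (i = 0; i < obj->nr_maps; i++)
                if (obj->maps[i].pinned && !obj->maps[i].reused)
                        bpf_map__unpin(&obj->maps[i], NULL);

        bpf_object_unload(obj); /* zclose()es every map FD */
        pr_warn("failed to load object '%s'\n", obj->path);
        return libbpf_err(err);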

[...]

> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index f29cfb344f80..e0085aef17d7 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c

[...]

> @@ -5275,13 +5289,11 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
>  		create_attr.btf_value_type_id = 0;
>  		map->btf_key_type_id = 0;
>  		map->btf_value_type_id = 0;
> -		map->fd = bpf_map_create(def->type, map_name,
> -					 def->key_size, def->value_size,
> -					 def->max_entries, &create_attr);
> +		map_fd = bpf_map_create(def->type, map_name,
> +					def->key_size, def->value_size,
> +					def->max_entries, &create_attr);
>  	}
>  
> -	err = map->fd < 0 ? -errno : 0;
> -
>  	if (bpf_map_type_is_map_in_map(def->type) && map->inner_map) {
>  		if (obj->gen_loader)
>  			map->inner_map->fd = -1;
> @@ -5289,7 +5301,19 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
>  		zfree(&map->inner_map);
>  	}
>  
> -	return err;
> +	if (map_fd < 0)
> +		return -errno;

Nit: this check is now placed after the call to bpf_map_destroy(),
     which might call munmap(), which might overwrite "errno",
     set by some of the previous calls to bpf_map_create().

[...]

> diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> index b5d334754e5d..662a3df1e29f 100644
> --- a/tools/lib/bpf/libbpf_internal.h
> +++ b/tools/lib/bpf/libbpf_internal.h
> @@ -555,6 +555,30 @@ static inline int ensure_good_fd(int fd)
>  	return fd;
>  }
>  
> +static inline int create_placeholder_fd(void)
> +{
> +	int fd;
> +
> +	fd = ensure_good_fd(open("/dev/null", O_WRONLY | O_CLOEXEC));

Stupid question: is it ok to assume that /dev is always mounted?
Googling says that the kernel chooses whether to mount it automatically
depending on the CONFIG_DEVTMPFS_MOUNT option.
Another option might be memfd_create().

> +	if (fd < 0)
> +		return -errno;
> +	return fd;
> +}

[...]



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic
  2024-01-02 19:00 ` [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic Andrii Nakryiko
@ 2024-01-03 20:57   ` Eduard Zingerman
  2024-01-03 23:10     ` Andrii Nakryiko
  0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-03 20:57 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team, Jiri Olsa

On Tue, 2024-01-02 at 11:00 -0800, Andrii Nakryiko wrote:

[...]

> +static int clone_func_btf_info(struct btf *btf, int orig_fn_id, struct bpf_program *prog)
> +{

[...]

> +	/* clone FUNC first, btf__add_func() enforces
> +	 * non-empty name, so use entry program's name as
> +	 * a placeholder, which we replace immediately
> +	 */
> +	fn_id = btf__add_func(btf, prog->name, btf_func_linkage(fn_t), fn_t->type);

Nit: Why not call this function near the end, when fn_proto_id is available?
     Thus avoiding need to "guess" fn_t->type.

[...]

> +static int bpf_program_fixup_func_info(struct bpf_object *obj, struct bpf_program *prog)
> +{

[...]

> +	for (i = 1, n = btf__type_cnt(btf); i < n; i++) {

[...]

> +
> +		/* clone fn/fn_proto, unless we already did it for another arg */
> +		if (func_rec->type_id == orig_fn_id) {
> +			int fn_id;
> +
> +			fn_id = clone_func_btf_info(btf, orig_fn_id, prog);
> +			if (fn_id < 0) {
> +				err = fn_id;
> +				goto err_out;
> +			}
> +
> +			/* point func_info record to a cloned FUNC type */
> +			func_rec->type_id = fn_id;

Would it be helpful to add a log here, saying that BTF for function
so and so is changed before load?

> +		}
> +
> +		/* create PTR -> STRUCT type chain to mark PTR_TO_CTX argument;
> +		 * we do it just once per main BPF program, as all global
> +		 * funcs share the same program type, so need only PTR ->
> +		 * STRUCT type chain
> +		 */
> +		if (ptr_id == 0) {
> +			struct_id = btf__add_struct(btf, ctx_name, 0);

Nit: Maybe try looking up existing id for type ctx_name first?

> +			ptr_id = btf__add_ptr(btf, struct_id);
> +			if (ptr_id < 0 || struct_id < 0) {
> +				err = -EINVAL;
> +				goto err_out;
> +			}
> +		}
> +

[...]



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests
  2024-01-02 19:00 ` [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests Andrii Nakryiko
@ 2024-01-03 20:57   ` Eduard Zingerman
  2024-01-03 23:17     ` Andrii Nakryiko
  0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-03 20:57 UTC (permalink / raw)
  To: Andrii Nakryiko, bpf, ast, daniel, martin.lau; +Cc: kernel-team, Jiri Olsa

On Tue, 2024-01-02 at 11:00 -0800, Andrii Nakryiko wrote:
> Add a few extra cases of global funcs with context arguments. This time
> rely on "arg:ctx" decl_tag (__arg_ctx macro), but put it next to
> "classic" cases where context argument has to be of an exact type that
> BPF verifier expects (e.g., bpf_user_pt_regs_t for kprobe/uprobe).
> 
> Keeping all these cases together, separate from the other global func
> args that rely on arg:xxx decl tags (in verifier_global_subprogs.c),
> allows for simpler backwards compatibility testing on old kernels. All
> the cases in test_global_func_ctx_args.c are supposed to work on older
> kernels, which was manually validated during development.
> 
> Acked-by: Jiri Olsa <jolsa@kernel.org>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---

At first I thought that we would need to add a new CI job that would
run these tests on some older kernel version.

However, the transformation of the sub-program parameters happens
unconditionally. So it should be possible to read BTF for some of the
programs after they are loaded and check if the transformation is
applied as expected. That would allow checking __arg_ctx handling on
the libbpf side w/o the need to run on an old kernel.

I think it's worth it to add such a test, wdyt?



^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 5/9] libbpf: use stable map placeholder FDs
  2024-01-03 20:57   ` Eduard Zingerman
@ 2024-01-03 22:46     ` Andrii Nakryiko
  0 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-03 22:46 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, Jan 3, 2024 at 12:57 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> Tbh, it looks like calls to zclose(map->fd) were unnecessary
> regardless of this patch, as all maps are closed at the end of
> bpf_object_load() in case of an error.
>

yep, agreed

> [...]
>
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index f29cfb344f80..e0085aef17d7 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
>
> [...]
>
> > @@ -5275,13 +5289,11 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
> >               create_attr.btf_value_type_id = 0;
> >               map->btf_key_type_id = 0;
> >               map->btf_value_type_id = 0;
> > -             map->fd = bpf_map_create(def->type, map_name,
> > -                                      def->key_size, def->value_size,
> > -                                      def->max_entries, &create_attr);
> > +             map_fd = bpf_map_create(def->type, map_name,
> > +                                     def->key_size, def->value_size,
> > +                                     def->max_entries, &create_attr);
> >       }
> >
> > -     err = map->fd < 0 ? -errno : 0;
> > -
> >       if (bpf_map_type_is_map_in_map(def->type) && map->inner_map) {
> >               if (obj->gen_loader)
> >                       map->inner_map->fd = -1;
> > @@ -5289,7 +5301,19 @@ static int bpf_object_create_map(struct bpf_object *obj, struct bpf_map *map, bo
> >               zfree(&map->inner_map);
> >       }
> >
> > -     return err;
> > +     if (map_fd < 0)
> > +             return -errno;
>
> Nit: this check is now placed after the call to bpf_map_destroy(),
>      which might call munmap(), which might overwrite "errno",
>      set by some of the previous calls to bpf_map_create().
>

this should be `return map_fd;`, no need for errno, good catch, fixed

> [...]
>
> > diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> > index b5d334754e5d..662a3df1e29f 100644
> > --- a/tools/lib/bpf/libbpf_internal.h
> > +++ b/tools/lib/bpf/libbpf_internal.h
> > @@ -555,6 +555,30 @@ static inline int ensure_good_fd(int fd)
> >       return fd;
> >  }
> >
> > +static inline int create_placeholder_fd(void)
> > +{
> > +     int fd;
> > +
> > +     fd = ensure_good_fd(open("/dev/null", O_WRONLY | O_CLOEXEC));
>
> Stupid question: is it ok to assume that /dev is always mounted?
> Googling says that the kernel chooses whether to mount it automatically
> depending on the CONFIG_DEVTMPFS_MOUNT option.
> Another option might be memfd_create().

Yeah, good point, I actually don't know how reliable /dev/null is.
memfd_create() is not a bad idea: Linux 3.17+, should be fine. I'll
switch.
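
Roughly something like this (the fd name string is illustrative;
ensure_good_fd() is the existing helper from libbpf_internal.h):

#include <sys/mman.h>

static inline int create_placeholder_fd(void)
{
        int fd;

        /* memfd needs no mounted filesystem, unlike /dev/null, and
         * MFD_CLOEXEC gives the same close-on-exec behavior as O_CLOEXEC
         */
        fd = ensure_good_fd(memfd_create("libbpf-placeholder-fd", MFD_CLOEXEC));
        if (fd < 0)
                return -errno;
        return fd;
}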

>
> > +     if (fd < 0)
> > +             return -errno;
> > +     return fd;
> > +}
>
> [...]
>
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic
  2024-01-03 20:57   ` Eduard Zingerman
@ 2024-01-03 23:10     ` Andrii Nakryiko
  2024-01-03 23:43       ` Eduard Zingerman
  0 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-03 23:10 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, Jan 3, 2024 at 12:57 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Tue, 2024-01-02 at 11:00 -0800, Andrii Nakryiko wrote:
>
> [...]
>
> > +static int clone_func_btf_info(struct btf *btf, int orig_fn_id, struct bpf_program *prog)
> > +{
>
> [...]
>
> > +     /* clone FUNC first, btf__add_func() enforces
> > +      * non-empty name, so use entry program's name as
> > +      * a placeholder, which we replace immediately
> > +      */
> > +     fn_id = btf__add_func(btf, prog->name, btf_func_linkage(fn_t), fn_t->type);
>
> Nit: Why not call this function near the end, when fn_proto_id is available?
>      Thus avoiding need to "guess" fn_t->type.
>

I think I did it to not have to remember btf_func_linkage(fn_t)
(because fn_t will be invalidated) and because name_off will be reused
for parameters. Neither is a big deal, I'll adjust to your suggestion.

But note, we are not guessing ID, it's guaranteed to be +1, it's an
API contract of btf__add_xxx() APIs.
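
I.e., roughly (a contrived sketch, not code from the patch):

#include <assert.h>
#include <bpf/btf.h>

void demo_sequential_btf_ids(void)
{
        struct btf *btf = btf__new_empty();
        int struct_id = btf__add_struct(btf, "dummy_ctx", 0);
        int ptr_id = btf__add_ptr(btf, struct_id);

        /* every successful btf__add_*() appends exactly one type and
         * returns its new ID, so IDs are handed out sequentially
         */
        assert(ptr_id == struct_id + 1);
        btf__free(btf);
}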

> [...]
>
> > +static int bpf_program_fixup_func_info(struct bpf_object *obj, struct bpf_program *prog)
> > +{
>
> [...]
>
> > +     for (i = 1, n = btf__type_cnt(btf); i < n; i++) {
>
> [...]
>
> > +
> > +             /* clone fn/fn_proto, unless we already did it for another arg */
> > +             if (func_rec->type_id == orig_fn_id) {
> > +                     int fn_id;
> > +
> > +                     fn_id = clone_func_btf_info(btf, orig_fn_id, prog);
> > +                     if (fn_id < 0) {
> > +                             err = fn_id;
> > +                             goto err_out;
> > +                     }
> > +
> > +                     /* point func_info record to a cloned FUNC type */
> > +                     func_rec->type_id = fn_id;
>
> Would it be helpful to add a log here, saying that BTF for function
> so and so is changed before load?

Would it? We don't have the global subprog's name readily available,
it seems. So I'd need to refetch it by fn_id, then btf__str_by_offset(),
just to emit cryptic (for most users) notifications that something
about some func info was adjusted. And then the user would get this
same message for the same subprog, but in the context of a different
entry program. Just confusing, tbh.
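
For reference, the refetch I mean is roughly this (the pr_debug()
message text is just illustrative):

        const struct btf_type *fn_t = btf__type_by_id(btf, fn_id);
        const char *fn_name = btf__str_by_offset(btf, fn_t->name_off);

        pr_debug("prog '%s': fixed up BTF for global subprog '%s'\n",
                 prog->name, fn_name);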

Unless you insist, I'd leave it as is. This logic is supposed to be
bullet-proof, so I'm not worried about debugging regressions with it
(but maybe I'm delusional).

>
> > +             }
> > +
> > +             /* create PTR -> STRUCT type chain to mark PTR_TO_CTX argument;
> > +              * we do it just once per main BPF program, as all global
> > +              * funcs share the same program type, so need only PTR ->
> > +              * STRUCT type chain
> > +              */
> > +             if (ptr_id == 0) {
> > +                     struct_id = btf__add_struct(btf, ctx_name, 0);
>
> Nit: Maybe try looking up existing id for type ctx_name first?

It didn't feel important and I didn't want to do another linear BTF
search for each such argument. It's trivial to look it up, but I still
feel like that's a waste... I tried to avoid many linear searches,
which is why I structured the logic to do one pass over BTF to find
all decl_tags instead of going over each function and arg and
searching for decl_tag.

Let's keep it as is; if there are any reasons to try to reuse the
struct (if it is at all present, which for kprobe, for example, is
quite unlikely due to the fancy bpf_user_pt_regs_t name), then we can
easily add it with no regressions.
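
Schematically, that single pass over BTF looks like this (simplified,
not the verbatim patch code):

        int i, n;

        for (i = 1, n = btf__type_cnt(btf); i < n; i++) {
                const struct btf_type *t = btf__type_by_id(btf, i);

                if (!btf_is_decl_tag(t))
                        continue;
                /* t->type is the ID of the tagged FUNC;
                 * btf_decl_tag(t)->component_idx says which argument
                 * the "arg:ctx" tag is attached to
                 */
        }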

>
> > +                     ptr_id = btf__add_ptr(btf, struct_id);
> > +                     if (ptr_id < 0 || struct_id < 0) {
> > +                             err = -EINVAL;
> > +                             goto err_out;
> > +                     }
> > +             }
> > +
>
> [...]
>
>

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently
  2024-01-02 19:00 ` [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently Andrii Nakryiko
@ 2024-01-03 23:12   ` Alexei Starovoitov
  2024-01-03 23:17     ` Eduard Zingerman
  0 siblings, 1 reply; 28+ messages in thread
From: Alexei Starovoitov @ 2024-01-03 23:12 UTC (permalink / raw)
  To: Andrii Nakryiko, Eddy Z
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Kernel Team

On Tue, Jan 2, 2024 at 11:01 AM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> For a while now all new internal libbpf functions stopped using
> <obj>__<method>() naming, which was historically used both for public
> APIs and all the helper functions that can be thought of as "methods" of
> libbpf "objects" (bpf_object, bpf_map, bpf_program, etc).

I don't think this shift to single underscore was discussed before.
I could have missed it. Personally I was under the impression that
we're still doing double for methods.
This convention came from perf, since back then libbpf was part of
it, and perf uses double everywhere, for external APIs and for
internal functions.
I feel we should continue doing double for existing objects.
This rename feels too churn-y.

At the same time I agree that a public function looking different from
internal is a good thing to have.
We have LIBBPF_API that is used in the headers.
Maybe we should start using something similar in .c files; then
there will be no confusion.

Not a strong opinion.

Eduard,
what's your take?

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests
  2024-01-03 20:57   ` Eduard Zingerman
@ 2024-01-03 23:17     ` Andrii Nakryiko
  2024-01-03 23:51       ` Eduard Zingerman
  0 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-03 23:17 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, Jan 3, 2024 at 12:57 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Tue, 2024-01-02 at 11:00 -0800, Andrii Nakryiko wrote:
> > Add a few extra cases of global funcs with context arguments. This time
> > rely on "arg:ctx" decl_tag (__arg_ctx macro), but put it next to
> > "classic" cases where context argument has to be of an exact type that
> > BPF verifier expects (e.g., bpf_user_pt_regs_t for kprobe/uprobe).
> >
> > Colocating all these cases separately from other global func args that
> > rely on arg:xxx decl tags (in verifier_global_subprogs.c) allows for
> > simpler backwards compatibility testing on old kernels. All the cases in
> > test_global_func_ctx_args.c are supposed to work on older kernels, which
> > was manually validated during development.
> >
> > Acked-by: Jiri Olsa <jolsa@kernel.org>
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
>
> At first I thought that we would need to add a new CI job that would
> run these tests for some older kernel version.

libbpf CI will test this on a 5.15 kernel, so we will have regression tests.

>
> However, the transformation of the sub-program parameters happens
> unconditionally. So it should be possible to read BTF for some of the
> programs after they are loaded and check if the transformation is
> applied as expected. That would allow checking __arg_ctx handling on
> the libbpf side w/o the need to run on an old kernel.

Yes, but it's bpf_prog_info to get func_info (actually two calls, due
to how the API works), then parsing func_info (painful without
libbpf's internal helpers from libbpf_internal.h, and using them
means more coupling) to find the subprog's func BTF ID, and then
checking BTF.
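
Roughly, the dance is (a sketch with error handling kept minimal;
fetch_func_info() is a made-up name, not a libbpf API):

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <bpf/bpf.h>
#include <linux/bpf.h>

static int fetch_func_info(int prog_fd, struct bpf_func_info **recs, __u32 *cnt)
{
        struct bpf_prog_info info = {};
        __u32 len = sizeof(info);
        int err;

        /* call #1: learn how many func_info records there are */
        err = bpf_prog_get_info_by_fd(prog_fd, &info, &len);
        if (err)
                return err;

        *cnt = info.nr_func_info;
        *recs = calloc(*cnt, sizeof(**recs));
        if (!*recs)
                return -ENOMEM;

        /* call #2: fetch the records; each record's type_id is the
         * (sub)program's FUNC BTF ID
         */
        memset(&info, 0, sizeof(info));
        info.nr_func_info = *cnt;
        info.func_info_rec_size = sizeof(struct bpf_func_info);
        info.func_info = (__u64)(uintptr_t)*recs;
        len = sizeof(info);
        return bpf_prog_get_info_by_fd(prog_fd, &info, &len);
}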

It's so painful that I don't think it's worth it, given we'll test
this in libbpf CI (and I did a manual check locally as well).

Also, nothing actually prevents us from skipping this when the kernel
supports __arg_ctx natively, but that would require a painful feature
detector written with low-level APIs, which is why I decided that
it's simpler to just do this unconditionally.

> I think it's worth it to add such a test, wdyt?
>

I feel like slacking and not adding this :) This will definitely fail
in libbpf CI, if it's wrong.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently
  2024-01-03 23:12   ` Alexei Starovoitov
@ 2024-01-03 23:17     ` Eduard Zingerman
  2024-01-04  0:30       ` Andrii Nakryiko
  0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-03 23:17 UTC (permalink / raw)
  To: Alexei Starovoitov, Andrii Nakryiko
  Cc: bpf, Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Kernel Team

On Wed, 2024-01-03 at 15:12 -0800, Alexei Starovoitov wrote:
[...]
> At the same time I agree that a public function looking different from
> internal is a good thing to have.
> We have LIBBPF_API that is used in the headers.
> Maybe we should start using something similar in .c files; then
> there will be no confusion.
> 
> Not a strong opinion.
> 
> Eduard,
> what's your take?

I kind of like private vs. public method encoded as '_' vs. '__'.
But this seems to be a minor detail; personally, I grep the header
file each time to see if LIBBPF_API is used for a certain function,
and that is not a big deal.
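
E.g., the kind of pair under discussion (the internal signature is as
in this series' tree):

        /* public API: double underscore, exported via LIBBPF_API */
        LIBBPF_API int bpf_object__load(struct bpf_object *obj);

        /* internal helper: single underscore, static in libbpf.c */
        static int bpf_object_load(struct bpf_object *obj, int extra_log_level,
                                   const char *target_btf_path);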

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic
  2024-01-03 23:10     ` Andrii Nakryiko
@ 2024-01-03 23:43       ` Eduard Zingerman
  2024-01-03 23:59         ` Andrii Nakryiko
  0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-03 23:43 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, 2024-01-03 at 15:10 -0800, Andrii Nakryiko wrote:
> On Wed, Jan 3, 2024 at 12:57 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > 
> > On Tue, 2024-01-02 at 11:00 -0800, Andrii Nakryiko wrote:
> > 
> > [...]
> > 
> > > +static int clone_func_btf_info(struct btf *btf, int orig_fn_id, struct bpf_program *prog)
> > > +{
> > 
> > [...]
> > 
> > > +     /* clone FUNC first, btf__add_func() enforces
> > > +      * non-empty name, so use entry program's name as
> > > +      * a placeholder, which we replace immediately
> > > +      */
> > > +     fn_id = btf__add_func(btf, prog->name, btf_func_linkage(fn_t), fn_t->type);
> > 
> > Nit: Why not call this function near the end, when fn_proto_id is available?
> >      Thus avoiding need to "guess" fn_t->type.
> > 
> 
> I think I did it to not have to remember btf_func_linkage(fn_t)
> (because fn_t will be invalidated) and because name_off will be reused
> for parameters. Neither is a big deal, I'll adjust to your suggestion.
> 
> But note, we are not guessing ID, it's guaranteed to be +1, it's an
> API contract of btf__add_xxx() APIs.

Noted, well, maybe just skip this nit in such a case.

> > [...]
> > 
> > > +static int bpf_program_fixup_func_info(struct bpf_object *obj, struct bpf_program *prog)
> > > +{
> > 
> > [...]
> > 
> > > +     for (i = 1, n = btf__type_cnt(btf); i < n; i++) {
> > 
> > [...]
> > 
> > > +
> > > +             /* clone fn/fn_proto, unless we already did it for another arg */
> > > +             if (func_rec->type_id == orig_fn_id) {
> > > +                     int fn_id;
> > > +
> > > +                     fn_id = clone_func_btf_info(btf, orig_fn_id, prog);
> > > +                     if (fn_id < 0) {
> > > +                             err = fn_id;
> > > +                             goto err_out;
> > > +                     }
> > > +
> > > +                     /* point func_info record to a cloned FUNC type */
> > > +                     func_rec->type_id = fn_id;
> > 
> > Would it be helpful to add a log here, saying that BTF for function
> > so and so is changed before load?
> 
> Would it? We don't have the global subprog's name readily available,
> it seems. So I'd need to refetch it by fn_id, then btf__str_by_offset(),
> just to emit cryptic (for most users) notifications that something
> about some func info was adjusted. And then the user would get this
> same message for the same subprog, but in the context of a different
> entry program. Just confusing, tbh.
> 
> Unless you insist, I'd leave it as is. This logic is supposed to be
> bullet-proof, so I'm not worried about debugging regressions with it
> (but maybe I'm delusional).

I was thinking about someone finding out that the actual in-kernel
BTF is different from that in the program object file, while
debugging something. Might be a bit surprising. I'm not insisting on
this, though.

> > > +             }
> > > +
> > > +             /* create PTR -> STRUCT type chain to mark PTR_TO_CTX argument;
> > > +              * we do it just once per main BPF program, as all global
> > > +              * funcs share the same program type, so need only PTR ->
> > > +              * STRUCT type chain
> > > +              */
> > > +             if (ptr_id == 0) {
> > > +                     struct_id = btf__add_struct(btf, ctx_name, 0);
> > 
> > Nit: Maybe try looking up existing id for type ctx_name first?
> 
> It didn't feel important and I didn't want to do another linear BTF
> search for each such argument. It's trivial to look it up, but I still
> feel like that's a waste... I tried to avoid many linear searches,
> which is why I structured the logic to do one pass over BTF to find
> all decl_tags instead of going over each function and arg and
> searching for decl_tag.
>
> Let's keep it as is, if there are any reasons to try to reuse struct
> (if it is at all present, which for kprobe, for example, is quite
> unlikely due to fancy bpf_user_pt_regs_t name), then we can easily add
> it with no regressions.

I was thinking about possible interaction with btf_struct_access(),
but that is not used to verify context access at the moment.
So, probably not important.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests
  2024-01-03 23:17     ` Andrii Nakryiko
@ 2024-01-03 23:51       ` Eduard Zingerman
  2024-01-04  0:26         ` Andrii Nakryiko
  0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-03 23:51 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, 2024-01-03 at 15:17 -0800, Andrii Nakryiko wrote:
[...]
> > However, the transformation of the sub-program parameters happens
> > unconditionally. So it should be possible to read BTF for some of the
> > programs after they are loaded and check if the transformation is
> > applied as expected. That would allow checking __arg_ctx handling on
> > the libbpf side w/o the need to run on an old kernel.
> 
> Yes, but it's bpf_prog_info to get func_info (actually two calls, due
> to how the API works), then parsing func_info (painful without
> libbpf's internal helpers from libbpf_internal.h, and using them
> means more coupling) to find the subprog's func BTF ID, and then
> checking BTF.
>
> It's so painful that I don't think it's worth it, given we'll test
> this in libbpf CI (and I did a manual check locally as well).
>
> Also, nothing actually prevents us from skipping this when the kernel
> supports __arg_ctx natively, but that would require a painful feature
> detector written with low-level APIs, which is why I decided that
> it's simpler to just do this unconditionally.

I agree that there is no need for feature detection in this case.

> > I think it's worth it to add such a test, wdyt?
> > 
> 
> I feel like slacking and not adding this :) This will definitely fail
> in libbpf CI, if it's wrong.

Very few people look at libbpf CI results, and those results would be
available only after a sync.

Idk, I think that some form of testing is necessary for kernel CI:
either this, or an additional job that executes a selected set of
tests on an old kernel.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic
  2024-01-03 23:43       ` Eduard Zingerman
@ 2024-01-03 23:59         ` Andrii Nakryiko
  2024-01-04  0:09           ` Eduard Zingerman
  0 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-03 23:59 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, Jan 3, 2024 at 3:43 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2024-01-03 at 15:10 -0800, Andrii Nakryiko wrote:
> > On Wed, Jan 3, 2024 at 12:57 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
> > >
> > > On Tue, 2024-01-02 at 11:00 -0800, Andrii Nakryiko wrote:
> > >
> > > [...]
> > >
> > > > +static int clone_func_btf_info(struct btf *btf, int orig_fn_id, struct bpf_program *prog)
> > > > +{
> > >
> > > [...]
> > >
> > > > +     /* clone FUNC first, btf__add_func() enforces
> > > > +      * non-empty name, so use entry program's name as
> > > > +      * a placeholder, which we replace immediately
> > > > +      */
> > > > +     fn_id = btf__add_func(btf, prog->name, btf_func_linkage(fn_t), fn_t->type);
> > >
> > > Nit: Why not call this function near the end, when fn_proto_id is available?
> > >      Thus avoiding need to "guess" fn_t->type.
> > >
> >
> > I think I did it to not have to remember btf_func_linkage(fn_t)
> > (because fn_t will be invalidated) and because name_off will be reused
> > for parameters. Neither is a big deal, I'll adjust to your suggestion.
> >
> > But note, we are not guessing ID, it's guaranteed to be +1, it's an
> > API contract of btf__add_xxx() APIs.
>
> Noted, well, maybe just skip this nit in such a case.
>

I already did the change locally, as I said it's a small change, so no problem.


> > > [...]
> > >
> > > > +static int bpf_program_fixup_func_info(struct bpf_object *obj, struct bpf_program *prog)
> > > > +{
> > >
> > > [...]
> > >
> > > > +     for (i = 1, n = btf__type_cnt(btf); i < n; i++) {
> > >
> > > [...]
> > >
> > > > +
> > > > +             /* clone fn/fn_proto, unless we already did it for another arg */
> > > > +             if (func_rec->type_id == orig_fn_id) {
> > > > +                     int fn_id;
> > > > +
> > > > +                     fn_id = clone_func_btf_info(btf, orig_fn_id, prog);
> > > > +                     if (fn_id < 0) {
> > > > +                             err = fn_id;
> > > > +                             goto err_out;
> > > > +                     }
> > > > +
> > > > +                     /* point func_info record to a cloned FUNC type */
> > > > +                     func_rec->type_id = fn_id;
> > >
> > > Would it be helpful to add a log here, saying that BTF for function
> > > so and so is changed before load?
> >
> > Would it? We don't have the global subprog's name readily available,
> > it seems. So I'd need to refetch it by fn_id, then btf__str_by_offset(),
> > just to emit cryptic (for most users) notifications that something
> > about some func info was adjusted. And then the user would get this
> > same message for the same subprog, but in the context of a different
> > entry program. Just confusing, tbh.
> >
> > Unless you insist, I'd leave it as is. This logic is supposed to be
> > bullet-proof, so I'm not worried about debugging regressions with it
> > (but maybe I'm delusional).
>
> I was thinking about someone finding out that the actual in-kernel
> BTF is different from that in the program object file, while
> debugging something. Might be a bit surprising. I'm not insisting on
> this, though.

Note the "/* check if existing parameter already matches verifier
expectations */", if program is using correct types, we don't touch
BTF for that subprog. If there was `void *ctx`, we don't really lose
any information.

If they use `struct pt_regs *ctx __arg_ctx`, then yeah, it will be
updated to `struct bpf_user_pt_regs_t *ctx __arg_ctx`, but even then,
original BTF has original FUNC -> FUNC_PROTO definition. You'd need to
fetch func_info and follow BTF IDs (I'm not sure if bpftool even shows
this today).
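
I.e., the before/after is roughly (an illustration, not verbatim BTF
output):

        /* as written in the BPF source: */
        __weak int subprog(void *ctx __arg_ctx);

        /* what the adjusted BTF effectively describes for a kprobe: */
        __weak int subprog(struct bpf_user_pt_regs_t *ctx __arg_ctx);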

In short, I don't see why this would be a problem, but perhaps I
should just bite the bullet and do a feature detector for this support.

>
> > > > +             }
> > > > +
> > > > +             /* create PTR -> STRUCT type chain to mark PTR_TO_CTX argument;
> > > > +              * we do it just once per main BPF program, as all global
> > > > +              * funcs share the same program type, so need only PTR ->
> > > > +              * STRUCT type chain
> > > > +              */
> > > > +             if (ptr_id == 0) {
> > > > +                     struct_id = btf__add_struct(btf, ctx_name, 0);
> > >
> > > Nit: Maybe try looking up existing id for type ctx_name first?
> >
> > It didn't feel important and I didn't want to do another linear BTF
> > search for each such argument. It's trivial to look it up, but I still
> > feel like that's a waste... I tried to avoid many linear searches,
> > which is why I structured the logic to do one pass over BTF to find
> > all decl_tags instead of going over each function and arg and
> > searching for decl_tag.
> >
> > Let's keep it as is, if there are any reasons to try to reuse struct
> > (if it is at all present, which for kprobe, for example, is quite
> > unlikely due to fancy bpf_user_pt_regs_t name), then we can easily add
> > it with no regressions.
>
> I was thinking about possible interaction with btf_struct_access(),
> but that is not used to verify context access at the moment.
> So, probably not important.

Right, and the kernel can't trust user-provided BTF info; it will have
to find a match for it in kernel-side BTF. I actually added this logic
in a patch set (pending locally for now) that adds support for
PTR_TO_BTF_ID arguments for global funcs.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic
  2024-01-03 23:59         ` Andrii Nakryiko
@ 2024-01-04  0:09           ` Eduard Zingerman
  2024-01-04  0:27             ` Andrii Nakryiko
  0 siblings, 1 reply; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-04  0:09 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, 2024-01-03 at 15:59 -0800, Andrii Nakryiko wrote:
[...]
> > > > > +     fn_id = btf__add_func(btf, prog->name, btf_func_linkage(fn_t), fn_t->type);
> > > > 
> > > > Nit: Why not call this function near the end, when fn_proto_id is available?
> > > >      Thus avoiding need to "guess" fn_t->type.
> > > > 
> > > 
> > > I think I did it to not have to remember btf_func_linkage(fn_t)
> > > (because fn_t will be invalidated) and because name_off will be reused
> > > for parameters. Neither is a big deal, I'll adjust to your suggestion.
> > > 
> > > But note, we are not guessing ID, it's guaranteed to be +1, it's an
> > > API contract of btf__add_xxx() APIs.
> > 
> > Noted, well, maybe just skip this nit in such a case.
> > 
> 
> I already did the change locally, as I said it's a small change, so no problem.

Oh, ok, thanks.

[...]

> > > > > +             /* clone fn/fn_proto, unless we already did it for another arg */
> > > > > +             if (func_rec->type_id == orig_fn_id) {
> > > > > +                     int fn_id;
> > > > > +
> > > > > +                     fn_id = clone_func_btf_info(btf, orig_fn_id, prog);
> > > > > +                     if (fn_id < 0) {
> > > > > +                             err = fn_id;
> > > > > +                             goto err_out;
> > > > > +                     }
> > > > > +
> > > > > +                     /* point func_info record to a cloned FUNC type */
> > > > > +                     func_rec->type_id = fn_id;
> > > > 
> > > > Would it be helpful to add a log here, saying that BTF for function
> > > > so and so is changed before load?
> > > 
> > > Would it? We don't have the global subprog's name readily available,
> > > it seems. So I'd need to refetch it by fn_id, then btf__str_by_offset(),
> > > just to emit cryptic (for most users) notifications that something
> > > about some func info was adjusted. And then the user would get this
> > > same message for the same subprog, but in the context of a different
> > > entry program. Just confusing, tbh.
> > > 
> > > Unless you insist, I'd leave it as is. This logic is supposed to be
> > > bullet-proof, so I'm not worried about debugging regressions with it
> > > (but maybe I'm delusional).
> > 
> > I was thinking about someone finding out that the actual in-kernel
> > BTF is different from that in the program object file, while
> > debugging something. Might be a bit surprising. I'm not insisting on
> > this, though.
> 
> Note the "/* check if existing parameter already matches verifier
> expectations */", if program is using correct types, we don't touch
> BTF for that subprog. If there was `void *ctx`, we don't really lose
> any information.

But `void *ctx` would be changed to `struct bpf_user_pt_regs_t *ctx`, right?
And that might be a bit surprising. But whatever, if you think that adding
a log entry here is too much of a hassle, let's leave it as is.

> If they use `struct pt_regs *ctx __arg_ctx`, then yeah, it will be
> updated to `struct bpf_user_pt_regs_t *ctx __arg_ctx`, but even then,
> original BTF has original FUNC -> FUNC_PROTO definition. You'd need to
> fetch func_info and follow BTF IDs (I'm not sure if bpftool even shows
> this today).
> 
> In short, I don't see why this would be a problem, but perhaps I
> should just bite the bullet and do a feature detector for this support.

I like that the current implementation does the transformation
unconditionally; it does no harm and avoids unnecessary branching.

[...]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests
  2024-01-03 23:51       ` Eduard Zingerman
@ 2024-01-04  0:26         ` Andrii Nakryiko
  2024-01-04  0:28           ` Eduard Zingerman
  0 siblings, 1 reply; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-04  0:26 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, Jan 3, 2024 at 3:51 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2024-01-03 at 15:17 -0800, Andrii Nakryiko wrote:
> [...]
> > > However, the transformation of the sub-program parameters happens
> > > unconditionally. So it should be possible to read BTF for some of the
> > > programs after they are loaded and check if the transformation is
> > > applied as expected. That would allow checking __arg_ctx handling on
> > > the libbpf side w/o the need to run on an old kernel.
> >
> > Yes, but it's bpf_prog_info to get func_info (actually two calls, due
> > to how the API works), then parsing func_info (painful without
> > libbpf's internal helpers from libbpf_internal.h, and using them
> > means more coupling) to find the subprog's func BTF ID, and then
> > checking BTF.
> >
> > It's so painful that I don't think it's worth it, given we'll test
> > this in libbpf CI (and I did a manual check locally as well).
> >
> > Also, nothing actually prevents us from skipping this when the kernel
> > supports __arg_ctx natively, but that would require a painful feature
> > detector written with low-level APIs, which is why I decided that
> > it's simpler to just do this unconditionally.
>
> I agree that there is no need for feature detection in this case.
>

ok

> > > I think it's worth it to add such a test, wdyt?
> > >
> >
> > I feel like slacking and not adding this :) This will definitely fail
> > in libbpf CI, if it's wrong.
>
> Very few people look at libbpf CI results, and those results would be
> available only after a sync.
>
> Idk, I think that some form of testing is necessary for kernel CI:
> either this, or an additional job that executes a selected set of
> tests on an old kernel.

Alright, I'll add a test then.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic
  2024-01-04  0:09           ` Eduard Zingerman
@ 2024-01-04  0:27             ` Andrii Nakryiko
  0 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-04  0:27 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, Jan 3, 2024 at 4:09 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2024-01-03 at 15:59 -0800, Andrii Nakryiko wrote:
> [...]
> > > > > > +     fn_id = btf__add_func(btf, prog->name, btf_func_linkage(fn_t), fn_t->type);
> > > > >
> > > > > Nit: Why not call this function near the end, when fn_proto_id is available?
> > > > >      Thus avoiding need to "guess" fn_t->type.
> > > > >
> > > >
> > > > I think I did it to not have to remember btf_func_linkage(fn_t)
> > > > (because fn_t will be invalidated) and because name_off will be reused
> > > > for parameters. Neither is a big deal, I'll adjust to your suggestion.
> > > >
> > > > But note, we are not guessing ID, it's guaranteed to be +1, it's an
> > > > API contract of btf__add_xxx() APIs.
> > >
> > > Noted, well, maybe just skip this nit in such a case.
> > >
> >
> > I already did the change locally, as I said it's a small change, so no problem.
>
> Oh, ok, thanks.
>

np

> [...]
>
> > > > > > +             /* clone fn/fn_proto, unless we already did it for another arg */
> > > > > > +             if (func_rec->type_id == orig_fn_id) {
> > > > > > +                     int fn_id;
> > > > > > +
> > > > > > +                     fn_id = clone_func_btf_info(btf, orig_fn_id, prog);
> > > > > > +                     if (fn_id < 0) {
> > > > > > +                             err = fn_id;
> > > > > > +                             goto err_out;
> > > > > > +                     }
> > > > > > +
> > > > > > +                     /* point func_info record to a cloned FUNC type */
> > > > > > +                     func_rec->type_id = fn_id;
> > > > >
> > > > > Would it be helpful to add a log here, saying that BTF for function
> > > > > so and so is changed before load?
> > > >
> > > > Would it? We don't have global subprog's name readily available, it
> > > > seems. So I'd need to refetch it by fn_id, then btf__str_by_offset()
> > > > just to emit cryptic (for most users) notifications that something
> > > > about some func info was adjusted. And then the user would get this
> > > > same message for the same subprog but in the context of a different
> > > > entry program. Just confusing, tbh.
> > > >
> > > > Unless you insist, I'd leave it as is. This logic is supposed to be
> > > > bullet-proof, so I'm not worried about debugging regressions with it
> > > > (but maybe I'm delusional).
> > >
> > > I was thinking about someone finding out that actual in-kernel BTF
> > > is different from that in the program object file, while debugging
> > > something. Might be a bit surprising. I'm not insisting on this, though.
> >
> > Note the "/* check if existing parameter already matches verifier
> > expectations */" comment: if the program is using correct types, we
> > don't touch BTF for that subprog. If there was `void *ctx`, we don't
> > really lose any information.
>
> But `void *ctx` would be changed to `struct bpf_user_pt_regs_t *ctx`, right?
> And that might be a bit surprising. But whatever, if you think that adding
> a log entry here is too much of a hassle, let's leave it as is.
>

Ok.

> > If they use `struct pt_regs *ctx __arg_ctx`, then yeah, it will be
> > updated to `struct bpf_user_pt_regs_t *ctx __arg_ctx`, but even then
> > the original BTF has the original FUNC -> FUNC_PROTO definition. You'd
> > need to fetch func_info and follow BTF IDs (I'm not sure if bpftool
> > even shows this today).
> >
> > In short, I don't see why this would be a problem, but perhaps I
> > should just bite a bullet and do feature detector for this support.
>
> I like that the current implementation does the transformation
> unconditionally; it does no harm and avoids unnecessary branching.
>

ack

> [...]

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests
  2024-01-04  0:26         ` Andrii Nakryiko
@ 2024-01-04  0:28           ` Eduard Zingerman
  0 siblings, 0 replies; 28+ messages in thread
From: Eduard Zingerman @ 2024-01-04  0:28 UTC (permalink / raw)
  To: Andrii Nakryiko
  Cc: Andrii Nakryiko, bpf, ast, daniel, martin.lau, kernel-team, Jiri Olsa

On Wed, 2024-01-03 at 16:26 -0800, Andrii Nakryiko wrote:
[...]
> > > > I think it's worth it to add such a test, wdyt?
> > > > 
> > > 
> > > I feel like slacking and not adding this :) This will definitely fail
> > > in libbpf CI, if it's wrong.
> > 
> > Very few people look at libbpf CI results and those results would be
> > available only after sync.
> > 
> > Idk, I think that some form of testing is necessary for kernel CI.
> > Either this, or an additional job that executes selected set of tests
> > on old kernel.
> 
> Alright, I'll add a test then.

Thank you.

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently
  2024-01-03 23:17     ` Eduard Zingerman
@ 2024-01-04  0:30       ` Andrii Nakryiko
  0 siblings, 0 replies; 28+ messages in thread
From: Andrii Nakryiko @ 2024-01-04  0:30 UTC (permalink / raw)
  To: Eduard Zingerman
  Cc: Alexei Starovoitov, Andrii Nakryiko, bpf, Alexei Starovoitov,
	Daniel Borkmann, Martin KaFai Lau, Kernel Team

On Wed, Jan 3, 2024 at 3:18 PM Eduard Zingerman <eddyz87@gmail.com> wrote:
>
> On Wed, 2024-01-03 at 15:12 -0800, Alexei Starovoitov wrote:
> [...]
> > At the same time I agree that a public function looking different from
> > internal is a good thing to have.
> > We have LIBBPF_API that is used in the headers.
> > Maybe we should start using something similar in .c files; then
> > there will be no confusion.
> >
> > Not a strong opinion.
> >
> > Eduard,
> > what's your take?
>
> I kind of like private vs. public method encoded as '_' vs. '__'.
> But this seems to be a minor detail; personally, I grep the header
> file each time to see if LIBBPF_API is used for a certain function,
> and that is not a big deal.

I'll drop patch #1 in v3. This whole naming discussion is just a
distraction in this patch set.

Long term I think single underscores for internal functions is the
right approach, and makes working with the code simpler. But we can
save that discussion to another day.

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2024-01-04  0:30 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-02 19:00 [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 1/9] libbpf: name internal functions consistently Andrii Nakryiko
2024-01-03 23:12   ` Alexei Starovoitov
2024-01-03 23:17     ` Eduard Zingerman
2024-01-04  0:30       ` Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 2/9] libbpf: make uniform use of btf__fd() accessor inside libbpf Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 3/9] libbpf: use explicit map reuse flag to skip map creation steps Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 4/9] libbpf: don't rely on map->fd as an indicator of map being created Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 5/9] libbpf: use stable map placeholder FDs Andrii Nakryiko
2024-01-03 20:57   ` Eduard Zingerman
2024-01-03 22:46     ` Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 6/9] libbpf: move exception callbacks assignment logic into relocation step Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 7/9] libbpf: move BTF loading step after " Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 8/9] libbpf: implement __arg_ctx fallback logic Andrii Nakryiko
2024-01-03 20:57   ` Eduard Zingerman
2024-01-03 23:10     ` Andrii Nakryiko
2024-01-03 23:43       ` Eduard Zingerman
2024-01-03 23:59         ` Andrii Nakryiko
2024-01-04  0:09           ` Eduard Zingerman
2024-01-04  0:27             ` Andrii Nakryiko
2024-01-02 19:00 ` [PATCH v2 bpf-next 9/9] selftests/bpf: add arg:ctx cases to test_global_funcs tests Andrii Nakryiko
2024-01-03 20:57   ` Eduard Zingerman
2024-01-03 23:17     ` Andrii Nakryiko
2024-01-03 23:51       ` Eduard Zingerman
2024-01-04  0:26         ` Andrii Nakryiko
2024-01-04  0:28           ` Eduard Zingerman
2024-01-02 19:57 ` [PATCH v2 bpf-next 0/9] Libbpf-side __arg_ctx fallback support Andrii Nakryiko
2024-01-03 20:57 ` Eduard Zingerman
