* [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups
@ 2022-08-16  0:19 Andrii Nakryiko
  2022-08-16  0:19 ` [PATCH bpf-next 1/4] libbpf: fix potential NULL dereference when parsing ELF Andrii Nakryiko
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  0:19 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

A few fixes and clean-ups in preparation for finalizing libbpf 1.0.

The main change is switching libbpf to initializing only the relevant
portion of union bpf_attr for any given BPF command. This has been on the
wishlist for a while and is finally done. While cleaning this up, I've also
cleaned up a few other places where we didn't use an explicit memset() with
kernel UAPI structs (perf_event_attr, bpf_map_info, bpf_prog_info, etc.).
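
Concretely, the pattern used throughout the series is to compute the size of
the sub-command-relevant prefix of union bpf_attr with offsetofend(), zero
only that prefix, and pass the same size to the syscall. As a sketch
(mirroring the bpf_map_freeze() hunk in patch 2):

	const size_t attr_sz = offsetofend(union bpf_attr, map_fd);
	union bpf_attr attr;
	int ret;

	/* clear only the fields BPF_MAP_FREEZE actually consumes */
	memset(&attr, 0, attr_sz);
	attr.map_fd = fd;

	/* tell the kernel the same (truncated) size we initialized */
	ret = sys_bpf(BPF_MAP_FREEZE, &attr, attr_sz);
	return libbpf_err_errno(ret);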

There are also a few fixes to test_progs that came up while testing
selftests in release mode.


Andrii Nakryiko (4):
  libbpf: fix potential NULL dereference when parsing ELF
  libbpf: streamline bpf_attr and perf_event_attr initialization
  libbpf: clean up deprecated and legacy aliases
  selftests/bpf: few fixes for selftests/bpf built in release mode

 tools/lib/bpf/bpf.c                           | 178 ++++++++++--------
 tools/lib/bpf/btf.c                           |   2 -
 tools/lib/bpf/btf.h                           |   1 -
 tools/lib/bpf/libbpf.c                        |  47 +++--
 tools/lib/bpf/libbpf_legacy.h                 |   2 +
 tools/lib/bpf/netlink.c                       |   3 +-
 tools/lib/bpf/skel_internal.h                 |  10 +-
 .../selftests/bpf/prog_tests/attach_probe.c   |   6 +-
 .../selftests/bpf/prog_tests/bpf_cookie.c     |   2 +-
 .../selftests/bpf/prog_tests/task_pt_regs.c   |   2 +-
 tools/testing/selftests/bpf/xskxceiver.c      |   2 +-
 11 files changed, 147 insertions(+), 108 deletions(-)

-- 
2.30.2



* [PATCH bpf-next 1/4] libbpf: fix potential NULL dereference when parsing ELF
  2022-08-16  0:19 [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups Andrii Nakryiko
@ 2022-08-16  0:19 ` Andrii Nakryiko
  2022-08-16 21:31   ` Hao Luo
  2022-08-16  0:19 ` [PATCH bpf-next 2/4] libbpf: streamline bpf_attr and perf_event_attr initialization Andrii Nakryiko
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  0:19 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Fix the if condition that filters out empty ELF sections to prevent a NULL
dereference.

Fixes: 47ea7417b074 ("libbpf: Skip empty sections in bpf_object__init_global_data_maps")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/libbpf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index aa05a99b913d..5f0281e61437 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -1646,7 +1646,7 @@ static int bpf_object__init_global_data_maps(struct bpf_object *obj)
 		sec_desc = &obj->efile.secs[sec_idx];
 
 		/* Skip recognized sections with size 0. */
-		if (sec_desc->data && sec_desc->data->d_size == 0)
+		if (!sec_desc->data || sec_desc->data->d_size == 0)
 			continue;
 
 		switch (sec_desc->sec_type) {
-- 
2.30.2



* [PATCH bpf-next 2/4] libbpf: streamline bpf_attr and perf_event_attr initialization
  2022-08-16  0:19 [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups Andrii Nakryiko
  2022-08-16  0:19 ` [PATCH bpf-next 1/4] libbpf: fix potential NULL dereference when parsing ELF Andrii Nakryiko
@ 2022-08-16  0:19 ` Andrii Nakryiko
  2022-08-16 21:33   ` Hao Luo
  2022-08-16  0:19 ` [PATCH bpf-next 3/4] libbpf: clean up deprecated and legacy aliases Andrii Nakryiko
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  0:19 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Make sure that the entire libbpf code base initializes bpf_attr and
perf_event_attr with memset(0). For bpf_attr, also make sure we clear and
pass to the kernel only the relevant part of the union. bpf_attr is a huge
union of independent sub-command attributes, so there is no need to clear
and pass the entire union bpf_attr, which keeps growing over time, and for
most commands this growth is completely irrelevant.

A few cases where we were relying on compiler initialization of BPF UAPI
structs (like bpf_prog_info, bpf_map_info, etc.) with `= {};` were switched
to the memset(0) pattern for future-proofing.
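
perf_event_attr, on the other hand, keeps being cleared in full, with
attr.size set explicitly to the struct size. As a sketch (mirroring the
perf_event_open_tracepoint() hunk below, where tp_id comes from
determine_tracepoint_id()):

	const size_t attr_sz = sizeof(struct perf_event_attr);
	struct perf_event_attr attr;

	/* zero the whole struct and record its size for the kernel */
	memset(&attr, 0, attr_sz);
	attr.type = PERF_TYPE_TRACEPOINT;
	attr.size = attr_sz;
	attr.config = tp_id;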

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/bpf.c           | 173 ++++++++++++++++++++--------------
 tools/lib/bpf/libbpf.c        |  43 ++++++---
 tools/lib/bpf/netlink.c       |   3 +-
 tools/lib/bpf/skel_internal.h |  10 +-
 4 files changed, 138 insertions(+), 91 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 575867d69496..e3a0bd7efa2f 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -105,7 +105,7 @@ int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts)
  */
 int probe_memcg_account(void)
 {
-	const size_t prog_load_attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
+	const size_t attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
 	struct bpf_insn insns[] = {
 		BPF_EMIT_CALL(BPF_FUNC_ktime_get_coarse_ns),
 		BPF_EXIT_INSN(),
@@ -115,13 +115,13 @@ int probe_memcg_account(void)
 	int prog_fd;
 
 	/* attempt loading freplace trying to use custom BTF */
-	memset(&attr, 0, prog_load_attr_sz);
+	memset(&attr, 0, attr_sz);
 	attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
 	attr.insns = ptr_to_u64(insns);
 	attr.insn_cnt = insn_cnt;
 	attr.license = ptr_to_u64("GPL");
 
-	prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, prog_load_attr_sz);
+	prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, attr_sz);
 	if (prog_fd >= 0) {
 		close(prog_fd);
 		return 1;
@@ -232,6 +232,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 		  const struct bpf_insn *insns, size_t insn_cnt,
 		  const struct bpf_prog_load_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, fd_array);
 	void *finfo = NULL, *linfo = NULL;
 	const char *func_info, *line_info;
 	__u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
@@ -251,7 +252,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 	if (attempts == 0)
 		attempts = PROG_LOAD_ATTEMPTS;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 
 	attr.prog_type = prog_type;
 	attr.expected_attach_type = OPTS_GET(opts, expected_attach_type, 0);
@@ -314,7 +315,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 		attr.log_level = log_level;
 	}
 
-	fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
+	fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
 	if (fd >= 0)
 		return fd;
 
@@ -354,7 +355,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 			break;
 		}
 
-		fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
+		fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
 		if (fd >= 0)
 			goto done;
 	}
@@ -368,7 +369,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 		attr.log_size = log_size;
 		attr.log_level = 1;
 
-		fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
+		fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
 	}
 done:
 	/* free() doesn't affect errno, so we don't need to restore it */
@@ -380,127 +381,136 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
 int bpf_map_update_elem(int fd, const void *key, const void *value,
 			__u64 flags)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 	attr.value = ptr_to_u64(value);
 	attr.flags = flags;
 
-	ret = sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_lookup_elem(int fd, const void *key, void *value)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 	attr.value = ptr_to_u64(value);
 
-	ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 	attr.value = ptr_to_u64(value);
 	attr.flags = flags;
 
-	ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_lookup_and_delete_elem(int fd, const void *key, void *value)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 	attr.value = ptr_to_u64(value);
 
-	ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, __u64 flags)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 	attr.value = ptr_to_u64(value);
 	attr.flags = flags;
 
-	ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_delete_elem(int fd, const void *key)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 
-	ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_delete_elem_flags(int fd, const void *key, __u64 flags)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 	attr.flags = flags;
 
-	ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_get_next_key(int fd, const void *key, void *next_key)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, next_key);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 	attr.key = ptr_to_u64(key);
 	attr.next_key = ptr_to_u64(next_key);
 
-	ret = sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_map_freeze(int fd)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, map_fd);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_fd = fd;
 
-	ret = sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_MAP_FREEZE, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
@@ -509,13 +519,14 @@ static int bpf_map_batch_common(int cmd, int fd, void  *in_batch,
 				__u32 *count,
 				const struct bpf_map_batch_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, batch);
 	union bpf_attr attr;
 	int ret;
 
 	if (!OPTS_VALID(opts, bpf_map_batch_opts))
 		return libbpf_err(-EINVAL);
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.batch.map_fd = fd;
 	attr.batch.in_batch = ptr_to_u64(in_batch);
 	attr.batch.out_batch = ptr_to_u64(out_batch);
@@ -525,7 +536,7 @@ static int bpf_map_batch_common(int cmd, int fd, void  *in_batch,
 	attr.batch.elem_flags  = OPTS_GET(opts, elem_flags, 0);
 	attr.batch.flags = OPTS_GET(opts, flags, 0);
 
-	ret = sys_bpf(cmd, &attr, sizeof(attr));
+	ret = sys_bpf(cmd, &attr, attr_sz);
 	*count = attr.batch.count;
 
 	return libbpf_err_errno(ret);
@@ -564,14 +575,15 @@ int bpf_map_update_batch(int fd, const void *keys, const void *values, __u32 *co
 
 int bpf_obj_pin(int fd, const char *pathname)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, file_flags);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.pathname = ptr_to_u64((void *)pathname);
 	attr.bpf_fd = fd;
 
-	ret = sys_bpf(BPF_OBJ_PIN, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_OBJ_PIN, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
@@ -582,17 +594,18 @@ int bpf_obj_get(const char *pathname)
 
 int bpf_obj_get_opts(const char *pathname, const struct bpf_obj_get_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, file_flags);
 	union bpf_attr attr;
 	int fd;
 
 	if (!OPTS_VALID(opts, bpf_obj_get_opts))
 		return libbpf_err(-EINVAL);
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.pathname = ptr_to_u64((void *)pathname);
 	attr.file_flags = OPTS_GET(opts, file_flags, 0);
 
-	fd = sys_bpf_fd(BPF_OBJ_GET, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_OBJ_GET, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
@@ -610,20 +623,21 @@ int bpf_prog_attach_opts(int prog_fd, int target_fd,
 			  enum bpf_attach_type type,
 			  const struct bpf_prog_attach_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
 	union bpf_attr attr;
 	int ret;
 
 	if (!OPTS_VALID(opts, bpf_prog_attach_opts))
 		return libbpf_err(-EINVAL);
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.target_fd	   = target_fd;
 	attr.attach_bpf_fd = prog_fd;
 	attr.attach_type   = type;
 	attr.attach_flags  = OPTS_GET(opts, flags, 0);
 	attr.replace_bpf_fd = OPTS_GET(opts, replace_prog_fd, 0);
 
-	ret = sys_bpf(BPF_PROG_ATTACH, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_PROG_ATTACH, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
@@ -634,28 +648,30 @@ int bpf_prog_attach_xattr(int prog_fd, int target_fd,
 
 int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.target_fd	 = target_fd;
 	attr.attach_type = type;
 
-	ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_PROG_DETACH, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_prog_detach2(int prog_fd, int target_fd, enum bpf_attach_type type)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.target_fd	 = target_fd;
 	attr.attach_bpf_fd = prog_fd;
 	attr.attach_type = type;
 
-	ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_PROG_DETACH, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
@@ -663,6 +679,7 @@ int bpf_link_create(int prog_fd, int target_fd,
 		    enum bpf_attach_type attach_type,
 		    const struct bpf_link_create_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, link_create);
 	__u32 target_btf_id, iter_info_len;
 	union bpf_attr attr;
 	int fd, err;
@@ -681,7 +698,7 @@ int bpf_link_create(int prog_fd, int target_fd,
 			return libbpf_err(-EINVAL);
 	}
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.link_create.prog_fd = prog_fd;
 	attr.link_create.target_fd = target_fd;
 	attr.link_create.attach_type = attach_type;
@@ -725,7 +742,7 @@ int bpf_link_create(int prog_fd, int target_fd,
 		break;
 	}
 proceed:
-	fd = sys_bpf_fd(BPF_LINK_CREATE, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_LINK_CREATE, &attr, attr_sz);
 	if (fd >= 0)
 		return fd;
 	/* we'll get EINVAL if LINK_CREATE doesn't support attaching fentry
@@ -761,44 +778,47 @@ int bpf_link_create(int prog_fd, int target_fd,
 
 int bpf_link_detach(int link_fd)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, link_detach);
 	union bpf_attr attr;
 	int ret;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.link_detach.link_fd = link_fd;
 
-	ret = sys_bpf(BPF_LINK_DETACH, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_LINK_DETACH, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_link_update(int link_fd, int new_prog_fd,
 		    const struct bpf_link_update_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, link_update);
 	union bpf_attr attr;
 	int ret;
 
 	if (!OPTS_VALID(opts, bpf_link_update_opts))
 		return libbpf_err(-EINVAL);
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.link_update.link_fd = link_fd;
 	attr.link_update.new_prog_fd = new_prog_fd;
 	attr.link_update.flags = OPTS_GET(opts, flags, 0);
 	attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);
 
-	ret = sys_bpf(BPF_LINK_UPDATE, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_LINK_UPDATE, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
 
 int bpf_iter_create(int link_fd)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, iter_create);
 	union bpf_attr attr;
 	int fd;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.iter_create.link_fd = link_fd;
 
-	fd = sys_bpf_fd(BPF_ITER_CREATE, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_ITER_CREATE, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
@@ -806,13 +826,14 @@ int bpf_prog_query_opts(int target_fd,
 			enum bpf_attach_type type,
 			struct bpf_prog_query_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, query);
 	union bpf_attr attr;
 	int ret;
 
 	if (!OPTS_VALID(opts, bpf_prog_query_opts))
 		return libbpf_err(-EINVAL);
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 
 	attr.query.target_fd	= target_fd;
 	attr.query.attach_type	= type;
@@ -821,7 +842,7 @@ int bpf_prog_query_opts(int target_fd,
 	attr.query.prog_ids	= ptr_to_u64(OPTS_GET(opts, prog_ids, NULL));
 	attr.query.prog_attach_flags = ptr_to_u64(OPTS_GET(opts, prog_attach_flags, NULL));
 
-	ret = sys_bpf(BPF_PROG_QUERY, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_PROG_QUERY, &attr, attr_sz);
 
 	OPTS_SET(opts, attach_flags, attr.query.attach_flags);
 	OPTS_SET(opts, prog_cnt, attr.query.prog_cnt);
@@ -850,13 +871,14 @@ int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags,
 
 int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, test);
 	union bpf_attr attr;
 	int ret;
 
 	if (!OPTS_VALID(opts, bpf_test_run_opts))
 		return libbpf_err(-EINVAL);
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.test.prog_fd = prog_fd;
 	attr.test.batch_size = OPTS_GET(opts, batch_size, 0);
 	attr.test.cpu = OPTS_GET(opts, cpu, 0);
@@ -872,7 +894,7 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
 	attr.test.data_in = ptr_to_u64(OPTS_GET(opts, data_in, NULL));
 	attr.test.data_out = ptr_to_u64(OPTS_GET(opts, data_out, NULL));
 
-	ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, attr_sz);
 
 	OPTS_SET(opts, data_size_out, attr.test.data_size_out);
 	OPTS_SET(opts, ctx_size_out, attr.test.ctx_size_out);
@@ -884,13 +906,14 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
 
 static int bpf_obj_get_next_id(__u32 start_id, __u32 *next_id, int cmd)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
 	union bpf_attr attr;
 	int err;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.start_id = start_id;
 
-	err = sys_bpf(cmd, &attr, sizeof(attr));
+	err = sys_bpf(cmd, &attr, attr_sz);
 	if (!err)
 		*next_id = attr.next_id;
 
@@ -919,80 +942,84 @@ int bpf_link_get_next_id(__u32 start_id, __u32 *next_id)
 
 int bpf_prog_get_fd_by_id(__u32 id)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
 	union bpf_attr attr;
 	int fd;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.prog_id = id;
 
-	fd = sys_bpf_fd(BPF_PROG_GET_FD_BY_ID, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_PROG_GET_FD_BY_ID, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
 int bpf_map_get_fd_by_id(__u32 id)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
 	union bpf_attr attr;
 	int fd;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.map_id = id;
 
-	fd = sys_bpf_fd(BPF_MAP_GET_FD_BY_ID, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_MAP_GET_FD_BY_ID, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
 int bpf_btf_get_fd_by_id(__u32 id)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
 	union bpf_attr attr;
 	int fd;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.btf_id = id;
 
-	fd = sys_bpf_fd(BPF_BTF_GET_FD_BY_ID, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_BTF_GET_FD_BY_ID, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
 int bpf_link_get_fd_by_id(__u32 id)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
 	union bpf_attr attr;
 	int fd;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.link_id = id;
 
-	fd = sys_bpf_fd(BPF_LINK_GET_FD_BY_ID, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_LINK_GET_FD_BY_ID, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
 int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, info);
 	union bpf_attr attr;
 	int err;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.info.bpf_fd = bpf_fd;
 	attr.info.info_len = *info_len;
 	attr.info.info = ptr_to_u64(info);
 
-	err = sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));
-
+	err = sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, attr_sz);
 	if (!err)
 		*info_len = attr.info.info_len;
-
 	return libbpf_err_errno(err);
 }
 
 int bpf_raw_tracepoint_open(const char *name, int prog_fd)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, raw_tracepoint);
 	union bpf_attr attr;
 	int fd;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.raw_tracepoint.name = ptr_to_u64(name);
 	attr.raw_tracepoint.prog_fd = prog_fd;
 
-	fd = sys_bpf_fd(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_RAW_TRACEPOINT_OPEN, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
@@ -1048,16 +1075,18 @@ int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, __u32 *buf_len,
 		      __u32 *prog_id, __u32 *fd_type, __u64 *probe_offset,
 		      __u64 *probe_addr)
 {
-	union bpf_attr attr = {};
+	const size_t attr_sz = offsetofend(union bpf_attr, task_fd_query);
+	union bpf_attr attr;
 	int err;
 
+	memset(&attr, 0, attr_sz);
 	attr.task_fd_query.pid = pid;
 	attr.task_fd_query.fd = fd;
 	attr.task_fd_query.flags = flags;
 	attr.task_fd_query.buf = ptr_to_u64(buf);
 	attr.task_fd_query.buf_len = *buf_len;
 
-	err = sys_bpf(BPF_TASK_FD_QUERY, &attr, sizeof(attr));
+	err = sys_bpf(BPF_TASK_FD_QUERY, &attr, attr_sz);
 
 	*buf_len = attr.task_fd_query.buf_len;
 	*prog_id = attr.task_fd_query.prog_id;
@@ -1070,30 +1099,32 @@ int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, __u32 *buf_len,
 
 int bpf_enable_stats(enum bpf_stats_type type)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, enable_stats);
 	union bpf_attr attr;
 	int fd;
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.enable_stats.type = type;
 
-	fd = sys_bpf_fd(BPF_ENABLE_STATS, &attr, sizeof(attr));
+	fd = sys_bpf_fd(BPF_ENABLE_STATS, &attr, attr_sz);
 	return libbpf_err_errno(fd);
 }
 
 int bpf_prog_bind_map(int prog_fd, int map_fd,
 		      const struct bpf_prog_bind_opts *opts)
 {
+	const size_t attr_sz = offsetofend(union bpf_attr, prog_bind_map);
 	union bpf_attr attr;
 	int ret;
 
 	if (!OPTS_VALID(opts, bpf_prog_bind_opts))
 		return libbpf_err(-EINVAL);
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, attr_sz);
 	attr.prog_bind_map.prog_fd = prog_fd;
 	attr.prog_bind_map.map_fd = map_fd;
 	attr.prog_bind_map.flags = OPTS_GET(opts, flags, 0);
 
-	ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, sizeof(attr));
+	ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, attr_sz);
 	return libbpf_err_errno(ret);
 }
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 5f0281e61437..89f192a3ef77 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -4284,11 +4284,12 @@ int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
 
 int bpf_map__reuse_fd(struct bpf_map *map, int fd)
 {
-	struct bpf_map_info info = {};
+	struct bpf_map_info info;
 	__u32 len = sizeof(info), name_len;
 	int new_fd, err;
 	char *new_name;
 
+	memset(&info, 0, len);
 	err = bpf_obj_get_info_by_fd(fd, &info, &len);
 	if (err && errno == EINVAL)
 		err = bpf_get_map_info_from_fdinfo(fd, &info);
@@ -4830,13 +4831,12 @@ bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id)
 
 static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
 {
-	struct bpf_map_info map_info = {};
+	struct bpf_map_info map_info;
 	char msg[STRERR_BUFSIZE];
-	__u32 map_info_len;
+	__u32 map_info_len = sizeof(map_info);
 	int err;
 
-	map_info_len = sizeof(map_info);
-
+	memset(&map_info, 0, map_info_len);
 	err = bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len);
 	if (err && errno == EINVAL)
 		err = bpf_get_map_info_from_fdinfo(map_fd, &map_info);
@@ -8994,11 +8994,12 @@ int libbpf_find_vmlinux_btf_id(const char *name,
 
 static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
 {
-	struct bpf_prog_info info = {};
+	struct bpf_prog_info info;
 	__u32 info_len = sizeof(info);
 	struct btf *btf;
 	int err;
 
+	memset(&info, 0, info_len);
 	err = bpf_obj_get_info_by_fd(attach_prog_fd, &info, &info_len);
 	if (err) {
 		pr_warn("failed bpf_obj_get_info_by_fd for FD %d: %d\n",
@@ -9826,13 +9827,16 @@ static int determine_uprobe_retprobe_bit(void)
 static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
 				 uint64_t offset, int pid, size_t ref_ctr_off)
 {
-	struct perf_event_attr attr = {};
+	const size_t attr_sz = sizeof(struct perf_event_attr);
+	struct perf_event_attr attr;
 	char errmsg[STRERR_BUFSIZE];
 	int type, pfd;
 
 	if (ref_ctr_off >= (1ULL << PERF_UPROBE_REF_CTR_OFFSET_BITS))
 		return -EINVAL;
 
+	memset(&attr, 0, attr_sz);
+
 	type = uprobe ? determine_uprobe_perf_type()
 		      : determine_kprobe_perf_type();
 	if (type < 0) {
@@ -9853,7 +9857,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
 		}
 		attr.config |= 1 << bit;
 	}
-	attr.size = sizeof(attr);
+	attr.size = attr_sz;
 	attr.type = type;
 	attr.config |= (__u64)ref_ctr_off << PERF_UPROBE_REF_CTR_OFFSET_SHIFT;
 	attr.config1 = ptr_to_u64(name); /* kprobe_func or uprobe_path */
@@ -9952,7 +9956,8 @@ static int determine_kprobe_perf_type_legacy(const char *probe_name, bool retpro
 static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
 					 const char *kfunc_name, size_t offset, int pid)
 {
-	struct perf_event_attr attr = {};
+	const size_t attr_sz = sizeof(struct perf_event_attr);
+	struct perf_event_attr attr;
 	char errmsg[STRERR_BUFSIZE];
 	int type, pfd, err;
 
@@ -9971,7 +9976,9 @@ static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
 			libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
 		goto err_clean_legacy;
 	}
-	attr.size = sizeof(attr);
+
+	memset(&attr, 0, attr_sz);
+	attr.size = attr_sz;
 	attr.config = type;
 	attr.type = PERF_TYPE_TRACEPOINT;
 
@@ -10428,6 +10435,7 @@ static int determine_uprobe_perf_type_legacy(const char *probe_name, bool retpro
 static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
 					 const char *binary_path, size_t offset, int pid)
 {
+	const size_t attr_sz = sizeof(struct perf_event_attr);
 	struct perf_event_attr attr;
 	int type, pfd, err;
 
@@ -10445,8 +10453,8 @@ static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
 		goto err_clean_legacy;
 	}
 
-	memset(&attr, 0, sizeof(attr));
-	attr.size = sizeof(attr);
+	memset(&attr, 0, attr_sz);
+	attr.size = attr_sz;
 	attr.config = type;
 	attr.type = PERF_TYPE_TRACEPOINT;
 
@@ -10985,7 +10993,8 @@ static int determine_tracepoint_id(const char *tp_category,
 static int perf_event_open_tracepoint(const char *tp_category,
 				      const char *tp_name)
 {
-	struct perf_event_attr attr = {};
+	const size_t attr_sz = sizeof(struct perf_event_attr);
+	struct perf_event_attr attr;
 	char errmsg[STRERR_BUFSIZE];
 	int tp_id, pfd, err;
 
@@ -10997,8 +11006,9 @@ static int perf_event_open_tracepoint(const char *tp_category,
 		return tp_id;
 	}
 
+	memset(&attr, 0, attr_sz);
 	attr.type = PERF_TYPE_TRACEPOINT;
-	attr.size = sizeof(attr);
+	attr.size = attr_sz;
 	attr.config = tp_id;
 
 	pfd = syscall(__NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
@@ -11618,12 +11628,15 @@ struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt,
 				     void *ctx,
 				     const struct perf_buffer_opts *opts)
 {
+	const size_t attr_sz = sizeof(struct perf_event_attr);
 	struct perf_buffer_params p = {};
-	struct perf_event_attr attr = {};
+	struct perf_event_attr attr;
 
 	if (!OPTS_VALID(opts, perf_buffer_opts))
 		return libbpf_err_ptr(-EINVAL);
 
+	memset(&attr, 0, attr_sz);
+	attr.size = attr_sz;
 	attr.config = PERF_COUNT_SW_BPF_OUTPUT;
 	attr.type = PERF_TYPE_SOFTWARE;
 	attr.sample_type = PERF_SAMPLE_RAW;
diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
index 6c013168032d..35104580870c 100644
--- a/tools/lib/bpf/netlink.c
+++ b/tools/lib/bpf/netlink.c
@@ -587,11 +587,12 @@ static int get_tc_info(struct nlmsghdr *nh, libbpf_dump_nlmsg_t fn,
 
 static int tc_add_fd_and_name(struct libbpf_nla_req *req, int fd)
 {
-	struct bpf_prog_info info = {};
+	struct bpf_prog_info info;
 	__u32 info_len = sizeof(info);
 	char name[256];
 	int len, ret;
 
+	memset(&info, 0, info_len);
 	ret = bpf_obj_get_info_by_fd(fd, &info, &info_len);
 	if (ret < 0)
 		return ret;
diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
index bd6f4505e7b1..365d769e0357 100644
--- a/tools/lib/bpf/skel_internal.h
+++ b/tools/lib/bpf/skel_internal.h
@@ -285,6 +285,8 @@ static inline int skel_link_create(int prog_fd, int target_fd,
 
 static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 {
+	const size_t prog_load_attr_sz = offsetofend(union bpf_attr, fd_array);
+	const size_t test_run_attr_sz = offsetofend(union bpf_attr, test);
 	int map_fd = -1, prog_fd = -1, key = 0, err;
 	union bpf_attr attr;
 
@@ -302,7 +304,7 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 		goto out;
 	}
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, prog_load_attr_sz);
 	attr.prog_type = BPF_PROG_TYPE_SYSCALL;
 	attr.insns = (long) opts->insns;
 	attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
@@ -313,18 +315,18 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
 	attr.log_size = opts->ctx->log_size;
 	attr.log_buf = opts->ctx->log_buf;
 	attr.prog_flags = BPF_F_SLEEPABLE;
-	err = prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
+	err = prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, prog_load_attr_sz);
 	if (prog_fd < 0) {
 		opts->errstr = "failed to load loader prog";
 		set_err;
 		goto out;
 	}
 
-	memset(&attr, 0, sizeof(attr));
+	memset(&attr, 0, test_run_attr_sz);
 	attr.test.prog_fd = prog_fd;
 	attr.test.ctx_in = (long) opts->ctx;
 	attr.test.ctx_size_in = opts->ctx->sz;
-	err = skel_sys_bpf(BPF_PROG_RUN, &attr, sizeof(attr));
+	err = skel_sys_bpf(BPF_PROG_RUN, &attr, test_run_attr_sz);
 	if (err < 0 || (int)attr.test.retval < 0) {
 		opts->errstr = "failed to execute loader prog";
 		if (err < 0) {
-- 
2.30.2



* [PATCH bpf-next 3/4] libbpf: clean up deprecated and legacy aliases
  2022-08-16  0:19 [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups Andrii Nakryiko
  2022-08-16  0:19 ` [PATCH bpf-next 1/4] libbpf: fix potential NULL dereference when parsing ELF Andrii Nakryiko
  2022-08-16  0:19 ` [PATCH bpf-next 2/4] libbpf: streamline bpf_attr and perf_event_attr initialization Andrii Nakryiko
@ 2022-08-16  0:19 ` Andrii Nakryiko
  2022-08-16 21:33   ` Hao Luo
  2022-08-16  0:19 ` [PATCH bpf-next 4/4] selftests/bpf: few fixes for selftests/bpf built in release mode Andrii Nakryiko
  2022-08-17 21:00 ` [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups patchwork-bot+netdevbpf
  4 siblings, 1 reply; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  0:19 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Remove two missed deprecated APIs that were aliased to new APIs:
bpf_object__unload and bpf_prog_attach_xattr.

Also move legacy API libbpf_find_kernel_btf (aliased to
btf__load_vmlinux_btf) into libbpf_legacy.h.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/lib/bpf/bpf.c           | 5 -----
 tools/lib/bpf/btf.c           | 2 --
 tools/lib/bpf/btf.h           | 1 -
 tools/lib/bpf/libbpf.c        | 2 --
 tools/lib/bpf/libbpf_legacy.h | 2 ++
 5 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index e3a0bd7efa2f..1d49a0352836 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -641,11 +641,6 @@ int bpf_prog_attach_opts(int prog_fd, int target_fd,
 	return libbpf_err_errno(ret);
 }
 
-__attribute__((alias("bpf_prog_attach_opts")))
-int bpf_prog_attach_xattr(int prog_fd, int target_fd,
-			  enum bpf_attach_type type,
-			  const struct bpf_prog_attach_opts *opts);
-
 int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
 {
 	const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 2d14f1a52d7a..361131518d63 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -1225,8 +1225,6 @@ int btf__load_into_kernel(struct btf *btf)
 	return btf_load_into_kernel(btf, NULL, 0, 0);
 }
 
-int btf__load(struct btf *) __attribute__((alias("btf__load_into_kernel")));
-
 int btf__fd(const struct btf *btf)
 {
 	return btf->fd;
diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
index 583760df83b4..ae543144ee30 100644
--- a/tools/lib/bpf/btf.h
+++ b/tools/lib/bpf/btf.h
@@ -116,7 +116,6 @@ LIBBPF_API struct btf *btf__parse_raw_split(const char *path, struct btf *base_b
 
 LIBBPF_API struct btf *btf__load_vmlinux_btf(void);
 LIBBPF_API struct btf *btf__load_module_btf(const char *module_name, struct btf *vmlinux_btf);
-LIBBPF_API struct btf *libbpf_find_kernel_btf(void);
 
 LIBBPF_API struct btf *btf__load_from_kernel_by_id(__u32 id);
 LIBBPF_API struct btf *btf__load_from_kernel_by_id_split(__u32 id, struct btf *base_btf);
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 89f192a3ef77..9aaf6f7e89df 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -7260,8 +7260,6 @@ static int bpf_object_unload(struct bpf_object *obj)
 	return 0;
 }
 
-int bpf_object__unload(struct bpf_object *obj) __attribute__((alias("bpf_object_unload")));
-
 static int bpf_object__sanitize_maps(struct bpf_object *obj)
 {
 	struct bpf_map *m;
diff --git a/tools/lib/bpf/libbpf_legacy.h b/tools/lib/bpf/libbpf_legacy.h
index 5b7e0155db6a..1e1be467bede 100644
--- a/tools/lib/bpf/libbpf_legacy.h
+++ b/tools/lib/bpf/libbpf_legacy.h
@@ -125,6 +125,8 @@ struct bpf_map;
 struct btf;
 struct btf_ext;
 
+LIBBPF_API struct btf *libbpf_find_kernel_btf(void);
+
 LIBBPF_API enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog);
 LIBBPF_API enum bpf_attach_type bpf_program__get_expected_attach_type(const struct bpf_program *prog);
 LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map);
-- 
2.30.2



* [PATCH bpf-next 4/4] selftests/bpf: few fixes for selftests/bpf built in release mode
  2022-08-16  0:19 [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups Andrii Nakryiko
                   ` (2 preceding siblings ...)
  2022-08-16  0:19 ` [PATCH bpf-next 3/4] libbpf: clean up deprecated and legacy aliases Andrii Nakryiko
@ 2022-08-16  0:19 ` Andrii Nakryiko
  2022-08-16 21:34   ` Hao Luo
  2022-08-17 21:00 ` [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups patchwork-bot+netdevbpf
  4 siblings, 1 reply; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16  0:19 UTC (permalink / raw)
  To: bpf, ast, daniel; +Cc: andrii, kernel-team

Fix a few issues found when building and running test_progs in release
mode.

First, a potentially uninitialized idx variable in xskxceiver is
force-initialized to zero to satisfy the compiler.

Second, a few uprobe trigger functions break in release mode unless marked
noinline: being static, they can be inlined or dropped by the optimizer,
leaving no attach point for the uprobe. Add noinline to make sure
everything works.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
---
 tools/testing/selftests/bpf/prog_tests/attach_probe.c | 6 +++---
 tools/testing/selftests/bpf/prog_tests/bpf_cookie.c   | 2 +-
 tools/testing/selftests/bpf/prog_tests/task_pt_regs.c | 2 +-
 tools/testing/selftests/bpf/xskxceiver.c              | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
index 0b899d2d8ea7..9566d9d2f6ee 100644
--- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
+++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
@@ -6,19 +6,19 @@
 volatile unsigned short uprobe_ref_ctr __attribute__((unused)) __attribute((section(".probes")));
 
 /* uprobe attach point */
-static void trigger_func(void)
+static noinline void trigger_func(void)
 {
 	asm volatile ("");
 }
 
 /* attach point for byname uprobe */
-static void trigger_func2(void)
+static noinline void trigger_func2(void)
 {
 	asm volatile ("");
 }
 
 /* attach point for byname sleepable uprobe */
-static void trigger_func3(void)
+static noinline void trigger_func3(void)
 {
 	asm volatile ("");
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
index 2974b44f80fa..2be2d61954bc 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
@@ -13,7 +13,7 @@
 #include "kprobe_multi.skel.h"
 
 /* uprobe attach point */
-static void trigger_func(void)
+static noinline void trigger_func(void)
 {
 	asm volatile ("");
 }
diff --git a/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c b/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
index 61935e7e056a..f000734a3d1f 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
@@ -4,7 +4,7 @@
 #include "test_task_pt_regs.skel.h"
 
 /* uprobe attach point */
-static void trigger_func(void)
+static noinline void trigger_func(void)
 {
 	asm volatile ("");
 }
diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index 20b44ab32a06..14b4737b223c 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -922,7 +922,7 @@ static int __send_pkts(struct ifobject *ifobject, u32 *pkt_nb, struct pollfd *fd
 {
 	struct xsk_socket_info *xsk = ifobject->xsk;
 	bool use_poll = ifobject->use_poll;
-	u32 i, idx, ret, valid_pkts = 0;
+	u32 i, idx = 0, ret, valid_pkts = 0;
 
 	while (xsk_ring_prod__reserve(&xsk->tx, BATCH_SIZE, &idx) < BATCH_SIZE) {
 		if (use_poll) {
-- 
2.30.2



* Re: [PATCH bpf-next 1/4] libbpf: fix potential NULL dereference when parsing ELF
  2022-08-16  0:19 ` [PATCH bpf-next 1/4] libbpf: fix potential NULL dereference when parsing ELF Andrii Nakryiko
@ 2022-08-16 21:31   ` Hao Luo
  0 siblings, 0 replies; 13+ messages in thread
From: Hao Luo @ 2022-08-16 21:31 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

On Mon, Aug 15, 2022 at 8:52 PM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Fix the if condition that filters out empty ELF sections to prevent a
> NULL dereference.
>
> Fixes: 47ea7417b074 ("libbpf: Skip empty sections in bpf_object__init_global_data_maps")
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>

Acked-by: Hao Luo <haoluo@google.com>

> ---
>  tools/lib/bpf/libbpf.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index aa05a99b913d..5f0281e61437 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -1646,7 +1646,7 @@ static int bpf_object__init_global_data_maps(struct bpf_object *obj)
>                 sec_desc = &obj->efile.secs[sec_idx];
>
>                 /* Skip recognized sections with size 0. */
> -               if (sec_desc->data && sec_desc->data->d_size == 0)
> +               if (!sec_desc->data || sec_desc->data->d_size == 0)
>                         continue;
>
>                 switch (sec_desc->sec_type) {
> --
> 2.30.2
>


* Re: [PATCH bpf-next 2/4] libbpf: streamline bpf_attr and perf_event_attr initialization
  2022-08-16  0:19 ` [PATCH bpf-next 2/4] libbpf: streamline bpf_attr and perf_event_attr initialization Andrii Nakryiko
@ 2022-08-16 21:33   ` Hao Luo
  2022-08-16 21:54     ` Andrii Nakryiko
  0 siblings, 1 reply; 13+ messages in thread
From: Hao Luo @ 2022-08-16 21:33 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

On Mon, Aug 15, 2022 at 8:53 PM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Make sure that the entire libbpf code base initializes bpf_attr and
> perf_event_attr with memset(0). For bpf_attr, also make sure we clear and
> pass to the kernel only the relevant part of the union. bpf_attr is a huge
> union of independent sub-command attributes, so there is no need to clear
> and pass the entire union bpf_attr, which keeps growing over time, and for
> most commands this growth is completely irrelevant.
>
> A few cases where we were relying on compiler initialization of BPF UAPI
> structs (like bpf_prog_info, bpf_map_info, etc.) with `= {};` were switched
> to the memset(0) pattern for future-proofing.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---

Looks good to me. I went over all the functions in this change and
verified that the conversion is correct. There is only one question: I
noticed that for bpf_prog_load() and probe_memcg_account(), we only cover
up to fd_array and attach_btf_obj_fd, respectively. Should we cover up to
the last field, i.e. core_relo_rec_size?

Acked-by: Hao Luo <haoluo@google.com>


>  tools/lib/bpf/bpf.c           | 173 ++++++++++++++++++++--------------
>  tools/lib/bpf/libbpf.c        |  43 ++++++---
>  tools/lib/bpf/netlink.c       |   3 +-
>  tools/lib/bpf/skel_internal.h |  10 +-
>  4 files changed, 138 insertions(+), 91 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index 575867d69496..e3a0bd7efa2f 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -105,7 +105,7 @@ int sys_bpf_prog_load(union bpf_attr *attr, unsigned int size, int attempts)
>   */
>  int probe_memcg_account(void)
>  {
> -       const size_t prog_load_attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
> +       const size_t attr_sz = offsetofend(union bpf_attr, attach_btf_obj_fd);
>         struct bpf_insn insns[] = {
>                 BPF_EMIT_CALL(BPF_FUNC_ktime_get_coarse_ns),
>                 BPF_EXIT_INSN(),
> @@ -115,13 +115,13 @@ int probe_memcg_account(void)
>         int prog_fd;
>
>         /* attempt loading freplace trying to use custom BTF */
> -       memset(&attr, 0, prog_load_attr_sz);
> +       memset(&attr, 0, attr_sz);
>         attr.prog_type = BPF_PROG_TYPE_SOCKET_FILTER;
>         attr.insns = ptr_to_u64(insns);
>         attr.insn_cnt = insn_cnt;
>         attr.license = ptr_to_u64("GPL");
>
> -       prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, prog_load_attr_sz);
> +       prog_fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, attr_sz);
>         if (prog_fd >= 0) {
>                 close(prog_fd);
>                 return 1;
> @@ -232,6 +232,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
>                   const struct bpf_insn *insns, size_t insn_cnt,
>                   const struct bpf_prog_load_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, fd_array);
>         void *finfo = NULL, *linfo = NULL;
>         const char *func_info, *line_info;
>         __u32 log_size, log_level, attach_prog_fd, attach_btf_obj_fd;
> @@ -251,7 +252,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
>         if (attempts == 0)
>                 attempts = PROG_LOAD_ATTEMPTS;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>
>         attr.prog_type = prog_type;
>         attr.expected_attach_type = OPTS_GET(opts, expected_attach_type, 0);
> @@ -314,7 +315,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
>                 attr.log_level = log_level;
>         }
>
> -       fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
> +       fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
>         if (fd >= 0)
>                 return fd;
>
> @@ -354,7 +355,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
>                         break;
>                 }
>
> -               fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
> +               fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
>                 if (fd >= 0)
>                         goto done;
>         }
> @@ -368,7 +369,7 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
>                 attr.log_size = log_size;
>                 attr.log_level = 1;
>
> -               fd = sys_bpf_prog_load(&attr, sizeof(attr), attempts);
> +               fd = sys_bpf_prog_load(&attr, attr_sz, attempts);
>         }
>  done:
>         /* free() doesn't affect errno, so we don't need to restore it */
> @@ -380,127 +381,136 @@ int bpf_prog_load(enum bpf_prog_type prog_type,
>  int bpf_map_update_elem(int fd, const void *key, const void *value,
>                         __u64 flags)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>         attr.value = ptr_to_u64(value);
>         attr.flags = flags;
>
> -       ret = sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_UPDATE_ELEM, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_lookup_elem(int fd, const void *key, void *value)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>         attr.value = ptr_to_u64(value);
>
> -       ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_lookup_elem_flags(int fd, const void *key, void *value, __u64 flags)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>         attr.value = ptr_to_u64(value);
>         attr.flags = flags;
>
> -       ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_LOOKUP_ELEM, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_lookup_and_delete_elem(int fd, const void *key, void *value)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>         attr.value = ptr_to_u64(value);
>
> -       ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_lookup_and_delete_elem_flags(int fd, const void *key, void *value, __u64 flags)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>         attr.value = ptr_to_u64(value);
>         attr.flags = flags;
>
> -       ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_LOOKUP_AND_DELETE_ELEM, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_delete_elem(int fd, const void *key)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>
> -       ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_delete_elem_flags(int fd, const void *key, __u64 flags)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>         attr.flags = flags;
>
> -       ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_DELETE_ELEM, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_get_next_key(int fd, const void *key, void *next_key)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, next_key);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>         attr.key = ptr_to_u64(key);
>         attr.next_key = ptr_to_u64(next_key);
>
> -       ret = sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_GET_NEXT_KEY, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_map_freeze(int fd)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, map_fd);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_fd = fd;
>
> -       ret = sys_bpf(BPF_MAP_FREEZE, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_MAP_FREEZE, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
> @@ -509,13 +519,14 @@ static int bpf_map_batch_common(int cmd, int fd, void  *in_batch,
>                                 __u32 *count,
>                                 const struct bpf_map_batch_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, batch);
>         union bpf_attr attr;
>         int ret;
>
>         if (!OPTS_VALID(opts, bpf_map_batch_opts))
>                 return libbpf_err(-EINVAL);
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.batch.map_fd = fd;
>         attr.batch.in_batch = ptr_to_u64(in_batch);
>         attr.batch.out_batch = ptr_to_u64(out_batch);
> @@ -525,7 +536,7 @@ static int bpf_map_batch_common(int cmd, int fd, void  *in_batch,
>         attr.batch.elem_flags  = OPTS_GET(opts, elem_flags, 0);
>         attr.batch.flags = OPTS_GET(opts, flags, 0);
>
> -       ret = sys_bpf(cmd, &attr, sizeof(attr));
> +       ret = sys_bpf(cmd, &attr, attr_sz);
>         *count = attr.batch.count;
>
>         return libbpf_err_errno(ret);
> @@ -564,14 +575,15 @@ int bpf_map_update_batch(int fd, const void *keys, const void *values, __u32 *co
>
>  int bpf_obj_pin(int fd, const char *pathname)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, file_flags);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.pathname = ptr_to_u64((void *)pathname);
>         attr.bpf_fd = fd;
>
> -       ret = sys_bpf(BPF_OBJ_PIN, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_OBJ_PIN, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
> @@ -582,17 +594,18 @@ int bpf_obj_get(const char *pathname)
>
>  int bpf_obj_get_opts(const char *pathname, const struct bpf_obj_get_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, file_flags);
>         union bpf_attr attr;
>         int fd;
>
>         if (!OPTS_VALID(opts, bpf_obj_get_opts))
>                 return libbpf_err(-EINVAL);
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.pathname = ptr_to_u64((void *)pathname);
>         attr.file_flags = OPTS_GET(opts, file_flags, 0);
>
> -       fd = sys_bpf_fd(BPF_OBJ_GET, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_OBJ_GET, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
> @@ -610,20 +623,21 @@ int bpf_prog_attach_opts(int prog_fd, int target_fd,
>                           enum bpf_attach_type type,
>                           const struct bpf_prog_attach_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
>         union bpf_attr attr;
>         int ret;
>
>         if (!OPTS_VALID(opts, bpf_prog_attach_opts))
>                 return libbpf_err(-EINVAL);
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.target_fd     = target_fd;
>         attr.attach_bpf_fd = prog_fd;
>         attr.attach_type   = type;
>         attr.attach_flags  = OPTS_GET(opts, flags, 0);
>         attr.replace_bpf_fd = OPTS_GET(opts, replace_prog_fd, 0);
>
> -       ret = sys_bpf(BPF_PROG_ATTACH, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_PROG_ATTACH, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
> @@ -634,28 +648,30 @@ int bpf_prog_attach_xattr(int prog_fd, int target_fd,
>
>  int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.target_fd   = target_fd;
>         attr.attach_type = type;
>
> -       ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_PROG_DETACH, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_prog_detach2(int prog_fd, int target_fd, enum bpf_attach_type type)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.target_fd   = target_fd;
>         attr.attach_bpf_fd = prog_fd;
>         attr.attach_type = type;
>
> -       ret = sys_bpf(BPF_PROG_DETACH, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_PROG_DETACH, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
> @@ -663,6 +679,7 @@ int bpf_link_create(int prog_fd, int target_fd,
>                     enum bpf_attach_type attach_type,
>                     const struct bpf_link_create_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, link_create);
>         __u32 target_btf_id, iter_info_len;
>         union bpf_attr attr;
>         int fd, err;
> @@ -681,7 +698,7 @@ int bpf_link_create(int prog_fd, int target_fd,
>                         return libbpf_err(-EINVAL);
>         }
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.link_create.prog_fd = prog_fd;
>         attr.link_create.target_fd = target_fd;
>         attr.link_create.attach_type = attach_type;
> @@ -725,7 +742,7 @@ int bpf_link_create(int prog_fd, int target_fd,
>                 break;
>         }
>  proceed:
> -       fd = sys_bpf_fd(BPF_LINK_CREATE, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_LINK_CREATE, &attr, attr_sz);
>         if (fd >= 0)
>                 return fd;
>         /* we'll get EINVAL if LINK_CREATE doesn't support attaching fentry
> @@ -761,44 +778,47 @@ int bpf_link_create(int prog_fd, int target_fd,
>
>  int bpf_link_detach(int link_fd)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, link_detach);
>         union bpf_attr attr;
>         int ret;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.link_detach.link_fd = link_fd;
>
> -       ret = sys_bpf(BPF_LINK_DETACH, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_LINK_DETACH, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_link_update(int link_fd, int new_prog_fd,
>                     const struct bpf_link_update_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, link_update);
>         union bpf_attr attr;
>         int ret;
>
>         if (!OPTS_VALID(opts, bpf_link_update_opts))
>                 return libbpf_err(-EINVAL);
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.link_update.link_fd = link_fd;
>         attr.link_update.new_prog_fd = new_prog_fd;
>         attr.link_update.flags = OPTS_GET(opts, flags, 0);
>         attr.link_update.old_prog_fd = OPTS_GET(opts, old_prog_fd, 0);
>
> -       ret = sys_bpf(BPF_LINK_UPDATE, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_LINK_UPDATE, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
>
>  int bpf_iter_create(int link_fd)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, iter_create);
>         union bpf_attr attr;
>         int fd;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.iter_create.link_fd = link_fd;
>
> -       fd = sys_bpf_fd(BPF_ITER_CREATE, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_ITER_CREATE, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
> @@ -806,13 +826,14 @@ int bpf_prog_query_opts(int target_fd,
>                         enum bpf_attach_type type,
>                         struct bpf_prog_query_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, query);
>         union bpf_attr attr;
>         int ret;
>
>         if (!OPTS_VALID(opts, bpf_prog_query_opts))
>                 return libbpf_err(-EINVAL);
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>
>         attr.query.target_fd    = target_fd;
>         attr.query.attach_type  = type;
> @@ -821,7 +842,7 @@ int bpf_prog_query_opts(int target_fd,
>         attr.query.prog_ids     = ptr_to_u64(OPTS_GET(opts, prog_ids, NULL));
>         attr.query.prog_attach_flags = ptr_to_u64(OPTS_GET(opts, prog_attach_flags, NULL));
>
> -       ret = sys_bpf(BPF_PROG_QUERY, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_PROG_QUERY, &attr, attr_sz);
>
>         OPTS_SET(opts, attach_flags, attr.query.attach_flags);
>         OPTS_SET(opts, prog_cnt, attr.query.prog_cnt);
> @@ -850,13 +871,14 @@ int bpf_prog_query(int target_fd, enum bpf_attach_type type, __u32 query_flags,
>
>  int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, test);
>         union bpf_attr attr;
>         int ret;
>
>         if (!OPTS_VALID(opts, bpf_test_run_opts))
>                 return libbpf_err(-EINVAL);
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.test.prog_fd = prog_fd;
>         attr.test.batch_size = OPTS_GET(opts, batch_size, 0);
>         attr.test.cpu = OPTS_GET(opts, cpu, 0);
> @@ -872,7 +894,7 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
>         attr.test.data_in = ptr_to_u64(OPTS_GET(opts, data_in, NULL));
>         attr.test.data_out = ptr_to_u64(OPTS_GET(opts, data_out, NULL));
>
> -       ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_PROG_TEST_RUN, &attr, attr_sz);
>
>         OPTS_SET(opts, data_size_out, attr.test.data_size_out);
>         OPTS_SET(opts, ctx_size_out, attr.test.ctx_size_out);
> @@ -884,13 +906,14 @@ int bpf_prog_test_run_opts(int prog_fd, struct bpf_test_run_opts *opts)
>
>  static int bpf_obj_get_next_id(__u32 start_id, __u32 *next_id, int cmd)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
>         union bpf_attr attr;
>         int err;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.start_id = start_id;
>
> -       err = sys_bpf(cmd, &attr, sizeof(attr));
> +       err = sys_bpf(cmd, &attr, attr_sz);
>         if (!err)
>                 *next_id = attr.next_id;
>
> @@ -919,80 +942,84 @@ int bpf_link_get_next_id(__u32 start_id, __u32 *next_id)
>
>  int bpf_prog_get_fd_by_id(__u32 id)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
>         union bpf_attr attr;
>         int fd;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.prog_id = id;
>
> -       fd = sys_bpf_fd(BPF_PROG_GET_FD_BY_ID, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_PROG_GET_FD_BY_ID, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
>  int bpf_map_get_fd_by_id(__u32 id)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
>         union bpf_attr attr;
>         int fd;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.map_id = id;
>
> -       fd = sys_bpf_fd(BPF_MAP_GET_FD_BY_ID, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_MAP_GET_FD_BY_ID, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
>  int bpf_btf_get_fd_by_id(__u32 id)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
>         union bpf_attr attr;
>         int fd;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.btf_id = id;
>
> -       fd = sys_bpf_fd(BPF_BTF_GET_FD_BY_ID, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_BTF_GET_FD_BY_ID, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
>  int bpf_link_get_fd_by_id(__u32 id)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, open_flags);
>         union bpf_attr attr;
>         int fd;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.link_id = id;
>
> -       fd = sys_bpf_fd(BPF_LINK_GET_FD_BY_ID, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_LINK_GET_FD_BY_ID, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
>  int bpf_obj_get_info_by_fd(int bpf_fd, void *info, __u32 *info_len)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, info);
>         union bpf_attr attr;
>         int err;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.info.bpf_fd = bpf_fd;
>         attr.info.info_len = *info_len;
>         attr.info.info = ptr_to_u64(info);
>
> -       err = sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, sizeof(attr));
> -
> +       err = sys_bpf(BPF_OBJ_GET_INFO_BY_FD, &attr, attr_sz);
>         if (!err)
>                 *info_len = attr.info.info_len;
> -
>         return libbpf_err_errno(err);
>  }
>
>  int bpf_raw_tracepoint_open(const char *name, int prog_fd)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, raw_tracepoint);
>         union bpf_attr attr;
>         int fd;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.raw_tracepoint.name = ptr_to_u64(name);
>         attr.raw_tracepoint.prog_fd = prog_fd;
>
> -       fd = sys_bpf_fd(BPF_RAW_TRACEPOINT_OPEN, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_RAW_TRACEPOINT_OPEN, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
> @@ -1048,16 +1075,18 @@ int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, __u32 *buf_len,
>                       __u32 *prog_id, __u32 *fd_type, __u64 *probe_offset,
>                       __u64 *probe_addr)
>  {
> -       union bpf_attr attr = {};
> +       const size_t attr_sz = offsetofend(union bpf_attr, task_fd_query);
> +       union bpf_attr attr;
>         int err;
>
> +       memset(&attr, 0, attr_sz);
>         attr.task_fd_query.pid = pid;
>         attr.task_fd_query.fd = fd;
>         attr.task_fd_query.flags = flags;
>         attr.task_fd_query.buf = ptr_to_u64(buf);
>         attr.task_fd_query.buf_len = *buf_len;
>
> -       err = sys_bpf(BPF_TASK_FD_QUERY, &attr, sizeof(attr));
> +       err = sys_bpf(BPF_TASK_FD_QUERY, &attr, attr_sz);
>
>         *buf_len = attr.task_fd_query.buf_len;
>         *prog_id = attr.task_fd_query.prog_id;
> @@ -1070,30 +1099,32 @@ int bpf_task_fd_query(int pid, int fd, __u32 flags, char *buf, __u32 *buf_len,
>
>  int bpf_enable_stats(enum bpf_stats_type type)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, enable_stats);
>         union bpf_attr attr;
>         int fd;
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.enable_stats.type = type;
>
> -       fd = sys_bpf_fd(BPF_ENABLE_STATS, &attr, sizeof(attr));
> +       fd = sys_bpf_fd(BPF_ENABLE_STATS, &attr, attr_sz);
>         return libbpf_err_errno(fd);
>  }
>
>  int bpf_prog_bind_map(int prog_fd, int map_fd,
>                       const struct bpf_prog_bind_opts *opts)
>  {
> +       const size_t attr_sz = offsetofend(union bpf_attr, prog_bind_map);
>         union bpf_attr attr;
>         int ret;
>
>         if (!OPTS_VALID(opts, bpf_prog_bind_opts))
>                 return libbpf_err(-EINVAL);
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, attr_sz);
>         attr.prog_bind_map.prog_fd = prog_fd;
>         attr.prog_bind_map.map_fd = map_fd;
>         attr.prog_bind_map.flags = OPTS_GET(opts, flags, 0);
>
> -       ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, sizeof(attr));
> +       ret = sys_bpf(BPF_PROG_BIND_MAP, &attr, attr_sz);
>         return libbpf_err_errno(ret);
>  }
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 5f0281e61437..89f192a3ef77 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -4284,11 +4284,12 @@ int bpf_map__set_autocreate(struct bpf_map *map, bool autocreate)
>
>  int bpf_map__reuse_fd(struct bpf_map *map, int fd)
>  {
> -       struct bpf_map_info info = {};
> +       struct bpf_map_info info;
>         __u32 len = sizeof(info), name_len;
>         int new_fd, err;
>         char *new_name;
>
> +       memset(&info, 0, len);
>         err = bpf_obj_get_info_by_fd(fd, &info, &len);
>         if (err && errno == EINVAL)
>                 err = bpf_get_map_info_from_fdinfo(fd, &info);
> @@ -4830,13 +4831,12 @@ bool kernel_supports(const struct bpf_object *obj, enum kern_feature_id feat_id)
>
>  static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
>  {
> -       struct bpf_map_info map_info = {};
> +       struct bpf_map_info map_info;
>         char msg[STRERR_BUFSIZE];
> -       __u32 map_info_len;
> +       __u32 map_info_len = sizeof(map_info);
>         int err;
>
> -       map_info_len = sizeof(map_info);
> -
> +       memset(&map_info, 0, map_info_len);
>         err = bpf_obj_get_info_by_fd(map_fd, &map_info, &map_info_len);
>         if (err && errno == EINVAL)
>                 err = bpf_get_map_info_from_fdinfo(map_fd, &map_info);
> @@ -8994,11 +8994,12 @@ int libbpf_find_vmlinux_btf_id(const char *name,
>
>  static int libbpf_find_prog_btf_id(const char *name, __u32 attach_prog_fd)
>  {
> -       struct bpf_prog_info info = {};
> +       struct bpf_prog_info info;
>         __u32 info_len = sizeof(info);
>         struct btf *btf;
>         int err;
>
> +       memset(&info, 0, info_len);
>         err = bpf_obj_get_info_by_fd(attach_prog_fd, &info, &info_len);
>         if (err) {
>                 pr_warn("failed bpf_obj_get_info_by_fd for FD %d: %d\n",
> @@ -9826,13 +9827,16 @@ static int determine_uprobe_retprobe_bit(void)
>  static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
>                                  uint64_t offset, int pid, size_t ref_ctr_off)
>  {
> -       struct perf_event_attr attr = {};
> +       const size_t attr_sz = sizeof(struct perf_event_attr);
> +       struct perf_event_attr attr;
>         char errmsg[STRERR_BUFSIZE];
>         int type, pfd;
>
>         if (ref_ctr_off >= (1ULL << PERF_UPROBE_REF_CTR_OFFSET_BITS))
>                 return -EINVAL;
>
> +       memset(&attr, 0, attr_sz);
> +
>         type = uprobe ? determine_uprobe_perf_type()
>                       : determine_kprobe_perf_type();
>         if (type < 0) {
> @@ -9853,7 +9857,7 @@ static int perf_event_open_probe(bool uprobe, bool retprobe, const char *name,
>                 }
>                 attr.config |= 1 << bit;
>         }
> -       attr.size = sizeof(attr);
> +       attr.size = attr_sz;
>         attr.type = type;
>         attr.config |= (__u64)ref_ctr_off << PERF_UPROBE_REF_CTR_OFFSET_SHIFT;
>         attr.config1 = ptr_to_u64(name); /* kprobe_func or uprobe_path */
> @@ -9952,7 +9956,8 @@ static int determine_kprobe_perf_type_legacy(const char *probe_name, bool retpro
>  static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
>                                          const char *kfunc_name, size_t offset, int pid)
>  {
> -       struct perf_event_attr attr = {};
> +       const size_t attr_sz = sizeof(struct perf_event_attr);
> +       struct perf_event_attr attr;
>         char errmsg[STRERR_BUFSIZE];
>         int type, pfd, err;
>
> @@ -9971,7 +9976,9 @@ static int perf_event_kprobe_open_legacy(const char *probe_name, bool retprobe,
>                         libbpf_strerror_r(err, errmsg, sizeof(errmsg)));
>                 goto err_clean_legacy;
>         }
> -       attr.size = sizeof(attr);
> +
> +       memset(&attr, 0, attr_sz);
> +       attr.size = attr_sz;
>         attr.config = type;
>         attr.type = PERF_TYPE_TRACEPOINT;
>
> @@ -10428,6 +10435,7 @@ static int determine_uprobe_perf_type_legacy(const char *probe_name, bool retpro
>  static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
>                                          const char *binary_path, size_t offset, int pid)
>  {
> +       const size_t attr_sz = sizeof(struct perf_event_attr);
>         struct perf_event_attr attr;
>         int type, pfd, err;
>
> @@ -10445,8 +10453,8 @@ static int perf_event_uprobe_open_legacy(const char *probe_name, bool retprobe,
>                 goto err_clean_legacy;
>         }
>
> -       memset(&attr, 0, sizeof(attr));
> -       attr.size = sizeof(attr);
> +       memset(&attr, 0, attr_sz);
> +       attr.size = attr_sz;
>         attr.config = type;
>         attr.type = PERF_TYPE_TRACEPOINT;
>
> @@ -10985,7 +10993,8 @@ static int determine_tracepoint_id(const char *tp_category,
>  static int perf_event_open_tracepoint(const char *tp_category,
>                                       const char *tp_name)
>  {
> -       struct perf_event_attr attr = {};
> +       const size_t attr_sz = sizeof(struct perf_event_attr);
> +       struct perf_event_attr attr;
>         char errmsg[STRERR_BUFSIZE];
>         int tp_id, pfd, err;
>
> @@ -10997,8 +11006,9 @@ static int perf_event_open_tracepoint(const char *tp_category,
>                 return tp_id;
>         }
>
> +       memset(&attr, 0, attr_sz);
>         attr.type = PERF_TYPE_TRACEPOINT;
> -       attr.size = sizeof(attr);
> +       attr.size = attr_sz;
>         attr.config = tp_id;
>
>         pfd = syscall(__NR_perf_event_open, &attr, -1 /* pid */, 0 /* cpu */,
> @@ -11618,12 +11628,15 @@ struct perf_buffer *perf_buffer__new(int map_fd, size_t page_cnt,
>                                      void *ctx,
>                                      const struct perf_buffer_opts *opts)
>  {
> +       const size_t attr_sz = sizeof(struct perf_event_attr);
>         struct perf_buffer_params p = {};
> -       struct perf_event_attr attr = {};
> +       struct perf_event_attr attr;
>
>         if (!OPTS_VALID(opts, perf_buffer_opts))
>                 return libbpf_err_ptr(-EINVAL);
>
> +       memset(&attr, 0, attr_sz);
> +       attr.size = attr_sz;
>         attr.config = PERF_COUNT_SW_BPF_OUTPUT;
>         attr.type = PERF_TYPE_SOFTWARE;
>         attr.sample_type = PERF_SAMPLE_RAW;
> diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
> index 6c013168032d..35104580870c 100644
> --- a/tools/lib/bpf/netlink.c
> +++ b/tools/lib/bpf/netlink.c
> @@ -587,11 +587,12 @@ static int get_tc_info(struct nlmsghdr *nh, libbpf_dump_nlmsg_t fn,
>
>  static int tc_add_fd_and_name(struct libbpf_nla_req *req, int fd)
>  {
> -       struct bpf_prog_info info = {};
> +       struct bpf_prog_info info;
>         __u32 info_len = sizeof(info);
>         char name[256];
>         int len, ret;
>
> +       memset(&info, 0, info_len);
>         ret = bpf_obj_get_info_by_fd(fd, &info, &info_len);
>         if (ret < 0)
>                 return ret;
> diff --git a/tools/lib/bpf/skel_internal.h b/tools/lib/bpf/skel_internal.h
> index bd6f4505e7b1..365d769e0357 100644
> --- a/tools/lib/bpf/skel_internal.h
> +++ b/tools/lib/bpf/skel_internal.h
> @@ -285,6 +285,8 @@ static inline int skel_link_create(int prog_fd, int target_fd,
>
>  static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
>  {
> +       const size_t prog_load_attr_sz = offsetofend(union bpf_attr, fd_array);
> +       const size_t test_run_attr_sz = offsetofend(union bpf_attr, test);
>         int map_fd = -1, prog_fd = -1, key = 0, err;
>         union bpf_attr attr;
>
> @@ -302,7 +304,7 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
>                 goto out;
>         }
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, prog_load_attr_sz);
>         attr.prog_type = BPF_PROG_TYPE_SYSCALL;
>         attr.insns = (long) opts->insns;
>         attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
> @@ -313,18 +315,18 @@ static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
>         attr.log_size = opts->ctx->log_size;
>         attr.log_buf = opts->ctx->log_buf;
>         attr.prog_flags = BPF_F_SLEEPABLE;
> -       err = prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
> +       err = prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, prog_load_attr_sz);
>         if (prog_fd < 0) {
>                 opts->errstr = "failed to load loader prog";
>                 set_err;
>                 goto out;
>         }
>
> -       memset(&attr, 0, sizeof(attr));
> +       memset(&attr, 0, test_run_attr_sz);
>         attr.test.prog_fd = prog_fd;
>         attr.test.ctx_in = (long) opts->ctx;
>         attr.test.ctx_size_in = opts->ctx->sz;
> -       err = skel_sys_bpf(BPF_PROG_RUN, &attr, sizeof(attr));
> +       err = skel_sys_bpf(BPF_PROG_RUN, &attr, test_run_attr_sz);
>         if (err < 0 || (int)attr.test.retval < 0) {
>                 opts->errstr = "failed to execute loader prog";
>                 if (err < 0) {
> --
> 2.30.2
>


* Re: [PATCH bpf-next 3/4] libbpf: clean up deprecated and legacy aliases
  2022-08-16  0:19 ` [PATCH bpf-next 3/4] libbpf: clean up deprecated and legacy aliases Andrii Nakryiko
@ 2022-08-16 21:33   ` Hao Luo
  2022-08-16 21:55     ` Andrii Nakryiko
  0 siblings, 1 reply; 13+ messages in thread
From: Hao Luo @ 2022-08-16 21:33 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

On Mon, Aug 15, 2022 at 9:23 PM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Remove two missed deprecated APIs that were aliased to new APIs:
> bpf_object__unload and bpf_prog_attach_xattr.
>

Three functions? Missing btf__load()?

> Also move legacy API libbpf_find_kernel_btf (aliased to
> btf__load_vmlinux_btf) into libbpf_legacy.h.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---

The change itself looks good to me. I verified that these functions are
no longer used in the source files.

Acked-by: Hao Luo <haoluo@google.com>


>  tools/lib/bpf/bpf.c           | 5 -----
>  tools/lib/bpf/btf.c           | 2 --
>  tools/lib/bpf/btf.h           | 1 -
>  tools/lib/bpf/libbpf.c        | 2 --
>  tools/lib/bpf/libbpf_legacy.h | 2 ++
>  5 files changed, 2 insertions(+), 10 deletions(-)
>
> diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
> index e3a0bd7efa2f..1d49a0352836 100644
> --- a/tools/lib/bpf/bpf.c
> +++ b/tools/lib/bpf/bpf.c
> @@ -641,11 +641,6 @@ int bpf_prog_attach_opts(int prog_fd, int target_fd,
>         return libbpf_err_errno(ret);
>  }
>
> -__attribute__((alias("bpf_prog_attach_opts")))
> -int bpf_prog_attach_xattr(int prog_fd, int target_fd,
> -                         enum bpf_attach_type type,
> -                         const struct bpf_prog_attach_opts *opts);
> -
>  int bpf_prog_detach(int target_fd, enum bpf_attach_type type)
>  {
>         const size_t attr_sz = offsetofend(union bpf_attr, replace_bpf_fd);
> diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> index 2d14f1a52d7a..361131518d63 100644
> --- a/tools/lib/bpf/btf.c
> +++ b/tools/lib/bpf/btf.c
> @@ -1225,8 +1225,6 @@ int btf__load_into_kernel(struct btf *btf)
>         return btf_load_into_kernel(btf, NULL, 0, 0);
>  }
>
> -int btf__load(struct btf *) __attribute__((alias("btf__load_into_kernel")));
> -
>  int btf__fd(const struct btf *btf)
>  {
>         return btf->fd;
> diff --git a/tools/lib/bpf/btf.h b/tools/lib/bpf/btf.h
> index 583760df83b4..ae543144ee30 100644
> --- a/tools/lib/bpf/btf.h
> +++ b/tools/lib/bpf/btf.h
> @@ -116,7 +116,6 @@ LIBBPF_API struct btf *btf__parse_raw_split(const char *path, struct btf *base_b
>
>  LIBBPF_API struct btf *btf__load_vmlinux_btf(void);
>  LIBBPF_API struct btf *btf__load_module_btf(const char *module_name, struct btf *vmlinux_btf);
> -LIBBPF_API struct btf *libbpf_find_kernel_btf(void);
>
>  LIBBPF_API struct btf *btf__load_from_kernel_by_id(__u32 id);
>  LIBBPF_API struct btf *btf__load_from_kernel_by_id_split(__u32 id, struct btf *base_btf);
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index 89f192a3ef77..9aaf6f7e89df 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -7260,8 +7260,6 @@ static int bpf_object_unload(struct bpf_object *obj)
>         return 0;
>  }
>
> -int bpf_object__unload(struct bpf_object *obj) __attribute__((alias("bpf_object_unload")));
> -
>  static int bpf_object__sanitize_maps(struct bpf_object *obj)
>  {
>         struct bpf_map *m;
> diff --git a/tools/lib/bpf/libbpf_legacy.h b/tools/lib/bpf/libbpf_legacy.h
> index 5b7e0155db6a..1e1be467bede 100644
> --- a/tools/lib/bpf/libbpf_legacy.h
> +++ b/tools/lib/bpf/libbpf_legacy.h
> @@ -125,6 +125,8 @@ struct bpf_map;
>  struct btf;
>  struct btf_ext;
>
> +LIBBPF_API struct btf *libbpf_find_kernel_btf(void);
> +
>  LIBBPF_API enum bpf_prog_type bpf_program__get_type(const struct bpf_program *prog);
>  LIBBPF_API enum bpf_attach_type bpf_program__get_expected_attach_type(const struct bpf_program *prog);
>  LIBBPF_API const char *bpf_map__get_pin_path(const struct bpf_map *map);
> --
> 2.30.2
>


* Re: [PATCH bpf-next 4/4] selftests/bpf: few fixes for selftests/bpf built in release mode
  2022-08-16  0:19 ` [PATCH bpf-next 4/4] selftests/bpf: few fixes for selftests/bpf built in release mode Andrii Nakryiko
@ 2022-08-16 21:34   ` Hao Luo
  2022-08-16 21:58     ` Andrii Nakryiko
  0 siblings, 1 reply; 13+ messages in thread
From: Hao Luo @ 2022-08-16 21:34 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

On Mon, Aug 15, 2022 at 8:52 PM Andrii Nakryiko <andrii@kernel.org> wrote:
>
> Fix few issues found when building and running test_progs in release
> mode.
>
> First, potentially uninitialized idx variable in xskxceiver,
> force-initialize to zero to satisfy compiler.
>
> Few instances of defining uprobe trigger functions break in release mode
> unless marked as noinline, due to being static. Add noinline to make
> sure everything works.
>
> Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> ---

I can't speak to the noinline change, but I trust it works. The fix for
the uninitialized use looks good to me.

Acked-by: Hao Luo <haoluo@google.com>


>  tools/testing/selftests/bpf/prog_tests/attach_probe.c | 6 +++---
>  tools/testing/selftests/bpf/prog_tests/bpf_cookie.c   | 2 +-
>  tools/testing/selftests/bpf/prog_tests/task_pt_regs.c | 2 +-
>  tools/testing/selftests/bpf/xskxceiver.c              | 2 +-
>  4 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/attach_probe.c b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> index 0b899d2d8ea7..9566d9d2f6ee 100644
> --- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> +++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
> @@ -6,19 +6,19 @@
>  volatile unsigned short uprobe_ref_ctr __attribute__((unused)) __attribute((section(".probes")));
>
>  /* uprobe attach point */
> -static void trigger_func(void)
> +static noinline void trigger_func(void)
>  {
>         asm volatile ("");
>  }
>
>  /* attach point for byname uprobe */
> -static void trigger_func2(void)
> +static noinline void trigger_func2(void)
>  {
>         asm volatile ("");
>  }
>
>  /* attach point for byname sleepable uprobe */
> -static void trigger_func3(void)
> +static noinline void trigger_func3(void)
>  {
>         asm volatile ("");
>  }
> diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
> index 2974b44f80fa..2be2d61954bc 100644
> --- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
> +++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
> @@ -13,7 +13,7 @@
>  #include "kprobe_multi.skel.h"
>
>  /* uprobe attach point */
> -static void trigger_func(void)
> +static noinline void trigger_func(void)
>  {
>         asm volatile ("");
>  }
> diff --git a/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c b/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
> index 61935e7e056a..f000734a3d1f 100644
> --- a/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
> +++ b/tools/testing/selftests/bpf/prog_tests/task_pt_regs.c
> @@ -4,7 +4,7 @@
>  #include "test_task_pt_regs.skel.h"
>
>  /* uprobe attach point */
> -static void trigger_func(void)
> +static noinline void trigger_func(void)
>  {
>         asm volatile ("");
>  }
> diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
> index 20b44ab32a06..14b4737b223c 100644
> --- a/tools/testing/selftests/bpf/xskxceiver.c
> +++ b/tools/testing/selftests/bpf/xskxceiver.c
> @@ -922,7 +922,7 @@ static int __send_pkts(struct ifobject *ifobject, u32 *pkt_nb, struct pollfd *fd
>  {
>         struct xsk_socket_info *xsk = ifobject->xsk;
>         bool use_poll = ifobject->use_poll;
> -       u32 i, idx, ret, valid_pkts = 0;
> +       u32 i, idx = 0, ret, valid_pkts = 0;
>
>         while (xsk_ring_prod__reserve(&xsk->tx, BATCH_SIZE, &idx) < BATCH_SIZE) {
>                 if (use_poll) {
> --
> 2.30.2
>


* Re: [PATCH bpf-next 2/4] libbpf: streamline bpf_attr and perf_event_attr initialization
  2022-08-16 21:33   ` Hao Luo
@ 2022-08-16 21:54     ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16 21:54 UTC (permalink / raw)
  To: Hao Luo; +Cc: Andrii Nakryiko, bpf, ast, daniel, kernel-team

On Tue, Aug 16, 2022 at 2:33 PM Hao Luo <haoluo@google.com> wrote:
>
> On Mon, Aug 15, 2022 at 8:53 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > Make sure that entire libbpf code base is initializing bpf_attr and
> > perf_event_attr with memset(0). Also for bpf_attr make sure we
> > clear and pass to kernel only relevant parts of bpf_attr. bpf_attr is
> > a huge union of independent sub-command attributes, so there is no need
> > to clear and pass entire union bpf_attr, which over time grows quite
> > a lot and for most commands this growth is completely irrelevant.
> >
> > Few cases where we were relying on compiler initialization of BPF UAPI
> > structs (like bpf_prog_info, bpf_map_info, etc) with `= {};` were
> > switched to memset(0) pattern for future-proofing.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
>
> Looks good to me. I went over all the functions in this change and
> verified the conversion is correct.

Thanks!

> There is only one question: I noticed that for bpf_prog_load() and
> probe_memcg_account(), we only cover up to fd_array and
> attach_btf_obj_fd. Should we cover up to the last field, i.e.,
> core_relo_rec_size?

libbpf never sets those fields; I think they are only used by the
loader BPF program generated for light skeletons, so I felt it was
unnecessary to include them yet.
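
To illustrate the pattern for anyone following along, here is a
hypothetical sketch (example_prog_load() is an invented name; it relies
on the internal helpers that tools/lib/bpf/bpf.c already has, i.e.
sys_bpf_fd(), ptr_to_u64(), libbpf_err_errno() and offsetofend(), and it
is not code from this patch):

/* hypothetical sketch of a new-style wrapper, not part of the patch */
static int example_prog_load(const struct bpf_insn *insns, __u32 insn_cnt)
{
	/* per the discussion above, BPF_PROG_LOAD is covered up to fd_array,
	 * because libbpf itself never sets the later fields
	 */
	const size_t attr_sz = offsetofend(union bpf_attr, fd_array);
	union bpf_attr attr;
	int fd;

	/* zero only the part of the union that will actually be passed */
	memset(&attr, 0, attr_sz);
	attr.prog_type = BPF_PROG_TYPE_SYSCALL;
	attr.insns = ptr_to_u64(insns);
	attr.insn_cnt = insn_cnt;
	attr.license = ptr_to_u64("GPL");

	/* the kernel is told that only the first attr_sz bytes are meaningful */
	fd = sys_bpf_fd(BPF_PROG_LOAD, &attr, attr_sz);
	return libbpf_err_errno(fd);
}
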

>
> Acked-by: Hao Luo <haoluo@google.com>
>
>
> >  tools/lib/bpf/bpf.c           | 173 ++++++++++++++++++++--------------
> >  tools/lib/bpf/libbpf.c        |  43 ++++++---
> >  tools/lib/bpf/netlink.c       |   3 +-
> >  tools/lib/bpf/skel_internal.h |  10 +-
> >  4 files changed, 138 insertions(+), 91 deletions(-)
> >

[...]


* Re: [PATCH bpf-next 3/4] libbpf: clean up deprecated and legacy aliases
  2022-08-16 21:33   ` Hao Luo
@ 2022-08-16 21:55     ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16 21:55 UTC (permalink / raw)
  To: Hao Luo; +Cc: Andrii Nakryiko, bpf, ast, daniel, kernel-team

On Tue, Aug 16, 2022 at 2:34 PM Hao Luo <haoluo@google.com> wrote:
>
> On Mon, Aug 15, 2022 at 9:23 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > Remove two missed deprecated APIs that were aliased to new APIs:
> > bpf_object__unload and bpf_prog_attach_xattr.
> >
>
> Three functions? Missing btf__load()?

Apparently I suck at counting; thanks for spotting. It doesn't seem
worth sending a v2 just for this; maybe Alexei or Daniel can fix it up
while applying.

>
> > Also move legacy API libbpf_find_kernel_btf (aliased to
> > btf__load_vmlinux_btf) into libbpf_legacy.h.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
>
> The change itself looks good to me. Verified these functions are no
> longer used in the source file.
>
> Acked-by: Hao Luo <haoluo@google.com>
>
>
> >  tools/lib/bpf/bpf.c           | 5 -----
> >  tools/lib/bpf/btf.c           | 2 --
> >  tools/lib/bpf/btf.h           | 1 -
> >  tools/lib/bpf/libbpf.c        | 2 --
> >  tools/lib/bpf/libbpf_legacy.h | 2 ++
> >  5 files changed, 2 insertions(+), 10 deletions(-)
> >

[...]


* Re: [PATCH bpf-next 4/4] selftests/bpf: few fixes for selftests/bpf built in release mode
  2022-08-16 21:34   ` Hao Luo
@ 2022-08-16 21:58     ` Andrii Nakryiko
  0 siblings, 0 replies; 13+ messages in thread
From: Andrii Nakryiko @ 2022-08-16 21:58 UTC (permalink / raw)
  To: Hao Luo; +Cc: Andrii Nakryiko, bpf, ast, daniel, kernel-team

On Tue, Aug 16, 2022 at 2:34 PM Hao Luo <haoluo@google.com> wrote:
>
> On Mon, Aug 15, 2022 at 8:52 PM Andrii Nakryiko <andrii@kernel.org> wrote:
> >
> > Fix few issues found when building and running test_progs in release
> > mode.
> >
> > First, potentially uninitialized idx variable in xskxceiver,
> > force-initialize to zero to satisfy compiler.
> >
> > Few instances of defining uprobe trigger functions break in release mode
> > unless marked as noinline, due to being static. Add noinline to make
> > sure everything works.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
> > ---
>
> I can't speak to the noinline change, but I trust it works. The fix for
> the uninitialized use looks good to me.

Yeah, I tested noinline with both the default debug (-O0) build and the
release (-O2) build.

noinline itself was interesting: apparently GCC doesn't support
__attribute__((noinline)) for static functions, but noinline is fine. I
was scratching my head for a while, didn't find any good explanation,
and just went with `noinline`.
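
For anyone wondering what the noinline is protecting against, here is a
stripped-down sketch of the attach-point pattern (trigger_func() mirrors
the selftests; `noinline` is assumed to be whatever macro the selftests'
shared headers already provide, so this is illustrative rather than
standalone code):

/* uprobe attach point: without noinline, -O2 may inline this static
 * function into its only caller and drop the standalone symbol, leaving
 * no address for the uprobe to attach to
 */
static noinline void trigger_func(void)
{
	asm volatile ("");	/* empty asm keeps the body from being optimized away */
}

The test then attaches a uprobe at trigger_func()'s offset in its own
binary and simply calls trigger_func() to fire the probe.
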

>
> Acked-by: Hao Luo <haoluo@google.com>
>
>
> >  tools/testing/selftests/bpf/prog_tests/attach_probe.c | 6 +++---
> >  tools/testing/selftests/bpf/prog_tests/bpf_cookie.c   | 2 +-
> >  tools/testing/selftests/bpf/prog_tests/task_pt_regs.c | 2 +-
> >  tools/testing/selftests/bpf/xskxceiver.c              | 2 +-
> >  4 files changed, 6 insertions(+), 6 deletions(-)
> >

[...]


* Re: [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups
  2022-08-16  0:19 [PATCH bpf-next 0/4] Preparatory libbpf fixes and clean ups Andrii Nakryiko
                   ` (3 preceding siblings ...)
  2022-08-16  0:19 ` [PATCH bpf-next 4/4] selftests/bpf: few fixes for selftests/bpf built in release mode Andrii Nakryiko
@ 2022-08-17 21:00 ` patchwork-bot+netdevbpf
  4 siblings, 0 replies; 13+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-08-17 21:00 UTC (permalink / raw)
  To: Andrii Nakryiko; +Cc: bpf, ast, daniel, kernel-team

Hello:

This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Mon, 15 Aug 2022 17:19:25 -0700 you wrote:
> Few fixes and clean up in preparation for finalizing libbpf 1.0.
> 
> Main change is switching libbpf to initializing only relevant portions of
> union bpf_attr for any given BPF command. This has been on a wishlist for
> a while, so finally this is done. While cleaning this up, I've also cleaned up
> few other placed were we didn't use explicit memset() with kernel UAPI structs
> (perf_event_attr, bpf_map_info, bpf_prog_info, etc).
> 
> [...]

Here is the summary with links:
  - [bpf-next,1/4] libbpf: fix potential NULL dereference when parsing ELF
    https://git.kernel.org/bpf/bpf-next/c/d4e6d684f3be
  - [bpf-next,2/4] libbpf: streamline bpf_attr and perf_event_attr initialization
    https://git.kernel.org/bpf/bpf-next/c/813847a31447
  - [bpf-next,3/4] libbpf: clean up deprecated and legacy aliases
    https://git.kernel.org/bpf/bpf-next/c/abf84b64e36b
  - [bpf-next,4/4] selftests/bpf: few fixes for selftests/bpf built in release mode
    https://git.kernel.org/bpf/bpf-next/c/df78da27260c

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html




