* [PATCH bpf-next 1/7] libbpf: disable -Wswitch-enum compiler warning
From: Andrii Nakryiko @ 2020-08-18 21:33 UTC
To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko
That compilation warning is more annoying than helpful.
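For illustration, a minimal standalone sketch (not from the patch) of the
difference: with -Wswitch-enum, GCC warns about every enumerator that lacks an
explicit case label, even when a default case is present, while plain -Wswitch
is satisfied by the default:

enum fruit { APPLE, BANANA, CHERRY };

static const char *fruit_name(enum fruit f)
{
	switch (f) {
	case APPLE:
		return "apple";
	default:	/* BANANA and CHERRY still warn under -Wswitch-enum */
		return "other";
	}
}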
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/lib/bpf/Makefile | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
index bf8ed134cb8a..95c946e94ca5 100644
--- a/tools/lib/bpf/Makefile
+++ b/tools/lib/bpf/Makefile
@@ -107,7 +107,7 @@ ifeq ($(feature-reallocarray), 0)
endif
# Append required CFLAGS
-override CFLAGS += $(EXTRA_WARNINGS)
+override CFLAGS += $(EXTRA_WARNINGS) -Wno-switch-enum
override CFLAGS += -Werror -Wall
override CFLAGS += -fPIC
override CFLAGS += $(INCLUDES)
--
2.24.1
* Re: [PATCH bpf-next 1/7] libbpf: disable -Wswitch-enum compiler warning
From: Yonghong Song @ 2020-08-19 1:23 UTC
To: Andrii Nakryiko, bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team
On 8/18/20 2:33 PM, Andrii Nakryiko wrote:
> That compilation warning is more annoying than helpful.
Curious which compiler and which version caused this issue?
I did not hit this with gcc 8.2 or the latest clang in my environment.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
> tools/lib/bpf/Makefile | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
> index bf8ed134cb8a..95c946e94ca5 100644
> --- a/tools/lib/bpf/Makefile
> +++ b/tools/lib/bpf/Makefile
> @@ -107,7 +107,7 @@ ifeq ($(feature-reallocarray), 0)
> endif
>
> # Append required CFLAGS
> -override CFLAGS += $(EXTRA_WARNINGS)
> +override CFLAGS += $(EXTRA_WARNINGS) -Wno-switch-enum
> override CFLAGS += -Werror -Wall
> override CFLAGS += -fPIC
> override CFLAGS += $(INCLUDES)
>
* Re: [PATCH bpf-next 1/7] libbpf: disable -Wswitch-enum compiler warning
From: Andrii Nakryiko @ 2020-08-19 1:39 UTC
To: Yonghong Song
Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
Daniel Borkmann, Kernel Team
On Tue, Aug 18, 2020 at 6:23 PM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 8/18/20 2:33 PM, Andrii Nakryiko wrote:
> > That compilation warning is more annoying than helpful.
>
> Curious which compiler and which version caused this issue?
> I did not hit this with gcc 8.2 or the latest clang in my environment.
>
Strange, I just tried on bpf-next, removed -Wno-switch-enum and got
tons of errors:
libbpf.c: In function ‘bpf_object__sanitize_prog’:
libbpf.c:5560:3: error: enumeration value ‘BPF_FUNC_unspec’ not
handled in switch [-Werror=switch-enum]
switch (func_id) {
^~~~~~
libbpf.c:5560:3: error: enumeration value ‘BPF_FUNC_map_lookup_elem’
not handled in switch [-Werror=switch-enum]
libbpf.c:5560:3: error: enumeration value ‘BPF_FUNC_map_update_elem’
not handled in switch [-Werror=switch-enum]
... and many more ...
My compiler:
$ cc --version
cc (GCC) 8.2.1 20180801 (Red Hat 8.2.1-2)
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> > tools/lib/bpf/Makefile | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
> > index bf8ed134cb8a..95c946e94ca5 100644
> > --- a/tools/lib/bpf/Makefile
> > +++ b/tools/lib/bpf/Makefile
> > @@ -107,7 +107,7 @@ ifeq ($(feature-reallocarray), 0)
> > endif
> >
> > # Append required CFLAGS
> > -override CFLAGS += $(EXTRA_WARNINGS)
> > +override CFLAGS += $(EXTRA_WARNINGS) -Wno-switch-enum
> > override CFLAGS += -Werror -Wall
> > override CFLAGS += -fPIC
> > override CFLAGS += $(INCLUDES)
> >
* Re: [PATCH bpf-next 1/7] libbpf: disable -Wswitch-enum compiler warning
From: Yonghong Song @ 2020-08-19 5:44 UTC
To: Andrii Nakryiko
Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
Daniel Borkmann, Kernel Team
On 8/18/20 6:39 PM, Andrii Nakryiko wrote:
> On Tue, Aug 18, 2020 at 6:23 PM Yonghong Song <yhs@fb.com> wrote:
>>
>>
>>
>> On 8/18/20 2:33 PM, Andrii Nakryiko wrote:
>>> That compilation warning is more annoying than helpful.
>>
>> Curious which compiler and which version caused this issue?
>> I did not hit this with gcc 8.2 or the latest clang in my environment.
>>
>
> Strange, I just tried on bpf-next, removed -Wno-switch-enum and got
> tons of errors:
>
> libbpf.c: In function ‘bpf_object__sanitize_prog’:
> libbpf.c:5560:3: error: enumeration value ‘BPF_FUNC_unspec’ not
> handled in switch [-Werror=switch-enum]
> switch (func_id) {
> ^~~~~~
> libbpf.c:5560:3: error: enumeration value ‘BPF_FUNC_map_lookup_elem’
> not handled in switch [-Werror=switch-enum]
> libbpf.c:5560:3: error: enumeration value ‘BPF_FUNC_map_update_elem’
> not handled in switch [-Werror=switch-enum]
> ... and many more ...
Okay, I can reproduce this now with the latest bpf-next, but not bpf.
Now I understand why you introduced this additional option.
The issue is introduced by the above code in patch #4. To compile
patch #4, you need -Wno-switch-enum.
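For reference, the switch in question (added by patch #4 later in this series)
covers only the four probe_read helper IDs and relies on a default case, which
-Wswitch-enum still flags for every other bpf_func_id value:

	switch (func_id) {
	case BPF_FUNC_probe_read_kernel:
	case BPF_FUNC_probe_read_user:
		if (!kernel_supports(FEAT_PROBE_READ_KERN))
			insn->imm = BPF_FUNC_probe_read;
		break;
	case BPF_FUNC_probe_read_kernel_str:
	case BPF_FUNC_probe_read_user_str:
		if (!kernel_supports(FEAT_PROBE_READ_KERN))
			insn->imm = BPF_FUNC_probe_read_str;
		break;
	default:	/* present, yet -Wswitch-enum warns anyway */
		break;
	}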
>
> My compiler:
>
> $ cc --version
> cc (GCC) 8.2.1 20180801 (Red Hat 8.2.1-2)
>
>
>>>
>>> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
>>> ---
>>> tools/lib/bpf/Makefile | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
>>> index bf8ed134cb8a..95c946e94ca5 100644
>>> --- a/tools/lib/bpf/Makefile
>>> +++ b/tools/lib/bpf/Makefile
>>> @@ -107,7 +107,7 @@ ifeq ($(feature-reallocarray), 0)
>>> endif
>>>
>>> # Append required CFLAGS
>>> -override CFLAGS += $(EXTRA_WARNINGS)
>>> +override CFLAGS += $(EXTRA_WARNINGS) -Wno-switch-enum
>>> override CFLAGS += -Werror -Wall
>>> override CFLAGS += -fPIC
>>> override CFLAGS += $(INCLUDES)
>>>
* [PATCH bpf-next 2/7] libbpf: make kernel feature probing lazy
From: Andrii Nakryiko @ 2020-08-18 21:33 UTC
To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko
Turn libbpf's kernel feature probing into lazily-performed checks. This allows
skipping unnecessary feature checks if a given BPF application doesn't rely on
a particular kernel feature. As the number of feature probes grows, libbpf will
perform fewer unnecessary syscalls and scale better long-term.
By decoupling feature checks from bpf_object, it's also possible to perform
feature probing from libbpf static helpers and low-level APIs, if necessary.
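For illustration, a call site then looks like this (a sketch matching the diff
below): the first call runs the probe once, and the memoized result is reused
by all subsequent calls:

	if (kernel_supports(FEAT_PROG_NAME))
		load_attr.name = prog->name;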
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/lib/bpf/libbpf.c | 150 +++++++++++++++++++++++------------------
1 file changed, 86 insertions(+), 64 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 5d20b2da4427..9cf22361f945 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -165,23 +165,26 @@ static inline __u64 ptr_to_u64(const void *ptr)
return (__u64) (unsigned long) ptr;
}
-struct bpf_capabilities {
+enum kern_feature_id {
/* v4.14: kernel support for program & map names. */
- __u32 name:1;
+ FEAT_PROG_NAME,
/* v5.2: kernel support for global data sections. */
- __u32 global_data:1;
+ FEAT_GLOBAL_DATA,
/* BTF_KIND_FUNC and BTF_KIND_FUNC_PROTO support */
- __u32 btf_func:1;
+ FEAT_BTF_FUNC,
/* BTF_KIND_VAR and BTF_KIND_DATASEC support */
- __u32 btf_datasec:1;
- /* BPF_F_MMAPABLE is supported for arrays */
- __u32 array_mmap:1;
+ FEAT_BTF_DATASEC,
/* BTF_FUNC_GLOBAL is supported */
- __u32 btf_func_global:1;
+ FEAT_BTF_GLOBAL_FUNC,
+ /* BPF_F_MMAPABLE is supported for arrays */
+ FEAT_ARRAY_MMAP,
/* kernel support for expected_attach_type in BPF_PROG_LOAD */
- __u32 exp_attach_type:1;
+ FEAT_EXP_ATTACH_TYPE,
+ __FEAT_CNT,
};
+static bool kernel_supports(enum kern_feature_id feat_id);
+
enum reloc_type {
RELO_LD64,
RELO_CALL,
@@ -253,8 +256,6 @@ struct bpf_program {
__u32 func_info_rec_size;
__u32 func_info_cnt;
- struct bpf_capabilities *caps;
-
void *line_info;
__u32 line_info_rec_size;
__u32 line_info_cnt;
@@ -436,8 +437,6 @@ struct bpf_object {
void *priv;
bpf_object_clear_priv_t clear_priv;
- struct bpf_capabilities caps;
-
char path[];
};
#define obj_elf_valid(o) ((o)->efile.elf)
@@ -561,7 +560,6 @@ bpf_object__add_program(struct bpf_object *obj, void *data, size_t size,
if (err)
return err;
- prog.caps = &obj->caps;
progs = obj->programs;
nr_progs = obj->nr_programs;
@@ -2340,18 +2338,18 @@ static bool section_have_execinstr(struct bpf_object *obj, int idx)
static bool btf_needs_sanitization(struct bpf_object *obj)
{
- bool has_func_global = obj->caps.btf_func_global;
- bool has_datasec = obj->caps.btf_datasec;
- bool has_func = obj->caps.btf_func;
+ bool has_func_global = kernel_supports(FEAT_BTF_GLOBAL_FUNC);
+ bool has_datasec = kernel_supports(FEAT_BTF_DATASEC);
+ bool has_func = kernel_supports(FEAT_BTF_FUNC);
return !has_func || !has_datasec || !has_func_global;
}
static void bpf_object__sanitize_btf(struct bpf_object *obj, struct btf *btf)
{
- bool has_func_global = obj->caps.btf_func_global;
- bool has_datasec = obj->caps.btf_datasec;
- bool has_func = obj->caps.btf_func;
+ bool has_func_global = kernel_supports(FEAT_BTF_GLOBAL_FUNC);
+ bool has_datasec = kernel_supports(FEAT_BTF_DATASEC);
+ bool has_func = kernel_supports(FEAT_BTF_FUNC);
struct btf_type *t;
int i, j, vlen;
@@ -3433,8 +3431,7 @@ bpf_object__probe_loading(struct bpf_object *obj)
return 0;
}
-static int
-bpf_object__probe_name(struct bpf_object *obj)
+static int probe_kern_prog_name(void)
{
struct bpf_load_program_attr attr;
struct bpf_insn insns[] = {
@@ -3453,15 +3450,14 @@ bpf_object__probe_name(struct bpf_object *obj)
attr.name = "test";
ret = bpf_load_program_xattr(&attr, NULL, 0);
if (ret >= 0) {
- obj->caps.name = 1;
close(ret);
+ return 1;
}
return 0;
}
-static int
-bpf_object__probe_global_data(struct bpf_object *obj)
+static int probe_kern_global_data(void)
{
struct bpf_load_program_attr prg_attr;
struct bpf_create_map_attr map_attr;
@@ -3498,16 +3494,16 @@ bpf_object__probe_global_data(struct bpf_object *obj)
prg_attr.license = "GPL";
ret = bpf_load_program_xattr(&prg_attr, NULL, 0);
+ close(map);
if (ret >= 0) {
- obj->caps.global_data = 1;
close(ret);
+ return 1;
}
- close(map);
return 0;
}
-static int bpf_object__probe_btf_func(struct bpf_object *obj)
+static int probe_kern_btf_func(void)
{
static const char strs[] = "\0int\0x\0a";
/* void x(int a) {} */
@@ -3525,7 +3521,6 @@ static int bpf_object__probe_btf_func(struct bpf_object *obj)
btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types),
strs, sizeof(strs));
if (btf_fd >= 0) {
- obj->caps.btf_func = 1;
close(btf_fd);
return 1;
}
@@ -3533,7 +3528,7 @@ static int bpf_object__probe_btf_func(struct bpf_object *obj)
return 0;
}
-static int bpf_object__probe_btf_func_global(struct bpf_object *obj)
+static int probe_kern_btf_func_global(void)
{
static const char strs[] = "\0int\0x\0a";
/* static void x(int a) {} */
@@ -3551,7 +3546,6 @@ static int bpf_object__probe_btf_func_global(struct bpf_object *obj)
btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types),
strs, sizeof(strs));
if (btf_fd >= 0) {
- obj->caps.btf_func_global = 1;
close(btf_fd);
return 1;
}
@@ -3559,7 +3553,7 @@ static int bpf_object__probe_btf_func_global(struct bpf_object *obj)
return 0;
}
-static int bpf_object__probe_btf_datasec(struct bpf_object *obj)
+static int probe_kern_btf_datasec(void)
{
static const char strs[] = "\0x\0.data";
/* static int a; */
@@ -3578,7 +3572,6 @@ static int bpf_object__probe_btf_datasec(struct bpf_object *obj)
btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types),
strs, sizeof(strs));
if (btf_fd >= 0) {
- obj->caps.btf_datasec = 1;
close(btf_fd);
return 1;
}
@@ -3586,7 +3579,7 @@ static int bpf_object__probe_btf_datasec(struct bpf_object *obj)
return 0;
}
-static int bpf_object__probe_array_mmap(struct bpf_object *obj)
+static int probe_kern_array_mmap(void)
{
struct bpf_create_map_attr attr = {
.map_type = BPF_MAP_TYPE_ARRAY,
@@ -3599,16 +3592,13 @@ static int bpf_object__probe_array_mmap(struct bpf_object *obj)
fd = bpf_create_map_xattr(&attr);
if (fd >= 0) {
- obj->caps.array_mmap = 1;
close(fd);
return 1;
}
-
return 0;
}
-static int
-bpf_object__probe_exp_attach_type(struct bpf_object *obj)
+static int probe_kern_exp_attach_type(void)
{
struct bpf_load_program_attr attr;
struct bpf_insn insns[] = {
@@ -3631,34 +3621,67 @@ bpf_object__probe_exp_attach_type(struct bpf_object *obj)
fd = bpf_load_program_xattr(&attr, NULL, 0);
if (fd >= 0) {
- obj->caps.exp_attach_type = 1;
close(fd);
return 1;
}
return 0;
}
-static int
-bpf_object__probe_caps(struct bpf_object *obj)
-{
- int (*probe_fn[])(struct bpf_object *obj) = {
- bpf_object__probe_name,
- bpf_object__probe_global_data,
- bpf_object__probe_btf_func,
- bpf_object__probe_btf_func_global,
- bpf_object__probe_btf_datasec,
- bpf_object__probe_array_mmap,
- bpf_object__probe_exp_attach_type,
- };
- int i, ret;
+enum kern_feature_result {
+ FEAT_UNKNOWN = 0,
+ FEAT_SUPPORTED = 1,
+ FEAT_MISSING = 2,
+};
- for (i = 0; i < ARRAY_SIZE(probe_fn); i++) {
- ret = probe_fn[i](obj);
- if (ret < 0)
- pr_debug("Probe #%d failed with %d.\n", i, ret);
+typedef int (*feature_probe_fn)(void);
+
+static struct kern_feature_desc {
+ const char *desc;
+ feature_probe_fn probe;
+ enum kern_feature_result res;
+} feature_probes[__FEAT_CNT] = {
+ [FEAT_PROG_NAME] = {
+ "BPF program name", probe_kern_prog_name,
+ },
+ [FEAT_GLOBAL_DATA] = {
+ "global variables", probe_kern_global_data,
+ },
+ [FEAT_BTF_FUNC] = {
+ "BTF functions", probe_kern_btf_func,
+ },
+ [FEAT_BTF_GLOBAL_FUNC] = {
+ "BTF global function", probe_kern_btf_func_global,
+ },
+ [FEAT_BTF_DATASEC] = {
+ "BTF data section and variable", probe_kern_btf_datasec,
+ },
+ [FEAT_ARRAY_MMAP] = {
+ "ARRAY map mmap()", probe_kern_array_mmap,
+ },
+ [FEAT_EXP_ATTACH_TYPE] = {
+ "BPF_PROG_LOAD expected_attach_type attribute",
+ probe_kern_exp_attach_type,
+ },
+};
+
+static bool kernel_supports(enum kern_feature_id feat_id)
+{
+ struct kern_feature_desc *feat = &feature_probes[feat_id];
+ int ret;
+
+ if (READ_ONCE(feat->res) == FEAT_UNKNOWN) {
+ ret = feat->probe();
+ if (ret > 0) {
+ WRITE_ONCE(feat->res, FEAT_SUPPORTED);
+ } else if (ret == 0) {
+ WRITE_ONCE(feat->res, FEAT_MISSING);
+ } else {
+ pr_warn("Detection of kernel %s support failed: %d\n", feat->desc, ret);
+ WRITE_ONCE(feat->res, FEAT_MISSING);
+ }
}
- return 0;
+ return READ_ONCE(feat->res) == FEAT_SUPPORTED;
}
static bool map_is_reuse_compat(const struct bpf_map *map, int map_fd)
@@ -3760,7 +3783,7 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map)
memset(&create_attr, 0, sizeof(create_attr));
- if (obj->caps.name)
+ if (kernel_supports(FEAT_PROG_NAME))
create_attr.name = map->name;
create_attr.map_ifindex = map->map_ifindex;
create_attr.map_type = def->type;
@@ -5364,12 +5387,12 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
memset(&load_attr, 0, sizeof(struct bpf_load_program_attr));
load_attr.prog_type = prog->type;
/* old kernels might not support specifying expected_attach_type */
- if (!prog->caps->exp_attach_type && prog->sec_def &&
+ if (!kernel_supports(FEAT_EXP_ATTACH_TYPE) && prog->sec_def &&
prog->sec_def->is_exp_attach_type_optional)
load_attr.expected_attach_type = 0;
else
load_attr.expected_attach_type = prog->expected_attach_type;
- if (prog->caps->name)
+ if (kernel_supports(FEAT_PROG_NAME))
load_attr.name = prog->name;
load_attr.insns = insns;
load_attr.insns_cnt = insns_cnt;
@@ -5387,7 +5410,7 @@ load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
}
/* specify func_info/line_info only if kernel supports them */
btf_fd = bpf_object__btf_fd(prog->obj);
- if (btf_fd >= 0 && prog->obj->caps.btf_func) {
+ if (btf_fd >= 0 && kernel_supports(FEAT_BTF_FUNC)) {
load_attr.prog_btf_fd = btf_fd;
load_attr.func_info = prog->func_info;
load_attr.func_info_rec_size = prog->func_info_rec_size;
@@ -5750,11 +5773,11 @@ static int bpf_object__sanitize_maps(struct bpf_object *obj)
bpf_object__for_each_map(m, obj) {
if (!bpf_map__is_internal(m))
continue;
- if (!obj->caps.global_data) {
+ if (!kernel_supports(FEAT_GLOBAL_DATA)) {
pr_warn("kernel doesn't support global data\n");
return -ENOTSUP;
}
- if (!obj->caps.array_mmap)
+ if (!kernel_supports(FEAT_ARRAY_MMAP))
m->def.map_flags ^= BPF_F_MMAPABLE;
}
@@ -5904,7 +5927,6 @@ int bpf_object__load_xattr(struct bpf_object_load_attr *attr)
}
err = bpf_object__probe_loading(obj);
- err = err ? : bpf_object__probe_caps(obj);
err = err ? : bpf_object__resolve_externs(obj, obj->kconfig);
err = err ? : bpf_object__sanitize_and_load_btf(obj);
err = err ? : bpf_object__sanitize_maps(obj);
--
2.24.1
* [PATCH bpf-next 3/7] libbpf: factor out common logic of testing and closing FD
From: Andrii Nakryiko @ 2020-08-18 21:33 UTC
To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko
Factor out the common piece of logic that detects support for a feature based
on a successfully created FD. Also take care of closing the FD, if one was
created.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/lib/bpf/libbpf.c | 70 ++++++++++--------------------------------
1 file changed, 17 insertions(+), 53 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 9cf22361f945..ab0c3a409eea 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -3431,6 +3431,13 @@ bpf_object__probe_loading(struct bpf_object *obj)
return 0;
}
+static int probe_fd(int fd)
+{
+ if (fd >= 0)
+ close(fd);
+ return fd >= 0;
+}
+
static int probe_kern_prog_name(void)
{
struct bpf_load_program_attr attr;
@@ -3449,12 +3456,7 @@ static int probe_kern_prog_name(void)
attr.license = "GPL";
attr.name = "test";
ret = bpf_load_program_xattr(&attr, NULL, 0);
- if (ret >= 0) {
- close(ret);
- return 1;
- }
-
- return 0;
+ return probe_fd(ret);
}
static int probe_kern_global_data(void)
@@ -3495,12 +3497,7 @@ static int probe_kern_global_data(void)
ret = bpf_load_program_xattr(&prg_attr, NULL, 0);
close(map);
- if (ret >= 0) {
- close(ret);
- return 1;
- }
-
- return 0;
+ return probe_fd(ret);
}
static int probe_kern_btf_func(void)
@@ -3516,16 +3513,9 @@ static int probe_kern_btf_func(void)
/* FUNC x */ /* [3] */
BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, 0), 2),
};
- int btf_fd;
- btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs));
- if (btf_fd >= 0) {
- close(btf_fd);
- return 1;
- }
-
- return 0;
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs)));
}
static int probe_kern_btf_func_global(void)
@@ -3541,16 +3531,9 @@ static int probe_kern_btf_func_global(void)
/* FUNC x BTF_FUNC_GLOBAL */ /* [3] */
BTF_TYPE_ENC(5, BTF_INFO_ENC(BTF_KIND_FUNC, 0, BTF_FUNC_GLOBAL), 2),
};
- int btf_fd;
- btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs));
- if (btf_fd >= 0) {
- close(btf_fd);
- return 1;
- }
-
- return 0;
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs)));
}
static int probe_kern_btf_datasec(void)
@@ -3567,16 +3550,9 @@ static int probe_kern_btf_datasec(void)
BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4),
BTF_VAR_SECINFO_ENC(2, 0, 4),
};
- int btf_fd;
- btf_fd = libbpf__load_raw_btf((char *)types, sizeof(types),
- strs, sizeof(strs));
- if (btf_fd >= 0) {
- close(btf_fd);
- return 1;
- }
-
- return 0;
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs)));
}
static int probe_kern_array_mmap(void)
@@ -3588,14 +3564,8 @@ static int probe_kern_array_mmap(void)
.value_size = sizeof(int),
.max_entries = 1,
};
- int fd;
- fd = bpf_create_map_xattr(&attr);
- if (fd >= 0) {
- close(fd);
- return 1;
- }
- return 0;
+ return probe_fd(bpf_create_map_xattr(&attr));
}
static int probe_kern_exp_attach_type(void)
@@ -3605,7 +3575,6 @@ static int probe_kern_exp_attach_type(void)
BPF_MOV64_IMM(BPF_REG_0, 0),
BPF_EXIT_INSN(),
};
- int fd;
memset(&attr, 0, sizeof(attr));
/* use any valid combination of program type and (optional)
@@ -3619,12 +3588,7 @@ static int probe_kern_exp_attach_type(void)
attr.insns_cnt = ARRAY_SIZE(insns);
attr.license = "GPL";
- fd = bpf_load_program_xattr(&attr, NULL, 0);
- if (fd >= 0) {
- close(fd);
- return 1;
- }
- return 0;
+ return probe_fd(bpf_load_program_xattr(&attr, NULL, 0));
}
enum kern_feature_result {
--
2.24.1
* [PATCH bpf-next 4/7] libbpf: sanitize BPF program code for bpf_probe_read_{kernel,user}[_str]
From: Andrii Nakryiko @ 2020-08-18 21:33 UTC
To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko
Add a BPF program code sanitization pass, replacing calls to the
bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str](), if
libbpf detects that the kernel doesn't support the new variants.
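For illustration, a sketch of a hypothetical program that benefits from this:
it can use the new helper unconditionally, and on kernels older than v5.5
(which introduced bpf_probe_read_kernel()) libbpf rewrites the call's imm
field to BPF_FUNC_probe_read before issuing BPF_PROG_LOAD:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

SEC("kprobe/do_nanosleep")
int BPF_KPROBE(trace_nanosleep)
{
	struct task_struct *task = (void *)bpf_get_current_task();
	char comm[16];

	/* transparently becomes bpf_probe_read() on old kernels */
	bpf_probe_read_kernel(comm, sizeof(comm), &task->comm);
	return 0;
}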
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/lib/bpf/libbpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index ab0c3a409eea..bdc08f89a5c0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -180,6 +180,8 @@ enum kern_feature_id {
FEAT_ARRAY_MMAP,
/* kernel support for expected_attach_type in BPF_PROG_LOAD */
FEAT_EXP_ATTACH_TYPE,
+ /* bpf_probe_read_{kernel,user}[_str] helpers */
+ FEAT_PROBE_READ_KERN,
__FEAT_CNT,
};
@@ -3591,6 +3593,27 @@ static int probe_kern_exp_attach_type(void)
return probe_fd(bpf_load_program_xattr(&attr, NULL, 0));
}
+static int probe_kern_probe_read_kernel(void)
+{
+ struct bpf_load_program_attr attr;
+ struct bpf_insn insns[] = {
+ BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), /* r1 = r10 (fp) */
+ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), /* r1 += -8 */
+ BPF_MOV64_IMM(BPF_REG_2, 8), /* r2 = 8 */
+ BPF_MOV64_IMM(BPF_REG_3, 0), /* r3 = 0 */
+ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
+ BPF_EXIT_INSN(),
+ };
+
+ memset(&attr, 0, sizeof(attr));
+ attr.prog_type = BPF_PROG_TYPE_KPROBE;
+ attr.insns = insns;
+ attr.insns_cnt = ARRAY_SIZE(insns);
+ attr.license = "GPL";
+
+ return probe_fd(bpf_load_program_xattr(&attr, NULL, 0));
+}
+
enum kern_feature_result {
FEAT_UNKNOWN = 0,
FEAT_SUPPORTED = 1,
@@ -3626,6 +3649,9 @@ static struct kern_feature_desc {
"BPF_PROG_LOAD expected_attach_type attribute",
probe_kern_exp_attach_type,
},
+ [FEAT_PROBE_READ_KERN] = {
+ "bpf_probe_read_kernel() helper", probe_kern_probe_read_kernel,
+ }
};
static bool kernel_supports(enum kern_feature_id feat_id)
@@ -5335,6 +5361,53 @@ static int bpf_object__collect_reloc(struct bpf_object *obj)
return 0;
}
+static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id)
+{
+ __u8 class = BPF_CLASS(insn->code);
+
+ if ((class == BPF_JMP || class == BPF_JMP32) &&
+ BPF_OP(insn->code) == BPF_CALL &&
+ BPF_SRC(insn->code) == BPF_K &&
+ insn->src_reg == 0 && insn->dst_reg == 0) {
+ if (func_id)
+ *func_id = insn->imm;
+ return true;
+ }
+ return false;
+}
+
+static int bpf_object__sanitize_prog(struct bpf_object* obj, struct bpf_program *prog)
+{
+ struct bpf_insn *insn = prog->insns;
+ enum bpf_func_id func_id;
+ int i;
+
+ for (i = 0; i < prog->insns_cnt; i++, insn++) {
+ if (!insn_is_helper_call(insn, &func_id))
+ continue;
+
+ /* on kernels that don't yet support
+ * bpf_probe_read_{kernel,user}[_str] helpers, fall back
+ * to bpf_probe_read() which works well for old kernels
+ */
+ switch (func_id) {
+ case BPF_FUNC_probe_read_kernel:
+ case BPF_FUNC_probe_read_user:
+ if (!kernel_supports(FEAT_PROBE_READ_KERN))
+ insn->imm = BPF_FUNC_probe_read;
+ break;
+ case BPF_FUNC_probe_read_kernel_str:
+ case BPF_FUNC_probe_read_user_str:
+ if (!kernel_supports(FEAT_PROBE_READ_KERN))
+ insn->imm = BPF_FUNC_probe_read_str;
+ break;
+ default:
+ break;
+ }
+ }
+ return 0;
+}
+
static int
load_program(struct bpf_program *prog, struct bpf_insn *insns, int insns_cnt,
char *license, __u32 kern_version, int *pfd)
@@ -5549,6 +5622,13 @@ bpf_object__load_progs(struct bpf_object *obj, int log_level)
size_t i;
int err;
+ for (i = 0; i < obj->nr_programs; i++) {
+ prog = &obj->programs[i];
+ err = bpf_object__sanitize_prog(obj, prog);
+ if (err)
+ return err;
+ }
+
for (i = 0; i < obj->nr_programs; i++) {
prog = &obj->programs[i];
if (bpf_program__is_function_storage(prog, obj))
--
2.24.1
* Re: [PATCH bpf-next 4/7] libbpf: sanitize BPF program code for bpf_probe_read_{kernel,user}[_str]
From: Yonghong Song @ 2020-08-19 1:42 UTC
To: Andrii Nakryiko, bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team
On 8/18/20 2:33 PM, Andrii Nakryiko wrote:
> Add a BPF program code sanitization pass, replacing calls to the
> bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str](), if
> libbpf detects that the kernel doesn't support the new variants.
I know this has been merged. The whole patch set looks good to me.
A few nits and questions below.
>
> Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> ---
> tools/lib/bpf/libbpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 80 insertions(+)
>
> diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> index ab0c3a409eea..bdc08f89a5c0 100644
> --- a/tools/lib/bpf/libbpf.c
> +++ b/tools/lib/bpf/libbpf.c
> @@ -180,6 +180,8 @@ enum kern_feature_id {
> FEAT_ARRAY_MMAP,
> /* kernel support for expected_attach_type in BPF_PROG_LOAD */
> FEAT_EXP_ATTACH_TYPE,
> + /* bpf_probe_read_{kernel,user}[_str] helpers */
> + FEAT_PROBE_READ_KERN,
> __FEAT_CNT,
> };
>
> @@ -3591,6 +3593,27 @@ static int probe_kern_exp_attach_type(void)
> return probe_fd(bpf_load_program_xattr(&attr, NULL, 0));
> }
>
[...]
>
> +static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id)
> +{
> + __u8 class = BPF_CLASS(insn->code);
> +
> + if ((class == BPF_JMP || class == BPF_JMP32) &&
Do we support BPF_JMP32 + BPF_CALL ... as a helper call?
I am not aware of this.
> + BPF_OP(insn->code) == BPF_CALL &&
> + BPF_SRC(insn->code) == BPF_K &&
> + insn->src_reg == 0 && insn->dst_reg == 0) {
> + if (func_id)
> + *func_id = insn->imm;
Looks like func_id is always non-NULL. Unless this is to support future
usage where func_id may be NULL, the above condition is probably not needed.
> + return true;
> + }
> + return false;
> +}
> +
[...]
* Re: [PATCH bpf-next 4/7] libbpf: sanitize BPF program code for bpf_probe_read_{kernel,user}[_str]
From: Andrii Nakryiko @ 2020-08-19 20:11 UTC
To: Yonghong Song
Cc: Andrii Nakryiko, bpf, Networking, Alexei Starovoitov,
Daniel Borkmann, Kernel Team
On Tue, Aug 18, 2020 at 6:42 PM Yonghong Song <yhs@fb.com> wrote:
>
>
>
> On 8/18/20 2:33 PM, Andrii Nakryiko wrote:
> > Add a BPF program code sanitization pass, replacing calls to the
> > bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str](), if
> > libbpf detects that the kernel doesn't support the new variants.
>
> I know this has been merged. The whole patch set looks good to me.
> A few nits and questions below.
>
> >
> > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > ---
> > tools/lib/bpf/libbpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 80 insertions(+)
> >
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index ab0c3a409eea..bdc08f89a5c0 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -180,6 +180,8 @@ enum kern_feature_id {
> > FEAT_ARRAY_MMAP,
> > /* kernel support for expected_attach_type in BPF_PROG_LOAD */
> > FEAT_EXP_ATTACH_TYPE,
> > + /* bpf_probe_read_{kernel,user}[_str] helpers */
> > + FEAT_PROBE_READ_KERN,
> > __FEAT_CNT,
> > };
> >
> > @@ -3591,6 +3593,27 @@ static int probe_kern_exp_attach_type(void)
> > return probe_fd(bpf_load_program_xattr(&attr, NULL, 0));
> > }
> >
> [...]
> >
> > +static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id)
> > +{
> > + __u8 class = BPF_CLASS(insn->code);
> > +
> > + if ((class == BPF_JMP || class == BPF_JMP32) &&
>
> Do we support BPF_JMP32 + BPF_CALL ... as a helper call?
> I am not aware of this.
Verifier seems to support both. Check do_check in
kernel/bpf/verifier.c, around line 9000. So I decided to also support
it, even if Clang doesn't emit it (yet?).
>
> > + BPF_OP(insn->code) == BPF_CALL &&
> > + BPF_SRC(insn->code) == BPF_K &&
> > + insn->src_reg == 0 && insn->dst_reg == 0) {
> > + if (func_id)
> > + *func_id = insn->imm;
>
> Looks like func_id is always non-NULL. Unless this is to support future
> usage where func_id may be NULL, the above condition is probably not needed.
Yeah, not sure why I assumed it might be optional; maybe the first
version of the code used to pass NULL in some other place. But I think
it's fine: this is a generic helper function that might be used later
as well. So I'd just keep it as is; it doesn't hurt.
>
> > + return true;
> > + }
> > + return false;
> > +}
> > +
> [...]
* Re: [PATCH bpf-next 4/7] libbpf: sanitize BPF program code for bpf_probe_read_{kernel,user}[_str]
From: Alexei Starovoitov @ 2020-08-19 20:15 UTC
To: Andrii Nakryiko
Cc: Yonghong Song, Andrii Nakryiko, bpf, Networking,
Alexei Starovoitov, Daniel Borkmann, Kernel Team
On Wed, Aug 19, 2020 at 1:13 PM Andrii Nakryiko
<andrii.nakryiko@gmail.com> wrote:
>
> On Tue, Aug 18, 2020 at 6:42 PM Yonghong Song <yhs@fb.com> wrote:
> >
> >
> >
> > On 8/18/20 2:33 PM, Andrii Nakryiko wrote:
> > > Add a BPF program code sanitization pass, replacing calls to the
> > > bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str](), if
> > > libbpf detects that the kernel doesn't support the new variants.
> >
> > I know this has been merged. The whole patch set looks good to me.
> > A few nits and questions below.
> >
> > >
> > > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > > ---
> > > tools/lib/bpf/libbpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++
> > > 1 file changed, 80 insertions(+)
> > >
> > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > index ab0c3a409eea..bdc08f89a5c0 100644
> > > --- a/tools/lib/bpf/libbpf.c
> > > +++ b/tools/lib/bpf/libbpf.c
> > > @@ -180,6 +180,8 @@ enum kern_feature_id {
> > > FEAT_ARRAY_MMAP,
> > > /* kernel support for expected_attach_type in BPF_PROG_LOAD */
> > > FEAT_EXP_ATTACH_TYPE,
> > > + /* bpf_probe_read_{kernel,user}[_str] helpers */
> > > + FEAT_PROBE_READ_KERN,
> > > __FEAT_CNT,
> > > };
> > >
> > > @@ -3591,6 +3593,27 @@ static int probe_kern_exp_attach_type(void)
> > > return probe_fd(bpf_load_program_xattr(&attr, NULL, 0));
> > > }
> > >
> > [...]
> > >
> > > +static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id)
> > > +{
> > > + __u8 class = BPF_CLASS(insn->code);
> > > +
> > > + if ((class == BPF_JMP || class == BPF_JMP32) &&
> >
> > Do we support BPF_JMP32 + BPF_CALL ... as a helper call?
> > I am not aware of this.
>
> Verifier seems to support both. Check do_check in
> kernel/bpf/verifier.c, around line 9000. So I decided to also support
> it, even if Clang doesn't emit it (yet?).
please check a few lines below 9000 ;)
jmp32 | call is rejected.
I would remove that from libbpf as well.
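For reference, the check in do_check() reads roughly as follows; the last
condition is the one rejecting JMP32-class calls:

	} else if (opcode == BPF_CALL) {
		if (BPF_SRC(insn->code) != BPF_K ||
		    insn->off != 0 ||
		    (insn->src_reg != BPF_REG_0 &&
		     insn->src_reg != BPF_PSEUDO_CALL) ||
		    insn->dst_reg != BPF_REG_0 ||
		    class == BPF_JMP32) {
			verbose(env, "BPF_CALL uses reserved fields\n");
			return -EINVAL;
		}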
* Re: [PATCH bpf-next 4/7] libbpf: sanitize BPF program code for bpf_probe_read_{kernel,user}[_str]
From: Andrii Nakryiko @ 2020-08-19 20:23 UTC
To: Alexei Starovoitov
Cc: Yonghong Song, Andrii Nakryiko, bpf, Networking,
Alexei Starovoitov, Daniel Borkmann, Kernel Team
On Wed, Aug 19, 2020 at 1:15 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Aug 19, 2020 at 1:13 PM Andrii Nakryiko
> <andrii.nakryiko@gmail.com> wrote:
> >
> > On Tue, Aug 18, 2020 at 6:42 PM Yonghong Song <yhs@fb.com> wrote:
> > >
> > >
> > >
> > > On 8/18/20 2:33 PM, Andrii Nakryiko wrote:
> > > > Add a BPF program code sanitization pass, replacing calls to the
> > > > bpf_probe_read_{kernel,user}[_str]() helpers with bpf_probe_read[_str](), if
> > > > libbpf detects that the kernel doesn't support the new variants.
> > >
> > > I know this has been merged. The whole patch set looks good to me.
> > > A few nits and questions below.
> > >
> > > >
> > > > Signed-off-by: Andrii Nakryiko <andriin@fb.com>
> > > > ---
> > > > tools/lib/bpf/libbpf.c | 80 ++++++++++++++++++++++++++++++++++++++++++
> > > > 1 file changed, 80 insertions(+)
> > > >
> > > > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > > > index ab0c3a409eea..bdc08f89a5c0 100644
> > > > --- a/tools/lib/bpf/libbpf.c
> > > > +++ b/tools/lib/bpf/libbpf.c
> > > > @@ -180,6 +180,8 @@ enum kern_feature_id {
> > > > FEAT_ARRAY_MMAP,
> > > > /* kernel support for expected_attach_type in BPF_PROG_LOAD */
> > > > FEAT_EXP_ATTACH_TYPE,
> > > > + /* bpf_probe_read_{kernel,user}[_str] helpers */
> > > > + FEAT_PROBE_READ_KERN,
> > > > __FEAT_CNT,
> > > > };
> > > >
> > > > @@ -3591,6 +3593,27 @@ static int probe_kern_exp_attach_type(void)
> > > > return probe_fd(bpf_load_program_xattr(&attr, NULL, 0));
> > > > }
> > > >
> > > [...]
> > > >
> > > > +static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id)
> > > > +{
> > > > + __u8 class = BPF_CLASS(insn->code);
> > > > +
> > > > + if ((class == BPF_JMP || class == BPF_JMP32) &&
> > >
> > > Do we support BPF_JMP32 + BPF_CALL ... as a helper call?
> > > I am not aware of this.
> >
> > Verifier seems to support both. Check do_check in
> > kernel/bpf/verifier.c, around line 9000. So I decided to also support
> > it, even if Clang doesn't emit it (yet?).
>
> please check a few lines below 9000 ;)
> jmp32 | call is rejected.
> I would remove that from libbpf as well.
I've stared at that condition multiple times and didn't notice the
"class == BPF_JMP32" part... Yeah, sure, I'll drop that, of course.
* [PATCH bpf-next 5/7] selftests/bpf: fix test_vmlinux test to use bpf_probe_read_user()
From: Andrii Nakryiko @ 2020-08-18 21:33 UTC
To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko
The test is reading a UAPI kernel structure from user-space. So it doesn't
need CO-RE relocations and has to use bpf_probe_read_user().
Fixes: acbd06206bbb ("selftests/bpf: Add vmlinux.h selftest exercising tracing of syscalls")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/testing/selftests/bpf/progs/test_vmlinux.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/tools/testing/selftests/bpf/progs/test_vmlinux.c b/tools/testing/selftests/bpf/progs/test_vmlinux.c
index 29fa09d6a6c6..e9dfa0313d1b 100644
--- a/tools/testing/selftests/bpf/progs/test_vmlinux.c
+++ b/tools/testing/selftests/bpf/progs/test_vmlinux.c
@@ -19,12 +19,14 @@ SEC("tp/syscalls/sys_enter_nanosleep")
int handle__tp(struct trace_event_raw_sys_enter *args)
{
struct __kernel_timespec *ts;
+ long tv_nsec;
if (args->id != __NR_nanosleep)
return 0;
ts = (void *)args->args[0];
- if (BPF_CORE_READ(ts, tv_nsec) != MY_TV_NSEC)
+ if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) ||
+ tv_nsec != MY_TV_NSEC)
return 0;
tp_called = true;
@@ -35,12 +37,14 @@ SEC("raw_tp/sys_enter")
int BPF_PROG(handle__raw_tp, struct pt_regs *regs, long id)
{
struct __kernel_timespec *ts;
+ long tv_nsec;
if (id != __NR_nanosleep)
return 0;
ts = (void *)PT_REGS_PARM1_CORE(regs);
- if (BPF_CORE_READ(ts, tv_nsec) != MY_TV_NSEC)
+ if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) ||
+ tv_nsec != MY_TV_NSEC)
return 0;
raw_tp_called = true;
@@ -51,12 +55,14 @@ SEC("tp_btf/sys_enter")
int BPF_PROG(handle__tp_btf, struct pt_regs *regs, long id)
{
struct __kernel_timespec *ts;
+ long tv_nsec;
if (id != __NR_nanosleep)
return 0;
ts = (void *)PT_REGS_PARM1_CORE(regs);
- if (BPF_CORE_READ(ts, tv_nsec) != MY_TV_NSEC)
+ if (bpf_probe_read_user(&tv_nsec, sizeof(ts->tv_nsec), &ts->tv_nsec) ||
+ tv_nsec != MY_TV_NSEC)
return 0;
tp_btf_called = true;
--
2.24.1
* [PATCH bpf-next 6/7] libbpf: switch tracing and CO-RE helper macros to bpf_probe_read_kernel()
From: Andrii Nakryiko @ 2020-08-18 21:33 UTC
To: bpf, netdev, ast, daniel
Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko, Ilya Leoshkevich
Now that libbpf can automatically fall back to bpf_probe_read() on old kernels
that don't yet support bpf_probe_read_kernel(), switch libbpf's BPF-side helper
macros to use the appropriate BPF helper for reading kernel data.
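For illustration, hypothetical usage: with this change, every pointer-chasing
step of a chain like the one below is emitted as a bpf_probe_read_kernel()
call carrying a CO-RE relocation (and patch #4 still rewrites it to
bpf_probe_read() on old kernels):

	struct task_struct *task = (void *)bpf_get_current_task();
	pid_t ppid = BPF_CORE_READ(task, real_parent, tgid);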
Cc: Ilya Leoshkevich <iii@linux.ibm.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/lib/bpf/bpf_core_read.h | 40 +++++++++++++++++++----------------
tools/lib/bpf/bpf_tracing.h | 4 ++--
2 files changed, 24 insertions(+), 20 deletions(-)
diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
index eae5cccff761..03152cb143b7 100644
--- a/tools/lib/bpf/bpf_core_read.h
+++ b/tools/lib/bpf/bpf_core_read.h
@@ -24,27 +24,29 @@ enum bpf_field_info_kind {
#if __BYTE_ORDER == __LITTLE_ENDIAN
#define __CORE_BITFIELD_PROBE_READ(dst, src, fld) \
- bpf_probe_read((void *)dst, \
- __CORE_RELO(src, fld, BYTE_SIZE), \
- (const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
+ bpf_probe_read_kernel( \
+ (void *)dst, \
+ __CORE_RELO(src, fld, BYTE_SIZE), \
+ (const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
#else
/* semantics of LSHIFT_64 assumes loading values into low-ordered bytes, so
* for big-endian we need to adjust destination pointer accordingly, based on
* field byte size
*/
#define __CORE_BITFIELD_PROBE_READ(dst, src, fld) \
- bpf_probe_read((void *)dst + (8 - __CORE_RELO(src, fld, BYTE_SIZE)), \
- __CORE_RELO(src, fld, BYTE_SIZE), \
- (const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
+ bpf_probe_read_kernel( \
+ (void *)dst + (8 - __CORE_RELO(src, fld, BYTE_SIZE)), \
+ __CORE_RELO(src, fld, BYTE_SIZE), \
+ (const void *)src + __CORE_RELO(src, fld, BYTE_OFFSET))
#endif
/*
* Extract bitfield, identified by s->field, and return its value as u64.
* All this is done in relocatable manner, so bitfield changes such as
* signedness, bit size, offset changes, this will be handled automatically.
- * This version of macro is using bpf_probe_read() to read underlying integer
- * storage. Macro functions as an expression and its return type is
- * bpf_probe_read()'s return value: 0, on success, <0 on error.
+ * This version of macro is using bpf_probe_read_kernel() to read underlying
+ * integer storage. Macro functions as an expression and its return type is
+ * bpf_probe_read_kernel()'s return value: 0, on success, <0 on error.
*/
#define BPF_CORE_READ_BITFIELD_PROBED(s, field) ({ \
unsigned long long val = 0; \
@@ -99,8 +101,8 @@ enum bpf_field_info_kind {
__builtin_preserve_field_info(field, BPF_FIELD_BYTE_SIZE)
/*
- * bpf_core_read() abstracts away bpf_probe_read() call and captures offset
- * relocation for source address using __builtin_preserve_access_index()
+ * bpf_core_read() abstracts away bpf_probe_read_kernel() call and captures
+ * offset relocation for source address using __builtin_preserve_access_index()
* built-in, provided by Clang.
*
* __builtin_preserve_access_index() takes as an argument an expression of
@@ -115,8 +117,8 @@ enum bpf_field_info_kind {
* (local) BTF, used to record relocation.
*/
#define bpf_core_read(dst, sz, src) \
- bpf_probe_read(dst, sz, \
- (const void *)__builtin_preserve_access_index(src))
+ bpf_probe_read_kernel(dst, sz, \
+ (const void *)__builtin_preserve_access_index(src))
/*
* bpf_core_read_str() is a thin wrapper around bpf_probe_read_str()
@@ -124,8 +126,8 @@ enum bpf_field_info_kind {
* argument.
*/
#define bpf_core_read_str(dst, sz, src) \
- bpf_probe_read_str(dst, sz, \
- (const void *)__builtin_preserve_access_index(src))
+ bpf_probe_read_kernel_str(dst, sz, \
+ (const void *)__builtin_preserve_access_index(src))
#define ___concat(a, b) a ## b
#define ___apply(fn, n) ___concat(fn, n)
@@ -239,15 +241,17 @@ enum bpf_field_info_kind {
* int x = BPF_CORE_READ(s, a.b.c, d.e, f, g);
*
* BPF_CORE_READ will decompose above statement into 4 bpf_core_read (BPF
- * CO-RE relocatable bpf_probe_read() wrapper) calls, logically equivalent to:
+ * CO-RE relocatable bpf_probe_read_kernel() wrapper) calls, logically
+ * equivalent to:
* 1. const void *__t = s->a.b.c;
* 2. __t = __t->d.e;
* 3. __t = __t->f;
* 4. return __t->g;
*
* Equivalence is logical, because there is a heavy type casting/preservation
- * involved, as well as all the reads are happening through bpf_probe_read()
- * calls using __builtin_preserve_access_index() to emit CO-RE relocations.
+ * involved, as well as all the reads are happening through
+ * bpf_probe_read_kernel() calls using __builtin_preserve_access_index() to
+ * emit CO-RE relocations.
*
* N.B. Only up to 9 "field accessors" are supported, which should be more
* than enough for any practical purpose.
diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
index eebf020cbe3e..f9ef37707888 100644
--- a/tools/lib/bpf/bpf_tracing.h
+++ b/tools/lib/bpf/bpf_tracing.h
@@ -289,9 +289,9 @@ struct pt_regs;
#define BPF_KRETPROBE_READ_RET_IP BPF_KPROBE_READ_RET_IP
#else
#define BPF_KPROBE_READ_RET_IP(ip, ctx) \
- ({ bpf_probe_read(&(ip), sizeof(ip), (void *)PT_REGS_RET(ctx)); })
+ ({ bpf_probe_read_kernel(&(ip), sizeof(ip), (void *)PT_REGS_RET(ctx)); })
#define BPF_KRETPROBE_READ_RET_IP(ip, ctx) \
- ({ bpf_probe_read(&(ip), sizeof(ip), \
+ ({ bpf_probe_read_kernel(&(ip), sizeof(ip), \
(void *)(PT_REGS_FP(ctx) + sizeof(ip))); })
#endif
--
2.24.1
* [PATCH bpf-next 7/7] libbpf: detect minimal BTF support and skip BTF loading, if missing
From: Andrii Nakryiko @ 2020-08-18 21:33 UTC
To: bpf, netdev, ast, daniel; +Cc: andrii.nakryiko, kernel-team, Andrii Nakryiko
Detect whether a kernel supports any BTF at all, and if not, don't even
attempt loading BTF to avoid unnecessary log messages like:
libbpf: Error loading BTF: Invalid argument(22)
libbpf: Error loading .BTF into kernel: -22. BTF is optional, ignoring.
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
---
tools/lib/bpf/libbpf.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index bdc08f89a5c0..5f971796d196 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -170,6 +170,8 @@ enum kern_feature_id {
FEAT_PROG_NAME,
/* v5.2: kernel support for global data sections. */
FEAT_GLOBAL_DATA,
+ /* BTF support */
+ FEAT_BTF,
/* BTF_KIND_FUNC and BTF_KIND_FUNC_PROTO support */
FEAT_BTF_FUNC,
/* BTF_KIND_VAR and BTF_KIND_DATASEC support */
@@ -2533,6 +2535,15 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
if (!obj->btf)
return 0;
+ if (!kernel_supports(FEAT_BTF)) {
+ if (kernel_needs_btf(obj)) {
+ err = -EOPNOTSUPP;
+ goto report;
+ }
+ pr_debug("Kernel doesn't support BTF, skipping uploading it.\n");
+ return 0;
+ }
+
sanitize = btf_needs_sanitization(obj);
if (sanitize) {
const void *raw_data;
@@ -2558,6 +2569,7 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
}
btf__free(kern_btf);
}
+report:
if (err) {
btf_mandatory = kernel_needs_btf(obj);
pr_warn("Error loading .BTF into kernel: %d. %s\n", err,
@@ -3502,6 +3514,18 @@ static int probe_kern_global_data(void)
return probe_fd(ret);
}
+static int probe_kern_btf(void)
+{
+ static const char strs[] = "\0int";
+ __u32 types[] = {
+ /* int */
+ BTF_TYPE_INT_ENC(1, BTF_INT_SIGNED, 0, 32, 4),
+ };
+
+ return probe_fd(libbpf__load_raw_btf((char *)types, sizeof(types),
+ strs, sizeof(strs)));
+}
+
static int probe_kern_btf_func(void)
{
static const char strs[] = "\0int\0x\0a";
@@ -3633,6 +3657,9 @@ static struct kern_feature_desc {
[FEAT_GLOBAL_DATA] = {
"global variables", probe_kern_global_data,
},
+ [FEAT_BTF] = {
+ "minimal BTF", probe_kern_btf,
+ },
[FEAT_BTF_FUNC] = {
"BTF functions", probe_kern_btf_func,
},
--
2.24.1
* Re: [PATCH bpf-next 0/7] libbpf feature probing and sanitization improvements
From: Alexei Starovoitov @ 2020-08-19 0:24 UTC
To: Andrii Nakryiko; +Cc: bpf, netdev, ast, daniel, andrii.nakryiko, kernel-team
On Tue, Aug 18, 2020 at 02:33:49PM -0700, Andrii Nakryiko wrote:
> This patch set refactors libbpf feature probing to be done lazily on an
> as-needed basis, instead of proactively testing all possible features libbpf
> knows about. This allows such detections and mitigations to scale better,
> without issuing unnecessary syscalls on each bpf_object__load() call. It's
> also now memoized globally, instead of per-bpf_object.
>
> Building on that, libbpf will now detect the availability of the
> bpf_probe_read_kernel() helper (which implies the -user and -str variants as
> well), and will sanitize BPF program code by replacing such references with
> the generic variants (bpf_probe_read[_str]()). This allows migrating all BPF
> programs to the proper -kernel/-user probing helpers, without the fear of
> breaking them on old kernels.
>
> With that, update BPF_CORE_READ() and related macros to use
> bpf_probe_read_kernel(), as it doesn't make much sense to do CO-RE relocations
> against user-space types. And the only class of cases in which a BPF program
> might read a kernel type from user-space is UAPI data structures, which by
> definition are fixed in their memory layout and don't need relocating. This is
> exemplified by the test_vmlinux test, which is fixed as part of this patch set
> as well. BPF_CORE_READ() is useful for chaining bpf_probe_read_{kernel,user}()
> calls together even without relocation, so we might add user-space variants,
> if there is a need.
>
> While making libbpf more useful for older kernels, also improve the handling
> of a complete lack of BTF support in the kernel by not even attempting to
> load BTF info into the kernel. This eliminates the annoying warning about
> lack of BTF support in the kernel and the map creation retry without BTF. If
> the user is using features that require kernel BTF support, it will still
> fail, of course.
Applied, Thanks