* [PATCH bpf-next v2 0/2] bpf: Build with -Wcast-function-type
From: Kees Cook @ 2021-09-28 23:09 UTC
To: Alexei Starovoitov
Cc: Kees Cook, Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau,
    Song Liu, Yonghong Song, John Fastabend, KP Singh, linux-kernel,
    netdev, bpf, linux-hardening

Hi,

In order to keep ahead of cases in the kernel where Control Flow
Integrity (CFI) may trip over function call casts, enabling
-Wcast-function-type is helpful. To that end, replace BPF_CAST_CALL(),
as it triggers warnings with this option and is now one of the last
places in the kernel in need of fixing.

Thanks,

-Kees

v2:
- rebase to bpf-next
- add acks
v1: https://lore.kernel.org/lkml/20210927182700.2980499-1-keescook@chromium.org

Kees Cook (2):
  bpf: Replace "want address" users of BPF_CAST_CALL with BPF_CALL_IMM
  bpf: Replace callers of BPF_CAST_CALL with proper function typedef

 include/linux/bpf.h    |  4 +++-
 include/linux/filter.h |  7 +++----
 kernel/bpf/arraymap.c  |  7 +++----
 kernel/bpf/hashtab.c   | 13 ++++++-------
 kernel/bpf/helpers.c   |  5 ++---
 kernel/bpf/verifier.c  | 26 +++++++++-----------------
 lib/test_bpf.c         |  2 +-
 7 files changed, 27 insertions(+), 37 deletions(-)

-- 
2.30.2
* [PATCH bpf-next v2 1/2] bpf: Replace "want address" users of BPF_CAST_CALL with BPF_CALL_IMM
From: Kees Cook @ 2021-09-28 23:09 UTC
To: Alexei Starovoitov
Cc: Kees Cook, Daniel Borkmann, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, netdev, bpf,
    Gustavo A. R. Silva, Andrii Nakryiko, linux-kernel, linux-hardening

In order to keep ahead of cases in the kernel where Control Flow
Integrity (CFI) may trip over function call casts, enabling
-Wcast-function-type is helpful. To that end, BPF_CAST_CALL causes
various warnings and is one of the last places in the kernel triggering
this warning.

Most places using BPF_CAST_CALL actually just want a void * to perform
math on. It's not actually performing a call, so just use a different
helper to get the void *, by way of the new BPF_CALL_IMM() helper, which
can clean up a common copy/paste idiom as well.

This change results in no object code difference.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://github.com/KSPP/linux/issues/20
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/lkml/CAEf4Bzb46=-J5Fxc3mMZ8JQPtK1uoE0q6+g6WPz53Cvx=CBEhw@mail.gmail.com
---
 include/linux/filter.h |  6 +++++-
 kernel/bpf/hashtab.c   |  6 +++---
 kernel/bpf/verifier.c  | 26 +++++++++-----------------
 lib/test_bpf.c         |  2 +-
 4 files changed, 18 insertions(+), 22 deletions(-)

diff --git a/include/linux/filter.h b/include/linux/filter.h
index 4a93c12543ee..6c247663d4ce 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -365,13 +365,17 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 #define BPF_CAST_CALL(x) \
 	((u64 (*)(u64, u64, u64, u64, u64))(x))
 
+/* Convert function address to BPF immediate */
+
+#define BPF_CALL_IMM(x) ((void *)(x) - (void *)__bpf_call_base)
+
 #define BPF_EMIT_CALL(FUNC) \
 	((struct bpf_insn) { \
 		.code  = BPF_JMP | BPF_CALL, \
 		.dst_reg = 0, \
 		.src_reg = 0, \
 		.off   = 0, \
-		.imm   = ((FUNC) - __bpf_call_base) })
+		.imm   = BPF_CALL_IMM(FUNC) })
 
 /* Raw code statement block */
 
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 32471ba02708..3d8f9d6997d5 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -668,7 +668,7 @@ static int htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
		     (void *(*)(struct bpf_map *map, void *key))NULL));
-	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
+	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 1);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
				offsetof(struct htab_elem, key) +
@@ -709,7 +709,7 @@ static int htab_lru_map_gen_lookup(struct bpf_map *map,
 
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
		     (void *(*)(struct bpf_map *map, void *key))NULL));
-	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
+	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 4);
 	*insn++ = BPF_LDX_MEM(BPF_B, ref_reg, ret,
			      offsetof(struct htab_elem, lru_node) +
@@ -2397,7 +2397,7 @@ static int htab_of_map_gen_lookup(struct bpf_map *map,
 
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
		     (void *(*)(struct bpf_map *map, void *key))NULL));
-	*insn++ = BPF_EMIT_CALL(BPF_CAST_CALL(__htab_map_lookup_elem));
+	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
 	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 2);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
				offsetof(struct htab_elem, key) +
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7a8351604f67..1433752db740 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -1744,7 +1744,7 @@ static int add_kfunc_call(struct bpf_verifier_env *env, u32 func_id)
 
 	desc = &tab->descs[tab->nr_descs++];
 	desc->func_id = func_id;
-	desc->imm = BPF_CAST_CALL(addr) - __bpf_call_base;
+	desc->imm = BPF_CALL_IMM(addr);
 	err = btf_distill_func_proto(&env->log, btf_vmlinux,
				     func_proto, func_name,
				     &desc->func_model);
@@ -12514,8 +12514,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		if (!bpf_pseudo_call(insn))
 			continue;
 		subprog = insn->off;
-		insn->imm = BPF_CAST_CALL(func[subprog]->bpf_func) -
-			    __bpf_call_base;
+		insn->imm = BPF_CALL_IMM(func[subprog]->bpf_func);
 	}
 
 	/* we use the aux data to keep a list of the start addresses
@@ -12995,32 +12994,25 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 patch_map_ops_generic:
 			switch (insn->imm) {
 			case BPF_FUNC_map_lookup_elem:
-				insn->imm = BPF_CAST_CALL(ops->map_lookup_elem) -
-					    __bpf_call_base;
+				insn->imm = BPF_CALL_IMM(ops->map_lookup_elem);
 				continue;
 			case BPF_FUNC_map_update_elem:
-				insn->imm = BPF_CAST_CALL(ops->map_update_elem) -
-					    __bpf_call_base;
+				insn->imm = BPF_CALL_IMM(ops->map_update_elem);
 				continue;
 			case BPF_FUNC_map_delete_elem:
-				insn->imm = BPF_CAST_CALL(ops->map_delete_elem) -
-					    __bpf_call_base;
+				insn->imm = BPF_CALL_IMM(ops->map_delete_elem);
 				continue;
 			case BPF_FUNC_map_push_elem:
-				insn->imm = BPF_CAST_CALL(ops->map_push_elem) -
-					    __bpf_call_base;
+				insn->imm = BPF_CALL_IMM(ops->map_push_elem);
 				continue;
 			case BPF_FUNC_map_pop_elem:
-				insn->imm = BPF_CAST_CALL(ops->map_pop_elem) -
-					    __bpf_call_base;
+				insn->imm = BPF_CALL_IMM(ops->map_pop_elem);
 				continue;
 			case BPF_FUNC_map_peek_elem:
-				insn->imm = BPF_CAST_CALL(ops->map_peek_elem) -
-					    __bpf_call_base;
+				insn->imm = BPF_CALL_IMM(ops->map_peek_elem);
 				continue;
 			case BPF_FUNC_redirect_map:
-				insn->imm = BPF_CAST_CALL(ops->map_redirect) -
-					    __bpf_call_base;
+				insn->imm = BPF_CALL_IMM(ops->map_redirect);
 				continue;
 			}
 
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index 08f438e6fe9e..21ea1ab253a1 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -12439,7 +12439,7 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
 			err = -EFAULT;
 			goto out_err;
 		}
-		*insn = BPF_EMIT_CALL(BPF_CAST_CALL(addr));
+		*insn = BPF_EMIT_CALL(addr);
 		if ((long)__bpf_call_base + insn->imm != addr)
 			*insn = BPF_JMP_A(0); /* Skip: NOP */
 		break;
-- 
2.30.2
* Re: [PATCH bpf-next v2 1/2] bpf: Replace "want address" users of BPF_CAST_CALL with BPF_CALL_IMM
From: Gustavo A. R. Silva @ 2021-09-28 23:25 UTC
To: Kees Cook
Cc: Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, netdev, bpf,
    Andrii Nakryiko, linux-kernel, linux-hardening

On Tue, Sep 28, 2021 at 04:09:45PM -0700, Kees Cook wrote:
> In order to keep ahead of cases in the kernel where Control Flow
> Integrity (CFI) may trip over function call casts, enabling
> -Wcast-function-type is helpful. To that end, BPF_CAST_CALL causes
> various warnings and is one of the last places in the kernel triggering
> this warning.
>
> Most places using BPF_CAST_CALL actually just want a void * to perform
> math on. It's not actually performing a call, so just use a different
> helper to get the void *, by way of the new BPF_CALL_IMM() helper, which
> can clean up a common copy/paste idiom as well.
>
> This change results in no object code difference.
[...]

Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>

Thanks
--
Gustavo
* [PATCH bpf-next v2 2/2] bpf: Replace callers of BPF_CAST_CALL with proper function typedef
From: Kees Cook @ 2021-09-28 23:09 UTC
To: Alexei Starovoitov
Cc: Kees Cook, Daniel Borkmann, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, netdev, bpf,
    Gustavo A. R. Silva, Andrii Nakryiko, linux-kernel, linux-hardening

In order to keep ahead of cases in the kernel where Control Flow
Integrity (CFI) may trip over function call casts, enabling
-Wcast-function-type is helpful. To that end, BPF_CAST_CALL causes
various warnings and is one of the last places in the kernel
triggering this warning.

For actual function calls, replace BPF_CAST_CALL() with a typedef, which
captures the same details about the given function pointers.

This change results in no object code difference.

Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: KP Singh <kpsingh@kernel.org>
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Cc: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://github.com/KSPP/linux/issues/20
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/lkml/CAEf4Bzb46=-J5Fxc3mMZ8JQPtK1uoE0q6+g6WPz53Cvx=CBEhw@mail.gmail.com
---
 include/linux/bpf.h    |  4 +++-
 include/linux/filter.h |  5 -----
 kernel/bpf/arraymap.c  |  7 +++----
 kernel/bpf/hashtab.c   |  7 +++----
 kernel/bpf/helpers.c   |  5 ++---
 5 files changed, 11 insertions(+), 17 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index b6c45a6cbbba..19735d59230a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -48,6 +48,7 @@ extern struct idr btf_idr;
 extern spinlock_t btf_idr_lock;
 extern struct kobject *btf_kobj;
 
+typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64);
 typedef int (*bpf_iter_init_seq_priv_t)(void *private_data,
					struct bpf_iter_aux_info *aux);
 typedef void (*bpf_iter_fini_seq_priv_t)(void *private_data);
@@ -142,7 +143,8 @@ struct bpf_map_ops {
 	int (*map_set_for_each_callback_args)(struct bpf_verifier_env *env,
					      struct bpf_func_state *caller,
					      struct bpf_func_state *callee);
-	int (*map_for_each_callback)(struct bpf_map *map, void *callback_fn,
+	int (*map_for_each_callback)(struct bpf_map *map,
+				     bpf_callback_t callback_fn,
				     void *callback_ctx, u64 flags);
 
 	/* BTF name and id of struct allocated by map_alloc */
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 6c247663d4ce..47f80adbe744 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -360,11 +360,6 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 		.off   = 0, \
 		.imm   = TGT })
 
-/* Function call */
-
-#define BPF_CAST_CALL(x) \
-	((u64 (*)(u64, u64, u64, u64, u64))(x))
-
 /* Convert function address to BPF immediate */
 
 #define BPF_CALL_IMM(x) ((void *)(x) - (void *)__bpf_call_base)
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index cebd4fb06d19..5e1ccfae916b 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -645,7 +645,7 @@ static const struct bpf_iter_seq_info iter_seq_info = {
 	.seq_priv_size		= sizeof(struct bpf_iter_seq_array_map_info),
 };
 
-static int bpf_for_each_array_elem(struct bpf_map *map, void *callback_fn,
+static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn,
				   void *callback_ctx, u64 flags)
 {
 	u32 i, key, num_elems = 0;
@@ -668,9 +668,8 @@ static int bpf_for_each_array_elem(struct bpf_map *map, void *callback_fn,
 		val = array->value + array->elem_size * i;
 		num_elems++;
 		key = i;
-		ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
-					(u64)(long)&key, (u64)(long)val,
-					(u64)(long)callback_ctx, 0);
+		ret = callback_fn((u64)(long)map, (u64)(long)&key,
+				  (u64)(long)val, (u64)(long)callback_ctx, 0);
 		/* return value: 0 - continue, 1 - stop and return */
 		if (ret)
 			break;
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 3d8f9d6997d5..d29af9988f37 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -2049,7 +2049,7 @@ static const struct bpf_iter_seq_info iter_seq_info = {
 	.seq_priv_size		= sizeof(struct bpf_iter_seq_hash_map_info),
 };
 
-static int bpf_for_each_hash_elem(struct bpf_map *map, void *callback_fn,
+static int bpf_for_each_hash_elem(struct bpf_map *map, bpf_callback_t callback_fn,
				  void *callback_ctx, u64 flags)
 {
 	struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
@@ -2089,9 +2089,8 @@ static int bpf_for_each_hash_elem(struct bpf_map *map, void *callback_fn,
 			val = elem->key + roundup_key_size;
 		}
 		num_elems++;
-		ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
-					(u64)(long)key, (u64)(long)val,
-					(u64)(long)callback_ctx, 0);
+		ret = callback_fn((u64)(long)map, (u64)(long)key,
+				  (u64)(long)val, (u64)(long)callback_ctx, 0);
 		/* return value: 0 - continue, 1 - stop and return */
 		if (ret) {
 			rcu_read_unlock();
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 2c604ff8c7fb..1ffd469c217f 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -1056,7 +1056,7 @@ static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
 	struct bpf_hrtimer *t = container_of(hrtimer, struct bpf_hrtimer, timer);
 	struct bpf_map *map = t->map;
 	void *value = t->value;
-	void *callback_fn;
+	bpf_callback_t callback_fn;
 	void *key;
 	u32 idx;
 
@@ -1081,8 +1081,7 @@ static enum hrtimer_restart bpf_timer_cb(struct hrtimer *hrtimer)
 		key = value - round_up(map->key_size, 8);
 	}
 
-	BPF_CAST_CALL(callback_fn)((u64)(long)map, (u64)(long)key,
-				   (u64)(long)value, 0, 0);
+	callback_fn((u64)(long)map, (u64)(long)key, (u64)(long)value, 0, 0);
 	/* The verifier checked that return value is zero. */
 
 	this_cpu_write(hrtimer_running, NULL);
-- 
2.30.2
* Re: [PATCH bpf-next v2 2/2] bpf: Replace callers of BPF_CAST_CALL with proper function typedef
From: Gustavo A. R. Silva @ 2021-09-28 23:26 UTC
To: Kees Cook
Cc: Alexei Starovoitov, Daniel Borkmann, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, netdev, bpf,
    Andrii Nakryiko, linux-kernel, linux-hardening

On Tue, Sep 28, 2021 at 04:09:46PM -0700, Kees Cook wrote:
> In order to keep ahead of cases in the kernel where Control Flow
> Integrity (CFI) may trip over function call casts, enabling
> -Wcast-function-type is helpful. To that end, BPF_CAST_CALL causes
> various warnings and is one of the last places in the kernel
> triggering this warning.
>
> For actual function calls, replace BPF_CAST_CALL() with a typedef, which
> captures the same details about the given function pointers.
>
> This change results in no object code difference.
[...]

Acked-by: Gustavo A. R. Silva <gustavoars@kernel.org>

Thanks
--
Gustavo
* RE: [PATCH bpf-next v2 2/2] bpf: Replace callers of BPF_CAST_CALL with proper function typedef
From: David Laight @ 2021-10-02 16:09 UTC
To: 'Kees Cook', Alexei Starovoitov
Cc: Daniel Borkmann, Martin KaFai Lau, Song Liu, Yonghong Song,
    John Fastabend, KP Singh, netdev, bpf, Gustavo A. R. Silva,
    Andrii Nakryiko, linux-kernel, linux-hardening

From: Kees Cook
> Sent: 29 September 2021 00:10
...
> In order to keep ahead of cases in the kernel where Control Flow
> Integrity (CFI) may trip over function call casts, enabling
> -Wcast-function-type is helpful. To that end, BPF_CAST_CALL causes
> various warnings and is one of the last places in the kernel
> triggering this warning.
...
> -static int bpf_for_each_array_elem(struct bpf_map *map, void *callback_fn,
> +static int bpf_for_each_array_elem(struct bpf_map *map, bpf_callback_t callback_fn,
> 				   void *callback_ctx, u64 flags)
> {
> 	u32 i, key, num_elems = 0;
> @@ -668,9 +668,8 @@ static int bpf_for_each_array_elem(struct bpf_map *map, void *callback_fn,
> 		val = array->value + array->elem_size * i;
> 		num_elems++;
> 		key = i;
> -		ret = BPF_CAST_CALL(callback_fn)((u64)(long)map,
> -					(u64)(long)&key, (u64)(long)val,
> -					(u64)(long)callback_ctx, 0);
> +		ret = callback_fn((u64)(long)map, (u64)(long)&key,
> +				  (u64)(long)val, (u64)(long)callback_ctx, 0);
> 		/* return value: 0 - continue, 1 - stop and return */
> 		if (ret)
> 			break;

This is still entirely horrid and potentially error-prone.

While a callback function seems a nice idea, the code is almost always
better, and much easier to read, if some kind of iterator function is
used so that the calling code is just a simple loop.
This is true even if you need a #define for the loop end.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
* Re: [PATCH bpf-next v2 0/2] bpf: Build with -Wcast-function-type
From: Alexei Starovoitov @ 2021-09-28 23:33 UTC
To: Kees Cook
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
    KP Singh, LKML, Network Development, bpf, linux-hardening

On Tue, Sep 28, 2021 at 4:09 PM Kees Cook <keescook@chromium.org> wrote:
>
> Hi,
>
> In order to keep ahead of cases in the kernel where Control Flow Integrity
> (CFI) may trip over function call casts, enabling -Wcast-function-type
> is helpful. To that end, replace BPF_CAST_CALL() as it triggers warnings
> with this option and is now one of the last places in the kernel in need
> of fixing.
>
> Thanks,
>
> -Kees
>
> v2:
> - rebase to bpf-next
> - add acks
> v1: https://lore.kernel.org/lkml/20210927182700.2980499-1-keescook@chromium.org

Applied. Thanks