* [PATCH bpf 0/2] bpf, fix for subprogs with tailcalls
@ 2021-06-30 19:40 John Fastabend
  2021-06-30 19:40 ` [PATCH bpf 1/2] bpf: track subprog poke correctly, fix use-after-free John Fastabend
  2021-06-30 19:40 ` [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch John Fastabend
  0 siblings, 2 replies; 9+ messages in thread
From: John Fastabend @ 2021-06-30 19:40 UTC
To: maciej.fijalkowski, ast, daniel, andriin; +Cc: john.fastabend, bpf, netdev

This fixes a use-after-free when using subprogs and tailcalls, and adds
a test case to trigger the use-after-free.

John Fastabend (2):
  bpf: track subprog poke correctly, fix use-after-free
  bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn
    patch

 arch/x86/net/bpf_jit_comp.c                   |  4 ++
 include/linux/bpf.h                           |  1 +
 kernel/bpf/core.c                             |  7 +++-
 kernel/bpf/verifier.c                         | 39 ++++---------------
 .../selftests/bpf/prog_tests/tailcalls.c      | 36 ++++++++++++-----
 .../selftests/bpf/progs/tailcall_bpf2bpf4.c   | 21 +++++++++-
 6 files changed, 64 insertions(+), 44 deletions(-)

-- 
2.25.1
* [PATCH bpf 1/2] bpf: track subprog poke correctly, fix use-after-free
  2021-06-30 19:40 [PATCH bpf 0/2] bpf, fix for subprogs with tailcalls John Fastabend
@ 2021-06-30 19:40 ` John Fastabend
  2021-07-06  8:10   ` Daniel Borkmann
  2021-06-30 19:40 ` [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch John Fastabend
  1 sibling, 1 reply; 9+ messages in thread
From: John Fastabend @ 2021-06-30 19:40 UTC
To: maciej.fijalkowski, ast, daniel, andriin; +Cc: john.fastabend, bpf, netdev

Subprograms call map_poke_track, but on program release there is no hook
to call map_poke_untrack. On prog release the aux memory is freed even
though we still hold a reference to it in the element list of the map
aux data. So when we run map_poke_run() we end up accessing freed
memory. This triggers with KASAN in prog_array_map_poke_run(), shown
here:

 [  402.824686] ==================================================================
 [  402.824689] BUG: KASAN: use-after-free in prog_array_map_poke_run+0xc2/0x34e
 [  402.824698] Read of size 4 at addr ffff8881905a7940 by task hubble-fgs/4337
 [  402.824705] CPU: 1 PID: 4337 Comm: hubble-fgs Tainted: G          I       5.12.0+ #399
 [  402.824715] Call Trace:
 [  402.824719]  dump_stack+0x93/0xc2
 [  402.824727]  print_address_description.constprop.0+0x1a/0x140
 [  402.824736]  ? prog_array_map_poke_run+0xc2/0x34e
 [  402.824740]  ? prog_array_map_poke_run+0xc2/0x34e
 [  402.824744]  kasan_report.cold+0x7c/0xd8
 [  402.824752]  ? prog_array_map_poke_run+0xc2/0x34e
 [  402.824757]  prog_array_map_poke_run+0xc2/0x34e
 [  402.824765]  bpf_fd_array_map_update_elem+0x124/0x1a0

The elements concerned are walked like this:

     for (i = 0; i < elem->aux->size_poke_tab; i++) {
            poke = &elem->aux->poke_tab[i];

So the access to size_poke_tab is the 4-byte read, verified by checking
offsets in the KASAN dump:

 [  402.825004] The buggy address belongs to the object at ffff8881905a7800
                which belongs to the cache kmalloc-1k of size 1024
 [  402.825008] The buggy address is located 320 bytes inside of
                1024-byte region [ffff8881905a7800, ffff8881905a7c00)

With pahole output:

 struct bpf_prog_aux {
 ...
        /* --- cacheline 5 boundary (320 bytes) --- */
        u32                        size_poke_tab;        /*   320     4 */
 ...

In general, subprograms do not manage their own data structures. For
example btf, func_info, linfo, etc. are just pointers into the main prog
structure. This allows reference counting and cleanup to be done on the
main prog. The aux->poke_tab struct, however, did not follow this logic.
The initial fix for the above use-after-free further embedded poke data
tracking into the subprogram, with proper reference counting. However,
Daniel and Alexei questioned why we were treating these objects
specially. I agree it's unnecessary. The fix here removes the
per-subprogram poke data structure alloc and map tracking, and instead
simply points the aux->poke_tab pointer at the main program's poke_tab.
This way map tracking is done on the original program and we do not need
to manage it per subprogram.

A couple of small complications arise here. First, in
bpf_prog_free_deferred(), where we unwind the prog reference counting
and kfree objects, we need to ensure that we don't try to double free
the poke_tab when freeing the subprog structures. This is easily solved
by NULL'ing the poke_tab pointer. The second detail is to ensure that
per-subprog JIT logic only does fixups on the poke_tab[] entries it
owns. To do this we add a pointer in the poke structure that points at
the subprog's aux, so JITs can easily check, while walking the poke_tab
structure, whether the current entry belongs to the current program.
This change is necessary per JIT; see bpf_tail_call_direct_fixup() in
arch/x86/net/bpf_jit_comp.c for the details. Only x86 currently uses the
poke_tab struct, so we only need to fix up the x86 JIT.

Fixes: a748c6975dea3 ("bpf: propagate poke descriptors to subprograms")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 arch/x86/net/bpf_jit_comp.c |  4 ++++
 include/linux/bpf.h         |  1 +
 kernel/bpf/core.c           |  7 ++++++-
 kernel/bpf/verifier.c       | 39 +++++++------------------------------
 4 files changed, 18 insertions(+), 33 deletions(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 2a2e290fa5d8..ce8dbc9310a9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -576,6 +576,10 @@ static void bpf_tail_call_direct_fixup(struct bpf_prog *prog)
 
 	for (i = 0; i < prog->aux->size_poke_tab; i++) {
 		poke = &prog->aux->poke_tab[i];
+
+		if (poke->aux && poke->aux != prog->aux)
+			continue;
+
 		WARN_ON_ONCE(READ_ONCE(poke->tailcall_target_stable));
 
 		if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 02b02cb29ce2..a7532cb3493a 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -777,6 +777,7 @@ struct bpf_jit_poke_descriptor {
 	void *tailcall_target;
 	void *tailcall_bypass;
 	void *bypass_addr;
+	void *aux;
 	union {
 		struct {
 			struct bpf_map *map;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5e31ee9f7512..72810314c43b 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2211,8 +2211,13 @@ static void bpf_prog_free_deferred(struct work_struct *work)
 #endif
 	if (aux->dst_trampoline)
 		bpf_trampoline_put(aux->dst_trampoline);
-	for (i = 0; i < aux->func_cnt; i++)
+	for (i = 0; i < aux->func_cnt; i++) {
+		/* poke_tab in subprogs are links to main prog and are
+		 * freed above so delete link without kfree.
+		 */
+		aux->func[i]->aux->poke_tab = NULL;
 		bpf_jit_free(aux->func[i]);
+	}
 	if (aux->func_cnt) {
 		kfree(aux->func);
 		bpf_prog_unlock_free(aux->prog);
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 6e2ebcb0d66f..daa5a3f5e7b8 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12109,30 +12109,17 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		/* the btf and func_info will be freed only at prog->aux */
 		func[i]->aux->btf = prog->aux->btf;
 		func[i]->aux->func_info = prog->aux->func_info;
+		func[i]->aux->poke_tab = prog->aux->poke_tab;
+		func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;
 
 		for (j = 0; j < prog->aux->size_poke_tab; j++) {
-			u32 insn_idx = prog->aux->poke_tab[j].insn_idx;
-			int ret;
+			struct bpf_jit_poke_descriptor *poke;
 
-			if (!(insn_idx >= subprog_start &&
-			      insn_idx <= subprog_end))
-				continue;
-
-			ret = bpf_jit_add_poke_descriptor(func[i],
-							  &prog->aux->poke_tab[j]);
-			if (ret < 0) {
-				verbose(env, "adding tail call poke descriptor failed\n");
-				goto out_free;
-			}
+			poke = &prog->aux->poke_tab[j];
 
-			func[i]->insnsi[insn_idx - subprog_start].imm = ret + 1;
-
-			map_ptr = func[i]->aux->poke_tab[ret].tail_call.map;
-			ret = map_ptr->ops->map_poke_track(map_ptr, func[i]->aux);
-			if (ret < 0) {
-				verbose(env, "tracking tail call prog failed\n");
-				goto out_free;
-			}
+			if (poke->insn_idx < subprog_end &&
+			    poke->insn_idx >= subprog_start)
+				poke->aux = func[i]->aux;
 		}
 
 		/* Use bpf_prog_F_tag to indicate functions in stack traces.
@@ -12163,18 +12150,6 @@ static int jit_subprogs(struct bpf_verifier_env *env)
 		cond_resched();
 	}
 
-	/* Untrack main program's aux structs so that during map_poke_run()
-	 * we will not stumble upon the unfilled poke descriptors; each
-	 * of the main program's poke descs got distributed across subprogs
-	 * and got tracked onto map, so we are sure that none of them will
-	 * be missed after the operation below
-	 */
-	for (i = 0; i < prog->aux->size_poke_tab; i++) {
-		map_ptr = prog->aux->poke_tab[i].tail_call.map;
-
-		map_ptr->ops->map_poke_untrack(map_ptr, prog->aux);
-	}
-
 	/* at this point all bpf functions were successfully JITed
 	 * now populate all bpf_calls with correct addresses and
 	 * run last pass of JIT
-- 
2.25.1
* Re: [PATCH bpf 1/2] bpf: track subprog poke correctly, fix use-after-free
  2021-06-30 19:40 ` [PATCH bpf 1/2] bpf: track subprog poke correctly, fix use-after-free John Fastabend
@ 2021-07-06  8:10   ` Daniel Borkmann
  0 siblings, 0 replies; 9+ messages in thread
From: Daniel Borkmann @ 2021-07-06  8:10 UTC
To: John Fastabend, maciej.fijalkowski, ast, andriin; +Cc: bpf, netdev

On 6/30/21 9:40 PM, John Fastabend wrote:
[...]
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 6e2ebcb0d66f..daa5a3f5e7b8 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -12109,30 +12109,17 @@ static int jit_subprogs(struct bpf_verifier_env *env)
>  		/* the btf and func_info will be freed only at prog->aux */
>  		func[i]->aux->btf = prog->aux->btf;
>  		func[i]->aux->func_info = prog->aux->func_info;
> +		func[i]->aux->poke_tab = prog->aux->poke_tab;
> +		func[i]->aux->size_poke_tab = prog->aux->size_poke_tab;
>  
>  		for (j = 0; j < prog->aux->size_poke_tab; j++) {
> -			u32 insn_idx = prog->aux->poke_tab[j].insn_idx;
> -			int ret;
> +			struct bpf_jit_poke_descriptor *poke;
>  
> -			if (!(insn_idx >= subprog_start &&
> -			      insn_idx <= subprog_end))
> -				continue;
> -
> -			ret = bpf_jit_add_poke_descriptor(func[i],
> -							  &prog->aux->poke_tab[j]);
> -			if (ret < 0) {
> -				verbose(env, "adding tail call poke descriptor failed\n");
> -				goto out_free;
> -			}
> +			poke = &prog->aux->poke_tab[j];
>  
> -			func[i]->insnsi[insn_idx - subprog_start].imm = ret + 1;
> -
> -			map_ptr = func[i]->aux->poke_tab[ret].tail_call.map;
> -			ret = map_ptr->ops->map_poke_track(map_ptr, func[i]->aux);
> -			if (ret < 0) {
> -				verbose(env, "tracking tail call prog failed\n");
> -				goto out_free;
> -			}
> +			if (poke->insn_idx < subprog_end &&
> +			    poke->insn_idx >= subprog_start)
> +				poke->aux = func[i]->aux;
>  		}

This still has a bug which leads to a use-after-free. Check the out_free
label:

  out_free:
	for (i = 0; i < env->subprog_cnt; i++) {
		if (!func[i])
			continue;
		for (j = 0; j < func[i]->aux->size_poke_tab; j++) {
			map_ptr = func[i]->aux->poke_tab[j].tail_call.map;
			map_ptr->ops->map_poke_untrack(map_ptr, func[i]->aux);
		}
		bpf_jit_free(func[i]);
	}
	kfree(func);

Neither map_poke_untrack() belongs in there, nor does bpf_jit_free() with
a non-NULL func[i]->aux->poke_tab (given the table belongs to the main
prog instead).

[...]
* [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch
  2021-06-30 19:40 [PATCH bpf 0/2] bpf, fix for subprogs with tailcalls John Fastabend
  2021-06-30 19:40 ` [PATCH bpf 1/2] bpf: track subprog poke correctly, fix use-after-free John Fastabend
@ 2021-06-30 19:40 ` John Fastabend
  1 sibling, 0 replies; 9+ messages in thread
From: John Fastabend @ 2021-06-30 19:40 UTC
To: maciej.fijalkowski, ast, daniel, andriin; +Cc: john.fastabend, bpf, netdev

This adds some extra noise to the tailcall_bpf2bpf4 tests that will
cause the verifier to patch insns. This then moves around the subprog
start/end insn index and poke descriptor insn index, to ensure that the
verifier and JIT continue to track these correctly. If done correctly,
the verifier should pass this program the same as before, and the JIT
should emit tail call logic.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 .../selftests/bpf/prog_tests/tailcalls.c    | 36 +++++++++++++------
 .../selftests/bpf/progs/tailcall_bpf2bpf4.c | 21 ++++++++++-
 2 files changed, 46 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index ee27d68d2a1c..b5940e6ca67c 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -715,6 +715,8 @@ static void test_tailcall_bpf2bpf_3(void)
 	bpf_object__close(obj);
 }
 
+#include "tailcall_bpf2bpf4.skel.h"
+
 /* test_tailcall_bpf2bpf_4 checks that tailcall counter is correctly preserved
  * across tailcalls combined with bpf2bpf calls. for making sure that tailcall
  * counter behaves correctly, bpf program will go through following flow:
@@ -727,10 +729,15 @@ static void test_tailcall_bpf2bpf_3(void)
  * the loop begins. At the end of the test make sure that the global counter is
  * equal to 31, because tailcall counter includes the first two tailcalls
  * whereas global counter is incremented only on loop presented on flow above.
+ *
+ * The noise parameter is used to insert bpf_map_update calls into the logic
+ * to force verifier to patch instructions. This allows us to ensure jump
+ * logic remains correct with instruction movement.
  */
-static void test_tailcall_bpf2bpf_4(void)
+static void test_tailcall_bpf2bpf_4(bool noise)
 {
-	int err, map_fd, prog_fd, main_fd, data_fd, i, val;
+	int err, map_fd, prog_fd, main_fd, data_fd, i;
+	struct tailcall_bpf2bpf4__bss val;
 	struct bpf_map *prog_array, *data_map;
 	struct bpf_program *prog;
 	struct bpf_object *obj;
@@ -774,11 +781,6 @@ static void test_tailcall_bpf2bpf_4(void)
 		goto out;
 	}
 
-	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
-				&duration, &retval, NULL);
-	CHECK(err || retval != sizeof(pkt_v4) * 3, "tailcall", "err %d errno %d retval %d\n",
-	      err, errno, retval);
-
 	data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
 	if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
 		return;
@@ -787,10 +789,22 @@ static void test_tailcall_bpf2bpf_4(void)
 	if (CHECK_FAIL(map_fd < 0))
 		return;
 
+	i = 0;
+	val.noise = noise;
+	val.count = 0;
+	err = bpf_map_update_elem(data_fd, &i, &val, BPF_ANY);
+	if (CHECK_FAIL(err))
+		goto out;
+
+	err = bpf_prog_test_run(main_fd, 1, &pkt_v4, sizeof(pkt_v4), 0,
+				&duration, &retval, NULL);
+	CHECK(err || retval != sizeof(pkt_v4) * 3, "tailcall", "err %d errno %d retval %d\n",
+	      err, errno, retval);
+
 	i = 0;
 	err = bpf_map_lookup_elem(data_fd, &i, &val);
-	CHECK(err || val != 31, "tailcall count", "err %d errno %d count %d\n",
-	      err, errno, val);
+	CHECK(err || val.count != 31, "tailcall count", "err %d errno %d count %d\n",
+	      err, errno, val.count);
 
 out:
 	bpf_object__close(obj);
@@ -815,5 +829,7 @@ void test_tailcalls(void)
 	if (test__start_subtest("tailcall_bpf2bpf_3"))
 		test_tailcall_bpf2bpf_3();
 	if (test__start_subtest("tailcall_bpf2bpf_4"))
-		test_tailcall_bpf2bpf_4();
+		test_tailcall_bpf2bpf_4(false);
+	if (test__start_subtest("tailcall_bpf2bpf_5"))
+		test_tailcall_bpf2bpf_4(true);
 }
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c
index 9a1b166b7fbe..6242803dabde 100644
--- a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c
@@ -2,6 +2,13 @@
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
 
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} nop_table SEC(".maps");
+
 struct {
 	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
 	__uint(max_entries, 3);
@@ -9,11 +16,23 @@ struct {
 	__uint(value_size, sizeof(__u32));
 } jmp_table SEC(".maps");
 
-static volatile int count;
+int count = 0;
+int noise = 0;
+
+__always_inline int subprog_noise(void)
+{
+	__u32 key = 0;
+
+	bpf_printk("hello noisy subprog %d\n", key);
+	bpf_map_lookup_elem(&nop_table, &key);
+	return 0;
+}
 
 __noinline
 int subprog_tail_2(struct __sk_buff *skb)
 {
+	if (noise)
+		subprog_noise();
 	bpf_tail_call_static(skb, &jmp_table, 2);
 	return skb->len * 3;
 }
-- 
2.25.1
* [PATCH bpf 0/2] bpf fix for mixed tail calls and subprograms
@ 2021-06-08 19:29 John Fastabend
  2021-06-08 19:30 ` [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch John Fastabend
  0 siblings, 1 reply; 9+ messages in thread
From: John Fastabend @ 2021-06-08 19:29 UTC
To: ast, daniel, andriin; +Cc: john.fastabend, netdev, bpf, maciej.fijalkowski

We recently tried to use mixed programs that have both tail calls and
subprograms, but it needs the attached fix. Also added a small test
addition that will cause the failure without the fix.

Thanks,
John

---

John Fastabend (2):
  bpf: Fix null ptr deref with mixed tail calls and subprogs
  bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn
    patch

 .../selftests/bpf/progs/tailcall_bpf2bpf4.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

-- 
* [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch
  2021-06-08 19:29 [PATCH bpf 0/2] bpf fix for mixed tail calls and subprograms John Fastabend
@ 2021-06-08 19:30 ` John Fastabend
  2021-06-09  6:22   ` Yonghong Song
  2021-06-09 15:57   ` Maciej Fijalkowski
  0 siblings, 2 replies; 9+ messages in thread
From: John Fastabend @ 2021-06-08 19:30 UTC
To: ast, daniel, andriin; +Cc: john.fastabend, netdev, bpf, maciej.fijalkowski

This adds some extra noise to the tailcall_bpf2bpf4 tests that will
cause the verifier to patch insns. This then moves around the subprog
start/end insn index and poke descriptor insn index, to ensure that the
verifier and JIT continue to track these correctly.

Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
---
 .../selftests/bpf/progs/tailcall_bpf2bpf4.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c
index 9a1b166b7fbe..0d70de5f97e2 100644
--- a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf4.c
@@ -2,6 +2,13 @@
 #include <linux/bpf.h>
 #include <bpf/bpf_helpers.h>
 
+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__uint(key_size, sizeof(__u32));
+	__uint(value_size, sizeof(__u32));
+} nop_table SEC(".maps");
+
 struct {
 	__uint(type, BPF_MAP_TYPE_PROG_ARRAY);
 	__uint(max_entries, 3);
@@ -11,9 +18,19 @@ struct {
 
 static volatile int count;
 
+__noinline
+int subprog_noise(struct __sk_buff *skb)
+{
+	__u32 key = 0;
+
+	bpf_map_lookup_elem(&nop_table, &key);
+	return 0;
+}
+
 __noinline
 int subprog_tail_2(struct __sk_buff *skb)
 {
+	subprog_noise(skb);
 	bpf_tail_call_static(skb, &jmp_table, 2);
 	return skb->len * 3;
 }
* Re: [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch
  2021-06-08 19:30 ` [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch John Fastabend
@ 2021-06-09  6:22   ` Yonghong Song
  2021-06-09 15:57   ` Maciej Fijalkowski
  0 siblings, 0 replies; 9+ messages in thread
From: Yonghong Song @ 2021-06-09  6:22 UTC
To: John Fastabend, ast, daniel, andriin; +Cc: netdev, bpf, maciej.fijalkowski

On 6/8/21 12:30 PM, John Fastabend wrote:
> This adds some extra noise to the tailcall_bpf2bpf4 tests that will cause
> verifier to patch insns. This then moves around subprog start/end insn
> index and poke descriptor insn index to ensure that verify and JIT will
> continue to track these correctly.
>
> Reviewed-by: Daniel Borkmann <daniel@iogearbox.net>
> Signed-off-by: John Fastabend <john.fastabend@gmail.com>

Acked-by: Yonghong Song <yhs@fb.com>
* Re: [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch
  2021-06-08 19:30 ` [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch John Fastabend
  2021-06-09  6:22   ` Yonghong Song
@ 2021-06-09 15:57   ` Maciej Fijalkowski
  2021-06-09 16:26     ` John Fastabend
  1 sibling, 1 reply; 9+ messages in thread
From: Maciej Fijalkowski @ 2021-06-09 15:57 UTC
To: John Fastabend; +Cc: ast, daniel, andriin, netdev, bpf

On Tue, Jun 08, 2021 at 12:30:33PM -0700, John Fastabend wrote:
> This adds some extra noise to the tailcall_bpf2bpf4 tests that will cause
> verifier to patch insns. This then moves around subprog start/end insn
> index and poke descriptor insn index to ensure that verify and JIT will
> continue to track these correctly.

This test is the most complicated one, and I tried to document its scope
on the prog_tests/tailcalls.c side. I feel it would be more difficult to
debug if, under any circumstances, something were broken with that logic.

Maybe a separate test scenario? Or is this overkill? If so, I would vote
for moving it to tailcall_bpf2bpf1.c and having a little comment that
testing other bpf helpers mixed in is in the scope of that test.

[...]
* Re: [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch
  2021-06-09 15:57 ` Maciej Fijalkowski
@ 2021-06-09 16:26   ` John Fastabend
  2021-06-09 16:21     ` Maciej Fijalkowski
  0 siblings, 1 reply; 9+ messages in thread
From: John Fastabend @ 2021-06-09 16:26 UTC
To: Maciej Fijalkowski, John Fastabend; +Cc: ast, daniel, andriin, netdev, bpf

Maciej Fijalkowski wrote:
> On Tue, Jun 08, 2021 at 12:30:33PM -0700, John Fastabend wrote:
> > This adds some extra noise to the tailcall_bpf2bpf4 tests that will cause
> > verifier to patch insns. This then moves around subprog start/end insn
> > index and poke descriptor insn index to ensure that verify and JIT will
> > continue to track these correctly.
>
> This test is the most complicated one where I tried to document the scope
> of it on the side of prog_tests/tailcalls.c. I feel that it would make it
> more difficult to debug it if under any circumstances something would have
> been broken with that logic.
>
> Maybe a separate test scenario? Or is this an overkill? If so, I would
> vote for moving it to tailcall_bpf2bpf1.c and have a little comment that
> testing other bpf helpers mixed in is in scope of that test.

I like pushing it into the complex test to get the most instruction
patching combinations possible.

[...]
* Re: [PATCH bpf 2/2] bpf: selftest to verify mixing bpf2bpf calls and tailcalls with insn patch
  2021-06-09 16:26 ` John Fastabend
@ 2021-06-09 16:21   ` Maciej Fijalkowski
  0 siblings, 0 replies; 9+ messages in thread
From: Maciej Fijalkowski @ 2021-06-09 16:21 UTC
To: John Fastabend; +Cc: ast, daniel, andriin, netdev, bpf

On Wed, Jun 09, 2021 at 09:26:01AM -0700, John Fastabend wrote:
> Maciej Fijalkowski wrote:
> > This test is the most complicated one where I tried to document the scope
> > of it on the side of prog_tests/tailcalls.c. I feel that it would make it
> > more difficult to debug it if under any circumstances something would have
> > been broken with that logic.
> >
> > Maybe a separate test scenario? Or is this an overkill? If so, I would
> > vote for moving it to tailcall_bpf2bpf1.c and have a little comment that
> > testing other bpf helpers mixed in is in scope of that test.
>
> I like pushing it into the complex test to get the most instruction
> patching combinations possible.

Makes sense after a second thought; that was the intention of that test
case, to squeeze the feature out here. I would still ask to have it
commented on the prog_tests/tailcalls.c side, WDYT?

[...]