From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yonghong Song
Subject: Re: [PATCH bpf-next v6 02/10] bpf: add bpf_get_stack helper
Date: Wed, 25 Apr 2018 09:44:54 -0700
Message-ID: <9cfc8ec2-2c6a-8cfd-1293-89b0c486379d@fb.com>
References: <20180423212752.986580-1-yhs@fb.com> <20180423212752.986580-3-yhs@fb.com> <89ee3446-8119-8bf4-ee69-bd43e4167a2e@iogearbox.net>
Mime-Version: 1.0
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 7bit
Cc:
To: Daniel Borkmann , , ,
Return-path:
Received: from mx0a-00082601.pphosted.com ([67.231.145.42]:35694 "EHLO mx0a-00082601.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1754585AbeDYQp3 (ORCPT ); Wed, 25 Apr 2018 12:45:29 -0400
In-Reply-To: <89ee3446-8119-8bf4-ee69-bd43e4167a2e@iogearbox.net>
Content-Language: en-US
Sender: netdev-owner@vger.kernel.org
List-ID:

On 4/25/18 2:00 AM, Daniel Borkmann wrote:
> On 04/23/2018 11:27 PM, Yonghong Song wrote:
>> Currently, the stackmap and the bpf_get_stackid helper are provided
>> for bpf programs to get the stack trace. This approach has a
>> limitation though: if two stack traces have the same hash, only one
>> will get stored in the stackmap table, so some stack traces are
>> missing from the user's perspective.
>>
>> This patch implements a new helper, bpf_get_stack, which returns
>> stack traces directly to the bpf program. The bpf program is able
>> to see all stack traces, and can then do in-kernel processing or
>> send the stack traces to user space through a shared map or
>> bpf_perf_event_output.
>>
>> Acked-by: Alexei Starovoitov
>> Signed-off-by: Yonghong Song
> [...]
>> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
>> index d315b39..bf22eca 100644
>> --- a/kernel/bpf/core.c
>> +++ b/kernel/bpf/core.c
>> @@ -31,6 +31,7 @@
>>  #include
>>  #include
>>  #include
>> +#include
>>
>>  #include
>>
>> @@ -1709,6 +1710,10 @@ static void bpf_prog_free_deferred(struct work_struct *work)
>>  	aux = container_of(work, struct bpf_prog_aux, work);
>>  	if (bpf_prog_is_dev_bound(aux))
>>  		bpf_prog_offload_destroy(aux->prog);
>> +#ifdef CONFIG_PERF_EVENTS
>> +	if (aux->prog->need_callchain_buf)
>> +		put_callchain_buffers();
>> +#endif
>>  	for (i = 0; i < aux->func_cnt; i++)
>>  		bpf_jit_free(aux->func[i]);
>>  	if (aux->func_cnt) {
> [...]
>> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
>> index fe23dc5a..1ee71f6 100644
>> --- a/kernel/bpf/syscall.c
>> +++ b/kernel/bpf/syscall.c
>> @@ -1360,6 +1360,16 @@ static int bpf_prog_load(union bpf_attr *attr)
>>  	if (err)
>>  		goto free_used_maps;
>>
>> +	if (prog->need_callchain_buf) {
>> +#ifdef CONFIG_PERF_EVENTS
>> +		err = get_callchain_buffers(sysctl_perf_event_max_stack);
>> +#else
>> +		err = -ENOTSUPP;
>> +#endif
>> +		if (err)
>> +			goto free_used_maps;
>> +	}
>> +
>>  	err = bpf_prog_new_fd(prog);
>>  	if (err < 0) {
>>  		/* failed to allocate fd.
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 5dd1dcb..aba9425 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -2460,6 +2460,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
>>  	if (err)
>>  		return err;
>>
>> +	if (func_id == BPF_FUNC_get_stack)
>> +		env->prog->need_callchain_buf = true;
>> +
>>  	if (changes_data)
>>  		clear_all_pkt_pointers(env);
>>  	return 0;
>
> The above three hunks will cause a use-after-free on the perf callchain buffers.
>
> In check_helper_call() you mark the prog with need_callchain_buf, where the
> program hasn't fully completed the verification phase yet, meaning some buggy
> prog will still bail out.
>
> However, you do the get_callchain_buffers() at a much later phase, so when
> you bail out with an error from bpf_check(), you take the free_used_maps
> error path, which calls bpf_prog_free().
>
> The latter calls into bpf_prog_free_deferred(), where you do the
> put_callchain_buffers() since need_callchain_buf is marked, but without a
> prior get_callchain_buffers().

Thanks for catching this! Yes, this is a real issue. I will fix it and respin.