From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on
	aws-us-west-2-korg-lkml-1.web.codeaurora.org
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 49371C43334
	for ; Mon, 27 Jun 2022 11:41:54 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S237054AbiF0Llx (ORCPT );
	Mon, 27 Jun 2022 07:41:53 -0400
Received: from lindbergh.monkeyblade.net ([23.128.96.19]:33620 "EHLO
	lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S236802AbiF0LiL (ORCPT );
	Mon, 27 Jun 2022 07:38:11 -0400
Received: from ams.source.kernel.org (ams.source.kernel.org [145.40.68.75])
	by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 948AABC2A;
	Mon, 27 Jun 2022 04:34:55 -0700 (PDT)
Received: from smtp.kernel.org (relay.kernel.org [52.25.139.140])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by ams.source.kernel.org (Postfix) with ESMTPS id 1FA2FB8111D;
	Mon, 27 Jun 2022 11:34:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8CBF5C385A5;
	Mon, 27 Jun 2022 11:34:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1656329692;
	bh=V2lgXS6XgffKquji+g8xrjN4kwR7HHexW31+Up0ZesY=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=QMc5FKRoVXWBnegfx+9tsCVUNGzLIUiXxjc8YpCz+iNXww/HHS/SI1K9JtSD+1b5f
	 raZz+pSlbzPZfamsmnMqjnrvDM+kzH+e+NIRfmhAy6LXWMcCRxtF1f9s5OC3QOMUEB
	 IWlU36C7Q7bO7WEmcFD3S3dFVl7kyWrA/mpY5SOQ=
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman,
	stable@vger.kernel.org,
	Jakub Sitnicki,
	Daniel Borkmann,
	Maciej Fijalkowski,
	Sasha Levin
Subject: [PATCH 5.15 043/135] bpf, x86: Fix tail call count offset calculation on bpf2bpf call
Date: Mon, 27 Jun 2022 13:20:50 +0200
Message-Id: <20220627111939.410339370@linuxfoundation.org>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220627111938.151743692@linuxfoundation.org>
References: <20220627111938.151743692@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jakub Sitnicki

[ Upstream commit ff672c67ee7635ca1e28fb13729e8ef0d1f08ce5 ]

On x86-64 the tail call count is passed from one BPF function to another
through %rax. Additionally, on function entry, the tail call count value
is stored on stack right after the BPF program stack, due to register
shortage.

The stored count is later loaded from stack either when performing a tail
call - to check if we have not reached the tail call limit - or before
calling another BPF function in order to pass it via %rax.

In the latter case, we miscalculate the offset at which the tail call count
was stored on function entry. The JIT does not take into account that the
allocated BPF program stack is always a multiple of 8 on x86, while the
actual stack depth does not have to be.
This leads to a load from an offset that belongs to the BPF stack, as shown
in the example below:

SEC("tc")
int entry(struct __sk_buff *skb)
{
	/* Have data on stack which size is not a multiple of 8 */
	volatile char arr[1] = {};

	return subprog_tail(skb);
}

int entry(struct __sk_buff * skb):
   0: (b4) w2 = 0
   1: (73) *(u8 *)(r10 -1) = r2
   2: (85) call pc+1#bpf_prog_ce2f79bb5f3e06dd_F
   3: (95) exit

int entry(struct __sk_buff * skb):
0xffffffffa0201788:  nop    DWORD PTR [rax+rax*1+0x0]
0xffffffffa020178d:  xor    eax,eax
0xffffffffa020178f:  push   rbp
0xffffffffa0201790:  mov    rbp,rsp
0xffffffffa0201793:  sub    rsp,0x8
0xffffffffa020179a:  push   rax
0xffffffffa020179b:  xor    esi,esi
0xffffffffa020179d:  mov    BYTE PTR [rbp-0x1],sil
0xffffffffa02017a1:  mov    rax,QWORD PTR [rbp-0x9]  !!! tail call count
0xffffffffa02017a8:  call   0xffffffffa02017d8       !!! is at rbp-0x10
0xffffffffa02017ad:  leave
0xffffffffa02017ae:  ret

Fix it by rounding up the BPF stack depth to a multiple of 8, when
calculating the tail call count offset on stack.

Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
Signed-off-by: Jakub Sitnicki
Signed-off-by: Daniel Borkmann
Acked-by: Maciej Fijalkowski
Acked-by: Daniel Borkmann
Link: https://lore.kernel.org/bpf/20220616162037.535469-2-jakub@cloudflare.com
Signed-off-by: Sasha Levin
---
 arch/x86/net/bpf_jit_comp.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index d99434dc215c..8dca2bcbb0ea 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -1440,8 +1440,9 @@ st: if (is_imm8(insn->off))
 	case BPF_JMP | BPF_CALL:
 		func = (u8 *) __bpf_call_base + imm32;
 		if (tail_call_reachable) {
+			/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
 			EMIT3_off32(0x48, 0x8B, 0x85,
-				    -(bpf_prog->aux->stack_depth + 8));
+				    -round_up(bpf_prog->aux->stack_depth, 8) - 8);
 			if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
 				return -EINVAL;
 		} else {
-- 
2.35.1
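For readers following the offset arithmetic above, here is a minimal userspace
sketch (not part of the patch; the round_up_8() helper is hypothetical and
merely stands in for the kernel's round_up() macro). For the one-byte stack in
the example program, the old formula yields rbp-0x9, which still lies inside
the 8-byte-aligned BPF stack area, while the corrected formula yields rbp-0x10,
where the pushed %rax actually lives:

/* Sketch of the tail call count offset calculation, assuming stack_depth = 1
 * as in the example program above.
 */
#include <stdio.h>

/* Round n up to the next multiple of 8, like round_up(n, 8) in the kernel
 * for a power-of-two alignment.
 */
static long round_up_8(long n)
{
	return (n + 7) & ~7L;
}

int main(void)
{
	long stack_depth = 1;	/* one 1-byte slot on the BPF stack */

	long buggy_off = -(stack_depth + 8);		/* -9: inside the BPF stack */
	long fixed_off = -round_up_8(stack_depth) - 8;	/* -16: the saved count   */

	printf("buggy load:   rbp%ld\n", buggy_off);
	printf("correct load: rbp%ld\n", fixed_off);
	return 0;
}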