bpf.vger.kernel.org archive mirror
From: Yonghong Song <yhs@fb.com>
To: Song Liu <songliubraving@fb.com>
Cc: bpf <bpf@vger.kernel.org>, Networking <netdev@vger.kernel.org>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	KP Singh <kpsingh@chromium.org>, Kernel Team <Kernel-team@fb.com>,
	"syzbot+4f98876664c7337a4ae6@syzkaller.appspotmail.com" 
	<syzbot+4f98876664c7337a4ae6@syzkaller.appspotmail.com>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>
Subject: Re: [PATCH bpf-next] bpf: reject too big ctx_size_in for raw_tp test run
Date: Wed, 13 Jan 2021 15:28:33 -0800	[thread overview]
Message-ID: <1d116261-5ef2-eef6-369f-e8e12eaebc6e@fb.com> (raw)
In-Reply-To: <2DAED411-C65F-4BFD-A627-1EED4823168B@fb.com>



On 1/13/21 1:48 PM, Song Liu wrote:
> 
> 
>> On Jan 12, 2021, at 9:17 PM, Yonghong Song <yhs@fb.com> wrote:
>>
>>
>>
>> On 1/12/21 3:42 PM, Song Liu wrote:
>>> syzbot reported a WARNING for allocating too big memory:
>>> WARNING: CPU: 1 PID: 8484 at mm/page_alloc.c:4976 __alloc_pages_nodemask+0x5f8/0x730 mm/page_alloc.c:5011
>>> Modules linked in:
>>> CPU: 1 PID: 8484 Comm: syz-executor862 Not tainted 5.11.0-rc2-syzkaller #0
>>> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
>>> RIP: 0010:__alloc_pages_nodemask+0x5f8/0x730 mm/page_alloc.c:4976
>>> Code: 00 00 0c 00 0f 85 a7 00 00 00 8b 3c 24 4c 89 f2 44 89 e6 c6 44 24 70 00 48 89 6c 24 58 e8 d0 d7 ff ff 49 89 c5 e9 ea fc ff ff <0f> 0b e9 b5 fd ff ff 89 74 24 14 4c 89 4c 24 08 4c 89 74 24 18 e8
>>> RSP: 0018:ffffc900012efb10 EFLAGS: 00010246
>>> RAX: 0000000000000000 RBX: 1ffff9200025df66 RCX: 0000000000000000
>>> RDX: 0000000000000000 RSI: dffffc0000000000 RDI: 0000000000140dc0
>>> RBP: 0000000000140dc0 R08: 0000000000000000 R09: 0000000000000000
>>> R10: ffffffff81b1f7e1 R11: 0000000000000000 R12: 0000000000000014
>>> R13: 0000000000000014 R14: 0000000000000000 R15: 0000000000000000
>>> FS:  000000000190c880(0000) GS:ffff8880b9e00000(0000) knlGS:0000000000000000
>>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> CR2: 00007f08b7f316c0 CR3: 0000000012073000 CR4: 00000000001506f0
>>> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
>>> Call Trace:
>>> alloc_pages_current+0x18c/0x2a0 mm/mempolicy.c:2267
>>> alloc_pages include/linux/gfp.h:547 [inline]
>>> kmalloc_order+0x2e/0xb0 mm/slab_common.c:837
>>> kmalloc_order_trace+0x14/0x120 mm/slab_common.c:853
>>> kmalloc include/linux/slab.h:557 [inline]
>>> kzalloc include/linux/slab.h:682 [inline]
>>> bpf_prog_test_run_raw_tp+0x4b5/0x670 net/bpf/test_run.c:282
>>> bpf_prog_test_run kernel/bpf/syscall.c:3120 [inline]
>>> __do_sys_bpf+0x1ea9/0x4f10 kernel/bpf/syscall.c:4398
>>> do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
>>> entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>> RIP: 0033:0x440499
>>> Code: 18 89 d0 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 7b 13 fc ff c3 66 2e 0f 1f 84 00 00 00 00
>>> RSP: 002b:00007ffe1f3bfb18 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
>>> RAX: ffffffffffffffda RBX: 00000000004002c8 RCX: 0000000000440499
>>> RDX: 0000000000000048 RSI: 0000000020000600 RDI: 000000000000000a
>>> RBP: 00000000006ca018 R08: 0000000000000000 R09: 00000000004002c8
>>> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000401ca0
>>> R13: 0000000000401d30 R14: 0000000000000000 R15: 0000000000000000
>>> This is because we did not filter out a too-big ctx_size_in. Fix it by
>>> rejecting any ctx_size_in bigger than MAX_BPF_FUNC_ARGS (12) u64 values.
>>> Reported-by: syzbot+4f98876664c7337a4ae6@syzkaller.appspotmail.com
>>> Fixes: 1b4d60ec162f ("bpf: Enable BPF_PROG_TEST_RUN for raw_tracepoint")
>>> Cc: stable@vger.kernel.org # v5.10+
>>> Signed-off-by: Song Liu <songliubraving@fb.com>
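
A minimal sketch of the bounds check described above, roughly as it could
look in bpf_prog_test_run_raw_tp() in net/bpf/test_run.c. The variable
names, error codes, and exact placement here are assumptions drawn from the
commit message, not a quote of the applied patch:

        /* Sketch only (assumed names/placement): cap the user-supplied
         * ctx_size_in before allocating. A raw_tp context carries at most
         * MAX_BPF_FUNC_ARGS (12) u64 arguments, so anything larger can be
         * rejected up front instead of being handed to kzalloc().
         */
        u32 ctx_size_in = kattr->test.ctx_size_in;
        void *ctx = NULL;

        if (ctx_size_in > MAX_BPF_FUNC_ARGS * sizeof(u64))
                return -EINVAL;

        if (ctx_size_in) {
                ctx = kzalloc(ctx_size_in, GFP_USER);
                if (!ctx)
                        return -ENOMEM;
                if (copy_from_user(ctx, u64_to_user_ptr(kattr->test.ctx_in),
                                   ctx_size_in)) {
                        kfree(ctx);
                        return -EFAULT;
                }
        }

With a check like this, a BPF_PROG_TEST_RUN request carrying an oversized
ctx_size_in fails early with -EINVAL instead of reaching the large
kzalloc() shown in the syzbot trace above.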
>>
>> Maybe this should target the bpf tree?
> 
> IIRC, we direct fixes for the current release candidate (5.11) to the bpf
> tree. This one is for both 5.10 and 5.11, so it should go to bpf-next, no?

I don't know which tree it should go to first; the maintainers know better.
But it should end up in 5.10, 5.11 (currently at rc4), and bpf-next.

> 
>>
>> Acked-by: Yonghong Song <yhs@fb.com>
> 
> Thanks!
> 


Thread overview: 5+ messages
2021-01-12 23:42 [PATCH bpf-next] bpf: reject too big ctx_size_in for raw_tp test run Song Liu
2021-01-13  5:17 ` Yonghong Song
2021-01-13 21:48   ` Song Liu
2021-01-13 23:28     ` Yonghong Song [this message]
2021-01-14  3:41       ` Alexei Starovoitov

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1d116261-5ef2-eef6-369f-e8e12eaebc6e@fb.com \
    --to=yhs@fb.com \
    --cc=Kernel-team@fb.com \
    --cc=andrii@kernel.org \
    --cc=ast@kernel.org \
    --cc=bpf@vger.kernel.org \
    --cc=daniel@iogearbox.net \
    --cc=john.fastabend@gmail.com \
    --cc=kpsingh@chromium.org \
    --cc=netdev@vger.kernel.org \
    --cc=songliubraving@fb.com \
    --cc=stable@vger.kernel.org \
    --cc=syzbot+4f98876664c7337a4ae6@syzkaller.appspotmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, you can reply through the mailto: link on the
  archive page.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.