linux-kernel.vger.kernel.org archive mirror
From: "Russell King (Oracle)" <linux@armlinux.org.uk>
To: Dmitry Vyukov <dvyukov@google.com>
Cc: syzbot <syzbot+96a7f60bd78d03b6b991@syzkaller.appspotmail.com>,
	davem@davemloft.net, edumazet@google.com, kuba@kernel.org,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
	pabeni@redhat.com, syzkaller-bugs@googlegroups.com,
	Linux ARM <linux-arm-kernel@lists.infradead.org>
Subject: Re: [syzbot] [net?] Internal error in ipvlan_get_L3_hdr
Date: Wed, 14 Jun 2023 10:44:08 +0100	[thread overview]
Message-ID: <ZImL6P7Nt2MufaVW@shell.armlinux.org.uk> (raw)
In-Reply-To: <CACT4Y+ZJBoU_QU0DMuH_rCRm8Cu-4jGr8hBpuBozyzhdghjFZg@mail.gmail.com>

On Wed, Jun 14, 2023 at 10:49:16AM +0200, Dmitry Vyukov wrote:
> On Wed, 14 Jun 2023 at 09:35, syzbot
> <syzbot+96a7f60bd78d03b6b991@syzkaller.appspotmail.com> wrote:
> >
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit:    33f2b5785a2b Merge tag 'drm-fixes-2023-06-09' of git://ano..
> > git tree:       upstream
> > console output: https://syzkaller.appspot.com/x/log.txt?x=1749d065280000
> > kernel config:  https://syzkaller.appspot.com/x/.config?x=869b244dcd5d983c
> > dashboard link: https://syzkaller.appspot.com/bug?extid=96a7f60bd78d03b6b991
> > compiler:       arm-linux-gnueabi-gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2
> > userspace arch: arm
> 
> +arm maintainers
> 
> #syz set subsystems: arm
> 
> ip6_output() is recursed 9 times in the stack.
> 
> Eric pointed out that:
> 
> #define MAX_NEST_DEV 8
> #define XMIT_RECURSION_LIMIT    8
> 
> So the net stack can legitimately recurse this deeply, and the arm
> stack is half the size of the x86_64 stack (8K instead of 16K).
> 
> Should the arm stack be increased? Or should
> MAX_NEST_DEV/XMIT_RECURSION_LIMIT be reduced for arm?

Do we guarantee that order-2 allocations will succeed on a 4k
page-sized system? It seems doubling the stack would double the
chances of allocation failure.

Another solution would be to use vmalloc, but then I'd start to
worry about vmalloc space. With a 16k vmalloc allocation (plus
guard page and alignment) that'll be 32k of address space per
thread, and 32k threads would need 1G, which for a 3G:1G
user/kernel split is way too big, so I don't think vmalloc is
an option.

Is there nothing that net can do to reduce its stack usage?

-- 
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!


Thread overview: 3+ messages
2023-06-14  7:35 [syzbot] [net?] Internal error in ipvlan_get_L3_hdr syzbot
2023-06-14  8:49 ` Dmitry Vyukov
2023-06-14  9:44   ` Russell King (Oracle) [this message]
