* bpf: Massive skbuff_head_cache memory leak?
@ 2018-09-22 13:25 Tetsuo Handa
  2018-09-26 21:09 ` Tetsuo Handa
  0 siblings, 1 reply; 8+ messages in thread
From: Tetsuo Handa @ 2018-09-22 13:25 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: Network Development, LKML, David S. Miller

Hello.

syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.

  INFO: rcu detected stall in br_multicast_port_group_expired (2)
  https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94

  INFO: rcu detected stall in tun_chr_close
  https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270

  INFO: rcu detected stall in discover_timer
  https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54

  INFO: rcu detected stall in ret_from_fork (2)
  https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363

  INFO: rcu detected stall in addrconf_rs_timer
  https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6

  INFO: rcu detected stall in kthread (2)
  https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2

  INFO: rcu detected stall in ext4_filemap_fault
  https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af

  INFO: rcu detected stall in run_timer_softirq (2)
  https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe

  INFO: rcu detected stall in bpf_prog_ADDR
  https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744

  INFO: rcu detected stall in __run_timers (2)
  https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a

The cause of the lockups seems to be a flood of printk() messages from memory
allocation failures, and one of the out_of_memory() messages indicates that
skbuff_head_cache usage is large enough to suspect an in-kernel memory leak.

  [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB

Unfortunately, we cannot tell from the logs what syzbot is trying to do,
because the constant flood of printk() messages drowns out the syzkaller output.
Can you try running your testcases with kmemleak enabled?


* Re: bpf: Massive skbuff_head_cache memory leak?
  2018-09-22 13:25 bpf: Massive skbuff_head_cache memory leak? Tetsuo Handa
@ 2018-09-26 21:09 ` Tetsuo Handa
  2018-09-26 21:22   ` Daniel Borkmann
  2018-09-27 10:24   ` Dmitry Vyukov
  0 siblings, 2 replies; 8+ messages in thread
From: Tetsuo Handa @ 2018-09-26 21:09 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann
  Cc: Network Development, David S. Miller, Dmitry Vyukov,
	Andrew Morton, Michal Hocko, John Johansen

Hello, Alexei and Daniel.

Can you show us how to run the testcases you are testing?

On 2018/09/22 22:25, Tetsuo Handa wrote:
> Hello.
> 
> syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.
> 
>   INFO: rcu detected stall in br_multicast_port_group_expired (2)
>   https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94
> 
>   INFO: rcu detected stall in tun_chr_close
>   https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270
> 
>   INFO: rcu detected stall in discover_timer
>   https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54
> 
>   INFO: rcu detected stall in ret_from_fork (2)
>   https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363
> 
>   INFO: rcu detected stall in addrconf_rs_timer
>   https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6
> 
>   INFO: rcu detected stall in kthread (2)
>   https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2
> 
>   INFO: rcu detected stall in ext4_filemap_fault
>   https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af
> 
>   INFO: rcu detected stall in run_timer_softirq (2)
>   https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe
> 
>   INFO: rcu detected stall in bpf_prog_ADDR
>   https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744
> 
>   INFO: rcu detected stall in __run_timers (2)
>   https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a
> 
> The cause of lockup seems to be flood of printk() messages from memory allocation
> failures, and one of out_of_memory() messages indicates that skbuff_head_cache
> usage is huge enough to suspect in-kernel memory leaks.
> 
>   [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB
> 
> Unfortunately, we cannot find from logs what syzbot is trying to do
> because constant printk() messages is flooding away syzkaller messages.
> Can you try running your testcases with kmemleak enabled?
> 

On 2018/09/27 2:35, Dmitry Vyukov wrote:
> I also started suspecting Apparmor. We switched to Apparmor on Aug 30:
> https://groups.google.com/d/msg/syzkaller-bugs/o73lO4KGh0w/j9pcH2tSBAAJ
> Now the instances that use SELinux and Smack explicitly contain that
> in the name, but the rest are Apparmor.
> Aug 30 roughly matches these assorted "task hung" reports. Perhaps
> some Apparmor hook leaks a reference to skbs?

Maybe. They have CONFIG_DEFAULT_SECURITY="apparmor". But I'm wondering why this
problem occurs on the bpf.git / bpf-next.git / net.git / net-next.git trees but
not on linux-next.git. Is syzbot running different testcases depending on which
git tree is targeted?


* Re: bpf: Massive skbuff_head_cache memory leak?
  2018-09-26 21:09 ` Tetsuo Handa
@ 2018-09-26 21:22   ` Daniel Borkmann
  2018-09-26 23:35     ` John Johansen
  2018-09-27 10:24   ` Dmitry Vyukov
  1 sibling, 1 reply; 8+ messages in thread
From: Daniel Borkmann @ 2018-09-26 21:22 UTC (permalink / raw)
  To: Tetsuo Handa, Alexei Starovoitov
  Cc: Network Development, David S. Miller, Dmitry Vyukov,
	Andrew Morton, Michal Hocko, John Johansen

On 09/26/2018 11:09 PM, Tetsuo Handa wrote:
> Hello, Alexei and Daniel.
> 
> Can you show us how to run testcases you are testing?

Sorry for the delay; I'm currently quite backlogged but will definitely take a look
at these reports. Regarding your question: the majority of the test cases are in the
kernel tree under selftests, see tools/testing/selftests/bpf/.

> On 2018/09/22 22:25, Tetsuo Handa wrote:
>> Hello.
>>
>> syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.
>>
>>   INFO: rcu detected stall in br_multicast_port_group_expired (2)
>>   https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94
>>
>>   INFO: rcu detected stall in tun_chr_close
>>   https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270
>>
>>   INFO: rcu detected stall in discover_timer
>>   https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54
>>
>>   INFO: rcu detected stall in ret_from_fork (2)
>>   https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363
>>
>>   INFO: rcu detected stall in addrconf_rs_timer
>>   https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6
>>
>>   INFO: rcu detected stall in kthread (2)
>>   https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2
>>
>>   INFO: rcu detected stall in ext4_filemap_fault
>>   https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af
>>
>>   INFO: rcu detected stall in run_timer_softirq (2)
>>   https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe
>>
>>   INFO: rcu detected stall in bpf_prog_ADDR
>>   https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744
>>
>>   INFO: rcu detected stall in __run_timers (2)
>>   https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a
>>
>> The cause of lockup seems to be flood of printk() messages from memory allocation
>> failures, and one of out_of_memory() messages indicates that skbuff_head_cache
>> usage is huge enough to suspect in-kernel memory leaks.
>>
>>   [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB
>>
>> Unfortunately, we cannot find from logs what syzbot is trying to do
>> because constant printk() messages is flooding away syzkaller messages.
>> Can you try running your testcases with kmemleak enabled?
>>
> 
> On 2018/09/27 2:35, Dmitry Vyukov wrote:
>> I also started suspecting Apparmor. We switched to Apparmor on Aug 30:
>> https://groups.google.com/d/msg/syzkaller-bugs/o73lO4KGh0w/j9pcH2tSBAAJ
>> Now the instances that use SELinux and Smack explicitly contain that
>> in the name, but the rest are Apparmor.
>> Aug 30 roughly matches these assorted "task hung" reports. Perhaps
>> some Apparmor hook leaks a reference to skbs?
> 
> Maybe. They have CONFIG_DEFAULT_SECURITY="apparmor". But I'm wondering why
> this problem is not occurring on linux-next.git when this problem is occurring
> on bpf.git / bpf-next.git / net.git / net-next.git trees. Is syzbot running
> different testcases depending on which git tree is targeted?
> 


* Re: bpf: Massive skbuff_head_cache memory leak?
  2018-09-26 21:22   ` Daniel Borkmann
@ 2018-09-26 23:35     ` John Johansen
  2018-09-27 10:27       ` Dmitry Vyukov
  0 siblings, 1 reply; 8+ messages in thread
From: John Johansen @ 2018-09-26 23:35 UTC (permalink / raw)
  To: Daniel Borkmann, Tetsuo Handa, Alexei Starovoitov
  Cc: Network Development, David S. Miller, Dmitry Vyukov,
	Andrew Morton, Michal Hocko

On 09/26/2018 02:22 PM, Daniel Borkmann wrote:
> On 09/26/2018 11:09 PM, Tetsuo Handa wrote:
>> Hello, Alexei and Daniel.
>>
>> Can you show us how to run testcases you are testing?
> 
> Sorry for the delay; currently quite backlogged but will definitely take a look
> at these reports. Regarding your question: majority of test cases are in the
> kernel tree under selftests, see tools/testing/selftests/bpf/ .
> 

It's unlikely to be AppArmor. I went through the reports and saw nothing that
would indicate AppArmor involvement, but the primary reason is what is being tested
in upstream AppArmor at the moment.

The current upstream code does nothing directly with skbuffs. It's
possible that the audit code paths (kernel audit does grab skbuffs)
could, but only a couple of those cases would be triggered by the
current fuzzing, so this seems to be an unlikely source for such a
large leak.
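
For context, a rough sketch of the kind of audit path being referred to. This is
an illustration from memory, not AppArmor code; the record type and message below
are arbitrary examples. The point is that audit_log_start() allocates an
audit_buffer wrapping an skb, and audit_log_end() queues that skb to kauditd, so
the normal path consumes it; a leak would require a caller to drop the buffer
without ever calling audit_log_end().

#include <linux/audit.h>
#include <linux/gfp.h>

/* Illustrative only: how kernel audit allocates and then consumes an skb. */
static void example_audit_event(void)
{
	struct audit_buffer *ab;

	/* allocates an audit_buffer that wraps an skb */
	ab = audit_log_start(audit_context(), GFP_KERNEL, AUDIT_AVC);
	if (!ab)
		return;		/* auditing disabled or allocation failed */

	audit_log_format(ab, "example_event reason=\"illustration\"");

	/* queues the skb to kauditd and releases the buffer -- nothing is left behind */
	audit_log_end(ab);
}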

>> On 2018/09/22 22:25, Tetsuo Handa wrote:
>>> Hello.
>>>
>>> syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.
>>>
>>>   INFO: rcu detected stall in br_multicast_port_group_expired (2)
>>>   https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94
>>>
>>>   INFO: rcu detected stall in tun_chr_close
>>>   https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270
>>>
>>>   INFO: rcu detected stall in discover_timer
>>>   https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54
>>>
>>>   INFO: rcu detected stall in ret_from_fork (2)
>>>   https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363
>>>
>>>   INFO: rcu detected stall in addrconf_rs_timer
>>>   https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6
>>>
>>>   INFO: rcu detected stall in kthread (2)
>>>   https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2
>>>
>>>   INFO: rcu detected stall in ext4_filemap_fault
>>>   https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af
>>>
>>>   INFO: rcu detected stall in run_timer_softirq (2)
>>>   https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe
>>>
>>>   INFO: rcu detected stall in bpf_prog_ADDR
>>>   https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744
>>>
>>>   INFO: rcu detected stall in __run_timers (2)
>>>   https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a
>>>
>>> The cause of lockup seems to be flood of printk() messages from memory allocation
>>> failures, and one of out_of_memory() messages indicates that skbuff_head_cache
>>> usage is huge enough to suspect in-kernel memory leaks.
>>>
>>>   [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB
>>>
>>> Unfortunately, we cannot find from logs what syzbot is trying to do
>>> because constant printk() messages is flooding away syzkaller messages.
>>> Can you try running your testcases with kmemleak enabled?
>>>
>>
>> On 2018/09/27 2:35, Dmitry Vyukov wrote:
>>> I also started suspecting Apparmor. We switched to Apparmor on Aug 30:
>>> https://groups.google.com/d/msg/syzkaller-bugs/o73lO4KGh0w/j9pcH2tSBAAJ
>>> Now the instances that use SELinux and Smack explicitly contain that
>>> in the name, but the rest are Apparmor.
>>> Aug 30 roughly matches these assorted "task hung" reports. Perhaps
>>> some Apparmor hook leaks a reference to skbs?
>>
>> Maybe. They have CONFIG_DEFAULT_SECURITY="apparmor". But I'm wondering why
>> this problem is not occurring on linux-next.git when this problem is occurring
>> on bpf.git / bpf-next.git / net.git / net-next.git trees. Is syzbot running
>> different testcases depending on which git tree is targeted?
>>
> 

This is another reason why it is doubtful that it's AppArmor.


* Re: bpf: Massive skbuff_head_cache memory leak?
  2018-09-26 21:09 ` Tetsuo Handa
  2018-09-26 21:22   ` Daniel Borkmann
@ 2018-09-27 10:24   ` Dmitry Vyukov
  1 sibling, 0 replies; 8+ messages in thread
From: Dmitry Vyukov @ 2018-09-27 10:24 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: Alexei Starovoitov, Daniel Borkmann, Network Development,
	David S. Miller, Andrew Morton, Michal Hocko, John Johansen

On Wed, Sep 26, 2018 at 11:09 PM, Tetsuo Handa
<penguin-kernel@i-love.sakura.ne.jp> wrote:
> Hello, Alexei and Daniel.
>
> Can you show us how to run testcases you are testing?
>
> On 2018/09/22 22:25, Tetsuo Handa wrote:
>> Hello.
>>
>> syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.
>>
>>   INFO: rcu detected stall in br_multicast_port_group_expired (2)
>>   https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94
>>
>>   INFO: rcu detected stall in tun_chr_close
>>   https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270
>>
>>   INFO: rcu detected stall in discover_timer
>>   https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54
>>
>>   INFO: rcu detected stall in ret_from_fork (2)
>>   https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363
>>
>>   INFO: rcu detected stall in addrconf_rs_timer
>>   https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6
>>
>>   INFO: rcu detected stall in kthread (2)
>>   https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2
>>
>>   INFO: rcu detected stall in ext4_filemap_fault
>>   https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af
>>
>>   INFO: rcu detected stall in run_timer_softirq (2)
>>   https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe
>>
>>   INFO: rcu detected stall in bpf_prog_ADDR
>>   https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744
>>
>>   INFO: rcu detected stall in __run_timers (2)
>>   https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a
>>
>> The cause of lockup seems to be flood of printk() messages from memory allocation
>> failures, and one of out_of_memory() messages indicates that skbuff_head_cache
>> usage is huge enough to suspect in-kernel memory leaks.
>>
>>   [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB
>>
>> Unfortunately, we cannot find from logs what syzbot is trying to do
>> because constant printk() messages is flooding away syzkaller messages.
>> Can you try running your testcases with kmemleak enabled?
>>
>
> On 2018/09/27 2:35, Dmitry Vyukov wrote:
>> I also started suspecting Apparmor. We switched to Apparmor on Aug 30:
>> https://groups.google.com/d/msg/syzkaller-bugs/o73lO4KGh0w/j9pcH2tSBAAJ
>> Now the instances that use SELinux and Smack explicitly contain that
>> in the name, but the rest are Apparmor.
>> Aug 30 roughly matches these assorted "task hung" reports. Perhaps
>> some Apparmor hook leaks a reference to skbs?
>
> Maybe. They have CONFIG_DEFAULT_SECURITY="apparmor". But I'm wondering why
> this problem is not occurring on linux-next.git when this problem is occurring
> on bpf.git / bpf-next.git / net.git / net-next.git trees. Is syzbot running
> different testcases depending on which git tree is targeted?


Yes, this is strange. The net/bpf instances run a _subset_ of the tests. That
is, they are more concentrated on the corresponding subsystems, but
other instances can run all of these tests too, just with lower
probability.

Bpf instances are restricted to this set of syscalls:

"enable_syscalls": [
    "bpf", "mkdir", "mount$bpf", "unlink", "close",
    "perf_event_open", "ioctl$PERF*", "getpid", "gettid",
    "socketpair", "sendmsg", "recvmsg", "setsockopt$sock_attach_bpf",
    "socket$kcm", "ioctl$sock_kcm*",
    "mkdirat$cgroup*", "openat$cgroup*", "write$cgroup*",
    "openat$tun", "write$tun", "ioctl$TUN*", "ioctl$SIOCSIFHWADDR"
]

Net instances are restricted to this set:

"enable_syscalls": [
    "accept", "accept4", "bind", "close", "connect", "epoll_create",
    "epoll_create1", "epoll_ctl", "epoll_pwait", "epoll_wait",
    "getpeername", "getsockname", "getsockopt", "ioctl", "listen",
    "mmap", "poll", "ppoll", "pread64", "preadv", "pselect6",
    "pwrite64", "pwritev", "read", "readv", "recvfrom", "recvmmsg",
    "recvmsg", "select", "sendfile", "sendmmsg", "sendmsg", "sendto",
    "setsockopt", "shutdown", "socket", "socketpair", "splice",
    "vmsplice", "write", "writev", "tee", "bpf", "getpid",
    "getgid", "getuid", "gettid", "unshare", "pipe",
    "syz_emit_ethernet", "syz_extract_tcp_res",
    "syz_genetlink_get_family_id", "syz_init_net_socket",
    "mkdirat$cgroup*", "openat$cgroup*", "write$cgroup*",
    "clock_gettime", "bpf"
]


* Re: bpf: Massive skbuff_head_cache memory leak?
  2018-09-26 23:35     ` John Johansen
@ 2018-09-27 10:27       ` Dmitry Vyukov
  2018-09-27 16:47         ` Dmitry Vyukov
  0 siblings, 1 reply; 8+ messages in thread
From: Dmitry Vyukov @ 2018-09-27 10:27 UTC (permalink / raw)
  To: John Johansen
  Cc: Daniel Borkmann, Tetsuo Handa, Alexei Starovoitov,
	Network Development, David S. Miller, Andrew Morton,
	Michal Hocko

On Thu, Sep 27, 2018 at 1:35 AM, John Johansen
<john.johansen@canonical.com> wrote:
> On 09/26/2018 02:22 PM, Daniel Borkmann wrote:
>> On 09/26/2018 11:09 PM, Tetsuo Handa wrote:
>>> Hello, Alexei and Daniel.
>>>
>>> Can you show us how to run testcases you are testing?
>>
>> Sorry for the delay; currently quite backlogged but will definitely take a look
>> at these reports. Regarding your question: majority of test cases are in the
>> kernel tree under selftests, see tools/testing/selftests/bpf/ .
>>
>
> Its unlikely to be apparmor. I went through the reports and saw nothing that
> would indicate apparmor involvement, but the primary reason is what is being tested
> in upstream apparmor atm.
>
> The current upstream code does nothing directly with skbuffs. Its
> possible that the audit code paths (kernel audit does grab skbuffs)
> could, but there are only a couple cases that would be triggered in
> the current fuzzing so this seems to be an unlikely source for such a
> large leak.


Ack. There is no direct evidence against AppArmor; I am just trying to
get at least some hooks regarding the root cause.

From all the weak indirect evidence, I am leaning towards an skb allocation
in an infinite loop (or a timer with an effectively infinite rate).
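
To make that hypothesis concrete, here is a minimal, purely hypothetical sketch
(not code from any of the trees discussed here; all names are invented) of the
suspected pattern: a timer that re-arms itself every jiffy and allocates an skb
on each tick without ever freeing it or handing it off, which would grow
skbuff_head_cache without bound and produce exactly this kind of timer-heavy
RCU stall symptom.

#include <linux/gfp.h>
#include <linux/jiffies.h>
#include <linux/skbuff.h>
#include <linux/timer.h>

static struct timer_list leaky_timer;

static void leaky_timer_fn(struct timer_list *t)
{
	struct sk_buff *skb = alloc_skb(128, GFP_ATOMIC);

	/*
	 * Deliberate bug, for illustration: the skb is neither passed to a
	 * consumer nor kfree_skb()'d, so every tick leaks one
	 * skbuff_head_cache object.
	 */
	(void)skb;

	/* Re-arms itself for the next jiffy: "a timer with infinite rate". */
	mod_timer(&leaky_timer, jiffies + 1);
}

static void leaky_timer_start(void)
{
	timer_setup(&leaky_timer, leaky_timer_fn, 0);
	mod_timer(&leaky_timer, jiffies + 1);
}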

>>> On 2018/09/22 22:25, Tetsuo Handa wrote:
>>>> Hello.
>>>>
>>>> syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.
>>>>
>>>>   INFO: rcu detected stall in br_multicast_port_group_expired (2)
>>>>   https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94
>>>>
>>>>   INFO: rcu detected stall in tun_chr_close
>>>>   https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270
>>>>
>>>>   INFO: rcu detected stall in discover_timer
>>>>   https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54
>>>>
>>>>   INFO: rcu detected stall in ret_from_fork (2)
>>>>   https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363
>>>>
>>>>   INFO: rcu detected stall in addrconf_rs_timer
>>>>   https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6
>>>>
>>>>   INFO: rcu detected stall in kthread (2)
>>>>   https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2
>>>>
>>>>   INFO: rcu detected stall in ext4_filemap_fault
>>>>   https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af
>>>>
>>>>   INFO: rcu detected stall in run_timer_softirq (2)
>>>>   https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe
>>>>
>>>>   INFO: rcu detected stall in bpf_prog_ADDR
>>>>   https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744
>>>>
>>>>   INFO: rcu detected stall in __run_timers (2)
>>>>   https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a
>>>>
>>>> The cause of lockup seems to be flood of printk() messages from memory allocation
>>>> failures, and one of out_of_memory() messages indicates that skbuff_head_cache
>>>> usage is huge enough to suspect in-kernel memory leaks.
>>>>
>>>>   [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB
>>>>
>>>> Unfortunately, we cannot find from logs what syzbot is trying to do
>>>> because constant printk() messages is flooding away syzkaller messages.
>>>> Can you try running your testcases with kmemleak enabled?
>>>>
>>>
>>> On 2018/09/27 2:35, Dmitry Vyukov wrote:
>>>> I also started suspecting Apparmor. We switched to Apparmor on Aug 30:
>>>> https://groups.google.com/d/msg/syzkaller-bugs/o73lO4KGh0w/j9pcH2tSBAAJ
>>>> Now the instances that use SELinux and Smack explicitly contain that
>>>> in the name, but the rest are Apparmor.
>>>> Aug 30 roughly matches these assorted "task hung" reports. Perhaps
>>>> some Apparmor hook leaks a reference to skbs?
>>>
>>> Maybe. They have CONFIG_DEFAULT_SECURITY="apparmor". But I'm wondering why
>>> this problem is not occurring on linux-next.git when this problem is occurring
>>> on bpf.git / bpf-next.git / net.git / net-next.git trees. Is syzbot running
>>> different testcases depending on which git tree is targeted?
>>>
>>
>
> this is another reason that it is doubtful that its apparmor.
>


* Re: bpf: Massive skbuff_head_cache memory leak?
  2018-09-27 10:27       ` Dmitry Vyukov
@ 2018-09-27 16:47         ` Dmitry Vyukov
  2018-09-27 16:53           ` Dmitry Vyukov
  0 siblings, 1 reply; 8+ messages in thread
From: Dmitry Vyukov @ 2018-09-27 16:47 UTC (permalink / raw)
  To: Eric Dumazet, Tetsuo Handa
  Cc: Daniel Borkmann, Alexei Starovoitov, Network Development,
	David S. Miller, Andrew Morton, Michal Hocko, John Johansen,
	syzkaller

On Thu, Sep 27, 2018 at 12:27 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> On Thu, Sep 27, 2018 at 1:35 AM, John Johansen
> <john.johansen@canonical.com> wrote:
>> On 09/26/2018 02:22 PM, Daniel Borkmann wrote:
>>> On 09/26/2018 11:09 PM, Tetsuo Handa wrote:
>>>> Hello, Alexei and Daniel.
>>>>
>>>> Can you show us how to run testcases you are testing?
>>>
>>> Sorry for the delay; currently quite backlogged but will definitely take a look
>>> at these reports. Regarding your question: majority of test cases are in the
>>> kernel tree under selftests, see tools/testing/selftests/bpf/ .
>>>
>>
>> Its unlikely to be apparmor. I went through the reports and saw nothing that
>> would indicate apparmor involvement, but the primary reason is what is being tested
>> in upstream apparmor atm.
>>
>> The current upstream code does nothing directly with skbuffs. Its
>> possible that the audit code paths (kernel audit does grab skbuffs)
>> could, but there are only a couple cases that would be triggered in
>> the current fuzzing so this seems to be an unlikely source for such a
>> large leak.
>
>
> Ack. There is no direct evidence against apparmor, I am just trying to
> get at least some hooks re the root cause.
>
> From all the weak indirect evidence, I leaning towards skb allocation
> in an infinite loop (or a timer with infinite rate).
>
>>>> On 2018/09/22 22:25, Tetsuo Handa wrote:
>>>>> Hello.
>>>>>
>>>>> syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.
>>>>>
>>>>>   INFO: rcu detected stall in br_multicast_port_group_expired (2)
>>>>>   https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94
>>>>>
>>>>>   INFO: rcu detected stall in tun_chr_close
>>>>>   https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270
>>>>>
>>>>>   INFO: rcu detected stall in discover_timer
>>>>>   https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54
>>>>>
>>>>>   INFO: rcu detected stall in ret_from_fork (2)
>>>>>   https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363
>>>>>
>>>>>   INFO: rcu detected stall in addrconf_rs_timer
>>>>>   https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6
>>>>>
>>>>>   INFO: rcu detected stall in kthread (2)
>>>>>   https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2
>>>>>
>>>>>   INFO: rcu detected stall in ext4_filemap_fault
>>>>>   https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af
>>>>>
>>>>>   INFO: rcu detected stall in run_timer_softirq (2)
>>>>>   https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe
>>>>>
>>>>>   INFO: rcu detected stall in bpf_prog_ADDR
>>>>>   https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744
>>>>>
>>>>>   INFO: rcu detected stall in __run_timers (2)
>>>>>   https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a
>>>>>
>>>>> The cause of lockup seems to be flood of printk() messages from memory allocation
>>>>> failures, and one of out_of_memory() messages indicates that skbuff_head_cache
>>>>> usage is huge enough to suspect in-kernel memory leaks.
>>>>>
>>>>>   [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB
>>>>>
>>>>> Unfortunately, we cannot find from logs what syzbot is trying to do
>>>>> because constant printk() messages is flooding away syzkaller messages.
>>>>> Can you try running your testcases with kmemleak enabled?
>>>>>
>>>>
>>>> On 2018/09/27 2:35, Dmitry Vyukov wrote:
>>>>> I also started suspecting Apparmor. We switched to Apparmor on Aug 30:
>>>>> https://groups.google.com/d/msg/syzkaller-bugs/o73lO4KGh0w/j9pcH2tSBAAJ
>>>>> Now the instances that use SELinux and Smack explicitly contain that
>>>>> in the name, but the rest are Apparmor.
>>>>> Aug 30 roughly matches these assorted "task hung" reports. Perhaps
>>>>> some Apparmor hook leaks a reference to skbs?
>>>>
>>>> Maybe. They have CONFIG_DEFAULT_SECURITY="apparmor". But I'm wondering why
>>>> this problem is not occurring on linux-next.git when this problem is occurring
>>>> on bpf.git / bpf-next.git / net.git / net-next.git trees. Is syzbot running
>>>> different testcases depending on which git tree is targeted?
>>>>
>> this is another reason that it is doubtful that its apparmor.

On Thu, Sep 27, 2018 at 2:52 PM, edumazet wrote:
> Have you tried kmemleak perhaps, it might give us a clue, but it seems
> obvious the leak would be in TX path.

So, I've tried. Now what? :)

I've uploaded all reports to:
https://drive.google.com/file/d/107LUW0zmYbXmxfQCWoLpeenxXJsXIkxj/view?usp=sharing
This is on the net tree, at commit d4ce58082f206bf6e7d697380c7bc5480a8b0264.

memory leak in __lookup_hash    33    Sep 27 2018 16:35:50
memory leak in new_inode_pseudo    43    Sep 27 2018 16:41:14
memory leak in path_openat    1    Sep 27 2018 16:12:53
memory leak in rhashtable_init    1    Sep 27 2018 16:41:41
memory leak in shmem_symlink    4    Sep 27 2018 16:34:34
memory leak in __anon_vma_prepare    1    Sep 27 2018 17:30:02
memory leak in __do_execve_file    4    Sep 27 2018 18:14:10
memory leak in __do_sys_perf_event_open    2    Sep 27 2018 17:40:05
memory leak in __es_insert_extent    3    Sep 27 2018 17:24:52
memory leak in __getblk_gfp    3    Sep 27 2018 18:19:30
memory leak in __handle_mm_fault    1    Sep 27 2018 18:11:31
memory leak in __hw_addr_create_ex    2    Sep 27 2018 18:10:56
memory leak in __ip_mc_inc_group    13    Sep 27 2018 18:24:31
memory leak in __khugepaged_enter    1    Sep 27 2018 15:40:25
memory leak in __list_lru_init    27    Sep 27 2018 17:30:48
memory leak in __neigh_create    12    Sep 27 2018 17:40:28
memory leak in __netlink_create    1    Sep 27 2018 15:40:23
memory leak in __register_sysctl_table    1    Sep 27 2018 17:36:57
memory leak in __send_signal    1    Sep 27 2018 18:30:48
memory leak in __sys_socket    7    Sep 27 2018 15:43:20
memory leak in anon_inode_getfile    4    Sep 27 2018 18:17:29
memory leak in bpf_prog_store_orig_filter    1    Sep 27 2018 17:59:14
memory leak in br_multicast_new_group    2    Sep 27 2018 18:16:42
memory leak in br_multicast_new_port_group    3    Sep 27 2018 18:17:39
memory leak in build_sched_domains    2    Sep 27 2018 17:28:55
memory leak in clone_mnt    2    Sep 27 2018 18:18:48
memory leak in compute_effective_progs    1    Sep 27 2018 18:35:15
memory leak in create_empty_buffers    1    Sep 27 2018 17:35:42
memory leak in create_filter_start    2    Sep 27 2018 18:16:00
memory leak in create_pipe_files    3    Sep 27 2018 18:21:47
memory leak in do_ip6t_set_ctl    1    Sep 27 2018 15:40:21
memory leak in do_ipt_set_ctl    1    Sep 27 2018 17:37:35
memory leak in do_signalfd4    1    Sep 27 2018 18:00:45
memory leak in do_syslog    2    Sep 27 2018 18:06:22
memory leak in ep_insert    11    Sep 27 2018 17:39:14
memory leak in ep_ptable_queue_proc    3    Sep 27 2018 17:37:17
memory leak in ext4_mb_new_group_pa    1    Sep 27 2018 17:36:04
memory leak in ext4_mb_new_inode_pa    3    Sep 27 2018 17:39:19
memory leak in fdb_create    13    Sep 27 2018 18:24:03
memory leak in fib6_add_1    11    Sep 27 2018 18:19:53
memory leak in fib_table_insert    1    Sep 27 2018 17:36:49
memory leak in find_get_context    1    Sep 27 2018 15:42:00
memory leak in fsnotify_add_mark_locked    11    Sep 27 2018 17:31:41
memory leak in idr_get_free    2    Sep 27 2018 18:17:34
memory leak in iget_locked    1    Sep 27 2018 17:52:59
memory leak in inet_frag_find    3    Sep 27 2018 18:23:53
memory leak in inotify_update_watch    25    Sep 27 2018 17:30:59
memory leak in ioc_create_icq    1    Sep 27 2018 15:42:40
memory leak in ip6_pol_route    4    Sep 27 2018 18:24:59
memory leak in ip6_route_info_create    1    Sep 27 2018 17:38:09
memory leak in ip6t_register_table    1    Sep 27 2018 17:35:39
memory leak in ip_route_output_key_hash_rcu    4    Sep 27 2018 18:21:21
memory leak in ipt_register_table    3    Sep 27 2018 17:39:58
memory leak in ipv6_add_addr    1    Sep 27 2018 17:38:49
memory leak in load_elf_binary    1    Sep 27 2018 18:09:44
memory leak in map_create    9    Sep 27 2018 18:25:10
memory leak in memcg_update_all_list_lrus    1    Sep 27 2018 15:39:36
memory leak in ndisc_send_rs    1    Sep 27 2018 17:50:03
memory leak in neigh_table_init    6    Sep 27 2018 17:23:41
memory leak in nf_hook_entries_grow    5    Sep 27 2018 15:43:41
memory leak in packet_sendmsg    1    Sep 27 2018 18:08:43
memory leak in pcpu_create_chunk    1    Sep 27 2018 17:48:41
memory leak in prepare_creds    18    Sep 27 2018 17:29:10
memory leak in prepare_kernel_cred    15    Sep 27 2018 18:23:42
memory leak in process_preds    2    Sep 27 2018 18:11:01
memory leak in rht_deferred_worker    9    Sep 27 2018 18:24:29
memory leak in sched_init_domains    2    Sep 27 2018 18:12:42
memory leak in sctp_addr_wq_mgmt    1    Sep 27 2018 17:46:20
memory leak in sget    5    Sep 27 2018 18:03:40
memory leak in shmem_symlink    30    Sep 27 2018 17:32:09
memory leak in skb_clone    3    Sep 27 2018 18:13:22
memory leak in submit_bh_wbc    1    Sep 27 2018 17:49:06
memory leak in tracepoint_probe_register_prio    1    Sep 27 2018 17:39:13
memory leak in xt_replace_table    4    Sep 27 2018 15:43:19
memory leak in __delayacct_tsk_init    2    Sep 27 2018 17:02:53
memory leak in disk_expand_part_tbl    1    Sep 27 2018 16:59:05
memory leak in do_ip6t_set_ctl    14    Sep 27 2018 15:46:37
memory leak in neigh_table_init    4    Sep 27 2018 17:10:29
memory leak in do_check    1    Sep 27 2018 18:38:13


* Re: bpf: Massive skbuff_head_cache memory leak?
  2018-09-27 16:47         ` Dmitry Vyukov
@ 2018-09-27 16:53           ` Dmitry Vyukov
  0 siblings, 0 replies; 8+ messages in thread
From: Dmitry Vyukov @ 2018-09-27 16:53 UTC (permalink / raw)
  To: Eric Dumazet, Tetsuo Handa
  Cc: Daniel Borkmann, Alexei Starovoitov, Network Development,
	David S. Miller, Andrew Morton, Michal Hocko, John Johansen,
	syzkaller

On Thu, Sep 27, 2018 at 6:47 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> On Thu, Sep 27, 2018 at 12:27 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>> On Thu, Sep 27, 2018 at 1:35 AM, John Johansen
>> <john.johansen@canonical.com> wrote:
>>> On 09/26/2018 02:22 PM, Daniel Borkmann wrote:
>>>> On 09/26/2018 11:09 PM, Tetsuo Handa wrote:
>>>>> Hello, Alexei and Daniel.
>>>>>
>>>>> Can you show us how to run testcases you are testing?
>>>>
>>>> Sorry for the delay; currently quite backlogged but will definitely take a look
>>>> at these reports. Regarding your question: majority of test cases are in the
>>>> kernel tree under selftests, see tools/testing/selftests/bpf/ .
>>>>
>>>
>>> Its unlikely to be apparmor. I went through the reports and saw nothing that
>>> would indicate apparmor involvement, but the primary reason is what is being tested
>>> in upstream apparmor atm.
>>>
>>> The current upstream code does nothing directly with skbuffs. Its
>>> possible that the audit code paths (kernel audit does grab skbuffs)
>>> could, but there are only a couple cases that would be triggered in
>>> the current fuzzing so this seems to be an unlikely source for such a
>>> large leak.
>>
>>
>> Ack. There is no direct evidence against apparmor, I am just trying to
>> get at least some hooks re the root cause.
>>
>> From all the weak indirect evidence, I leaning towards skb allocation
>> in an infinite loop (or a timer with infinite rate).
>>
>>>>> On 2018/09/22 22:25, Tetsuo Handa wrote:
>>>>>> Hello.
>>>>>>
>>>>>> syzbot is reporting many lockup problems on bpf.git / bpf-next.git / net.git / net-next.git trees.
>>>>>>
>>>>>>   INFO: rcu detected stall in br_multicast_port_group_expired (2)
>>>>>>   https://syzkaller.appspot.com/bug?id=15c7ad8cf35a07059e8a697a22527e11d294bc94
>>>>>>
>>>>>>   INFO: rcu detected stall in tun_chr_close
>>>>>>   https://syzkaller.appspot.com/bug?id=6c50618bde03e5a2eefdd0269cf9739c5ebb8270
>>>>>>
>>>>>>   INFO: rcu detected stall in discover_timer
>>>>>>   https://syzkaller.appspot.com/bug?id=55da031ddb910e58ab9c6853a5784efd94f03b54
>>>>>>
>>>>>>   INFO: rcu detected stall in ret_from_fork (2)
>>>>>>   https://syzkaller.appspot.com/bug?id=c83129a6683b44b39f5b8864a1325893c9218363
>>>>>>
>>>>>>   INFO: rcu detected stall in addrconf_rs_timer
>>>>>>   https://syzkaller.appspot.com/bug?id=21c029af65f81488edbc07a10ed20792444711b6
>>>>>>
>>>>>>   INFO: rcu detected stall in kthread (2)
>>>>>>   https://syzkaller.appspot.com/bug?id=6accd1ed11c31110fed1982f6ad38cc9676477d2
>>>>>>
>>>>>>   INFO: rcu detected stall in ext4_filemap_fault
>>>>>>   https://syzkaller.appspot.com/bug?id=817e38d20e9ee53390ac361bf0fd2007eaf188af
>>>>>>
>>>>>>   INFO: rcu detected stall in run_timer_softirq (2)
>>>>>>   https://syzkaller.appspot.com/bug?id=f5a230a3ff7822f8d39fddf8485931bd06ae47fe
>>>>>>
>>>>>>   INFO: rcu detected stall in bpf_prog_ADDR
>>>>>>   https://syzkaller.appspot.com/bug?id=fb4911fd0e861171cc55124e209f810a0dd68744
>>>>>>
>>>>>>   INFO: rcu detected stall in __run_timers (2)
>>>>>>   https://syzkaller.appspot.com/bug?id=65416569ddc8d2feb8f19066aa761f5a47f7451a
>>>>>>
>>>>>> The cause of lockup seems to be flood of printk() messages from memory allocation
>>>>>> failures, and one of out_of_memory() messages indicates that skbuff_head_cache
>>>>>> usage is huge enough to suspect in-kernel memory leaks.
>>>>>>
>>>>>>   [ 1554.547011] skbuff_head_cache    1847887KB    1847887KB
>>>>>>
>>>>>> Unfortunately, we cannot find from logs what syzbot is trying to do
>>>>>> because constant printk() messages is flooding away syzkaller messages.
>>>>>> Can you try running your testcases with kmemleak enabled?
>>>>>>
>>>>>
>>>>> On 2018/09/27 2:35, Dmitry Vyukov wrote:
>>>>>> I also started suspecting Apparmor. We switched to Apparmor on Aug 30:
>>>>>> https://groups.google.com/d/msg/syzkaller-bugs/o73lO4KGh0w/j9pcH2tSBAAJ
>>>>>> Now the instances that use SELinux and Smack explicitly contain that
>>>>>> in the name, but the rest are Apparmor.
>>>>>> Aug 30 roughly matches these assorted "task hung" reports. Perhaps
>>>>>> some Apparmor hook leaks a reference to skbs?
>>>>>
>>>>> Maybe. They have CONFIG_DEFAULT_SECURITY="apparmor". But I'm wondering why
>>>>> this problem is not occurring on linux-next.git when this problem is occurring
>>>>> on bpf.git / bpf-next.git / net.git / net-next.git trees. Is syzbot running
>>>>> different testcases depending on which git tree is targeted?
>>>>>
>>> this is another reason that it is doubtful that its apparmor.
>
> On Thu, Sep 27, 2018 at 2:52 PM, edumazet
>> Have you tried kmemleak perhaps, it might give us a clue, but it seems
>> obvious the leak would be in TX path.
>
> So, I've tried. Now what? :)
>
> I've uploaded all reports to:
> https://drive.google.com/file/d/107LUW0zmYbXmxfQCWoLpeenxXJsXIkxj/view?usp=sharing
> This is on net tree d4ce58082f206bf6e7d697380c7bc5480a8b0264
>
> memory leak in __lookup_hash    33    Sep 27 2018 16:35:50
> memory leak in new_inode_pseudo    43    Sep 27 2018 16:41:14
> memory leak in path_openat    1    Sep 27 2018 16:12:53
> memory leak in rhashtable_init    1    Sep 27 2018 16:41:41
> memory leak in shmem_symlink    4    Sep 27 2018 16:34:34
> memory leak in __anon_vma_prepare    1    Sep 27 2018 17:30:02
> memory leak in __do_execve_file    4    Sep 27 2018 18:14:10
> memory leak in __do_sys_perf_event_open    2    Sep 27 2018 17:40:05
> memory leak in __es_insert_extent    3    Sep 27 2018 17:24:52
> memory leak in __getblk_gfp    3    Sep 27 2018 18:19:30
> memory leak in __handle_mm_fault    1    Sep 27 2018 18:11:31
> memory leak in __hw_addr_create_ex    2    Sep 27 2018 18:10:56
> memory leak in __ip_mc_inc_group    13    Sep 27 2018 18:24:31
> memory leak in __khugepaged_enter    1    Sep 27 2018 15:40:25
> memory leak in __list_lru_init    27    Sep 27 2018 17:30:48
> memory leak in __neigh_create    12    Sep 27 2018 17:40:28
> memory leak in __netlink_create    1    Sep 27 2018 15:40:23
> memory leak in __register_sysctl_table    1    Sep 27 2018 17:36:57
> memory leak in __send_signal    1    Sep 27 2018 18:30:48
> memory leak in __sys_socket    7    Sep 27 2018 15:43:20
> memory leak in anon_inode_getfile    4    Sep 27 2018 18:17:29
> memory leak in bpf_prog_store_orig_filter    1    Sep 27 2018 17:59:14
> memory leak in br_multicast_new_group    2    Sep 27 2018 18:16:42
> memory leak in br_multicast_new_port_group    3    Sep 27 2018 18:17:39
> memory leak in build_sched_domains    2    Sep 27 2018 17:28:55
> memory leak in clone_mnt    2    Sep 27 2018 18:18:48
> memory leak in compute_effective_progs    1    Sep 27 2018 18:35:15
> memory leak in create_empty_buffers    1    Sep 27 2018 17:35:42
> memory leak in create_filter_start    2    Sep 27 2018 18:16:00
> memory leak in create_pipe_files    3    Sep 27 2018 18:21:47
> memory leak in do_ip6t_set_ctl    1    Sep 27 2018 15:40:21
> memory leak in do_ipt_set_ctl    1    Sep 27 2018 17:37:35
> memory leak in do_signalfd4    1    Sep 27 2018 18:00:45
> memory leak in do_syslog    2    Sep 27 2018 18:06:22
> memory leak in ep_insert    11    Sep 27 2018 17:39:14
> memory leak in ep_ptable_queue_proc    3    Sep 27 2018 17:37:17
> memory leak in ext4_mb_new_group_pa    1    Sep 27 2018 17:36:04
> memory leak in ext4_mb_new_inode_pa    3    Sep 27 2018 17:39:19
> memory leak in fdb_create    13    Sep 27 2018 18:24:03
> memory leak in fib6_add_1    11    Sep 27 2018 18:19:53
> memory leak in fib_table_insert    1    Sep 27 2018 17:36:49
> memory leak in find_get_context    1    Sep 27 2018 15:42:00
> memory leak in fsnotify_add_mark_locked    11    Sep 27 2018 17:31:41
> memory leak in idr_get_free    2    Sep 27 2018 18:17:34
> memory leak in iget_locked    1    Sep 27 2018 17:52:59
> memory leak in inet_frag_find    3    Sep 27 2018 18:23:53
> memory leak in inotify_update_watch    25    Sep 27 2018 17:30:59
> memory leak in ioc_create_icq    1    Sep 27 2018 15:42:40
> memory leak in ip6_pol_route    4    Sep 27 2018 18:24:59
> memory leak in ip6_route_info_create    1    Sep 27 2018 17:38:09
> memory leak in ip6t_register_table    1    Sep 27 2018 17:35:39
> memory leak in ip_route_output_key_hash_rcu    4    Sep 27 2018 18:21:21
> memory leak in ipt_register_table    3    Sep 27 2018 17:39:58
> memory leak in ipv6_add_addr    1    Sep 27 2018 17:38:49
> memory leak in load_elf_binary    1    Sep 27 2018 18:09:44
> memory leak in map_create    9    Sep 27 2018 18:25:10
> memory leak in memcg_update_all_list_lrus    1    Sep 27 2018 15:39:36
> memory leak in ndisc_send_rs    1    Sep 27 2018 17:50:03
> memory leak in neigh_table_init    6    Sep 27 2018 17:23:41
> memory leak in nf_hook_entries_grow    5    Sep 27 2018 15:43:41
> memory leak in packet_sendmsg    1    Sep 27 2018 18:08:43
> memory leak in pcpu_create_chunk    1    Sep 27 2018 17:48:41
> memory leak in prepare_creds    18    Sep 27 2018 17:29:10
> memory leak in prepare_kernel_cred    15    Sep 27 2018 18:23:42
> memory leak in process_preds    2    Sep 27 2018 18:11:01
> memory leak in rht_deferred_worker    9    Sep 27 2018 18:24:29
> memory leak in sched_init_domains    2    Sep 27 2018 18:12:42
> memory leak in sctp_addr_wq_mgmt    1    Sep 27 2018 17:46:20
> memory leak in sget    5    Sep 27 2018 18:03:40
> memory leak in shmem_symlink    30    Sep 27 2018 17:32:09
> memory leak in skb_clone    3    Sep 27 2018 18:13:22
> memory leak in submit_bh_wbc    1    Sep 27 2018 17:49:06
> memory leak in tracepoint_probe_register_prio    1    Sep 27 2018 17:39:13
> memory leak in xt_replace_table    4    Sep 27 2018 15:43:19
> memory leak in __delayacct_tsk_init    2    Sep 27 2018 17:02:53
> memory leak in disk_expand_part_tbl    1    Sep 27 2018 16:59:05
> memory leak in do_ip6t_set_ctl    14    Sep 27 2018 15:46:37
> memory leak in neigh_table_init    4    Sep 27 2018 17:10:29
> memory leak in do_check    1    Sep 27 2018 18:38:13


I see at least 3 bridge-related leaks:
memory leak in skb_clone
memory leak in br_multicast_new_group
memory leak in br_multicast_new_port_group
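
The skb_clone entry in particular usually points at a forward/TX-style path that
clones an skb and then bails out on an error without freeing the clone. A
hypothetical sketch of that shape (the function and the checks below are invented
for illustration, not taken from the bridge code):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void forward_copy(struct sk_buff *skb, struct net_device *dev)
{
	struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);

	if (!clone)
		return;

	if (!netif_running(dev))
		return;		/* BUG: leaks 'clone'; this path needs kfree_skb(clone) */

	clone->dev = dev;
	dev_queue_xmit(clone);	/* on this path the stack consumes the clone */
}

kmemleak attributes such a leak to skb_clone() because that is the allocation
site of the unreferenced object, even though the actual bug would be the missing
kfree_skb() on the early-return path.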

