From: Randy Dunlap <rdunlap@infradead.org>
To: Muchun Song <songmuchun@bytedance.com>,
gregkh@linuxfoundation.org, rafael@kernel.org, mst@redhat.com,
jasowang@redhat.com, davem@davemloft.net, kuba@kernel.org,
adobriyan@gmail.com, akpm@linux-foundation.org,
edumazet@google.com, kuznet@ms2.inr.ac.ru,
yoshfuji@linux-ipv6.org, steffen.klassert@secunet.com,
herbert@gondor.apana.org.au, shakeelb@google.com,
will@kernel.org, mhocko@suse.com, guro@fb.com, neilb@suse.de,
rppt@kernel.org, samitolvanen@google.com,
kirill.shutemov@linux.intel.com, feng.tang@intel.com,
pabeni@redhat.com, willemb@google.com, fw@strlen.de,
gustavoars@kernel.org, pablo@netfilter.org, decui@microsoft.com,
jakub@cloudflare.com, peterz@infradead.org,
christian.brauner@ubuntu.com, ebiederm@xmission.com,
tglx@linutronix.de, dave@stgolabs.net, walken@google.com,
jannh@google.com, chenqiwu@xiaomi.com, christophe.leroy@c-s.fr,
minchan@kernel.org, kafai@fb.com, ast@kernel.org,
daniel@iogearbox.net, linmiaohe@huawei.com,
keescook@chromium.org
Cc: linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [PATCH] mm: proc: add Sock to /proc/meminfo
Date: Sat, 10 Oct 2020 09:36:15 -0700 [thread overview]
Message-ID: <f6dfa37f-5991-3e96-93b8-737f60128151@infradead.org> (raw)
In-Reply-To: <20201010103854.66746-1-songmuchun@bytedance.com>
Hi,
On 10/10/20 3:38 AM, Muchun Song wrote:
> The amount of memory allocated to socket buffers can become significant.
> However, we do not currently display how much memory socket buffers
> consume, which makes it difficult to tell where kernel memory is going.
> On our server with 500GB of RAM, we sometimes see 25GB disappear from
> /proc/meminfo. With page_owner enabled, our analysis found the following
> allocation path consuming the memory.
>
> 849698 times:
> Page allocated via order 3, mask 0x4052c0(GFP_NOWAIT|__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP)
> __alloc_pages_nodemask+0x11d/0x290
> skb_page_frag_refill+0x68/0xf0
> sk_page_frag_refill+0x19/0x70
> tcp_sendmsg_locked+0x2f4/0xd10
> tcp_sendmsg+0x29/0xa0
> sock_sendmsg+0x30/0x40
> sock_write_iter+0x8f/0x100
> __vfs_write+0x10b/0x190
> vfs_write+0xb0/0x190
> ksys_write+0x5a/0xd0
> do_syscall_64+0x5d/0x110
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
> drivers/base/node.c | 2 ++
> drivers/net/virtio_net.c | 3 +--
> fs/proc/meminfo.c | 1 +
> include/linux/mmzone.h | 1 +
> include/linux/skbuff.h | 43 ++++++++++++++++++++++++++++++++++++++--
> kernel/exit.c | 3 +--
> mm/page_alloc.c | 7 +++++--
> mm/vmstat.c | 1 +
> net/core/sock.c | 8 ++++----
> net/ipv4/tcp.c | 3 +--
> net/xfrm/xfrm_state.c | 3 +--
> 11 files changed, 59 insertions(+), 16 deletions(-)
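A quick sanity check on the figures in the quoted report: each order-3 allocation is 2^3 = 8 contiguous pages of 4 KiB, so 849698 such allocations come to roughly 26 GiB, which lines up with the ~25GB reported missing. A minimal sketch of the arithmetic (all figures are taken from the report above; 4 KiB pages assumed):

```python
# Sanity-check the quoted page_owner report.
PAGE_SIZE = 4096          # bytes per page (typical x86-64)
ORDER = 3                 # order-3 allocation = 2**3 contiguous pages
ALLOCATIONS = 849_698     # allocation count reported by page_owner

total_bytes = ALLOCATIONS * (2 ** ORDER) * PAGE_SIZE
print(f"{total_bytes / 2**30:.1f} GiB")  # prints "25.9 GiB"
```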
Thanks for finding that.
Please update Documentation/filesystems/proc.rst "meminfo" section also.
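Once the field is documented, consumers will parse it the same way as any other meminfo field. A small sketch of such a parser; the `Sock:` line, its value, and the helper name `meminfo_field` are hypothetical (the field only exists with this patch applied):

```python
def meminfo_field(text: str, field: str) -> int:
    """Return the value in kB of a /proc/meminfo field, or raise KeyError."""
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        if name.strip() == field:
            return int(rest.split()[0])  # each line is "<Field>: <value> kB"
    raise KeyError(field)

# Hypothetical /proc/meminfo excerpt including the proposed Sock field:
sample = """MemTotal:       524288000 kB
MemFree:        12345678 kB
Sock:           26214400 kB"""
print(meminfo_field(sample, "Sock"))  # prints 26214400
```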
--
~Randy
Thread overview: 25+ messages
2020-10-10 10:38 [PATCH] mm: proc: add Sock to /proc/meminfo Muchun Song
2020-10-10 16:36 ` Randy Dunlap [this message]
2020-10-11 4:42 ` [External] " Muchun Song
2020-10-11 13:52 ` Mike Rapoport
2020-10-11 16:00 ` [External] " Muchun Song
2020-10-11 18:39 ` Cong Wang
2020-10-12 4:22 ` [External] " Muchun Song
2020-10-12 7:42 ` Eric Dumazet
2020-10-12 8:39 ` Muchun Song
2020-10-12 9:24 ` Eric Dumazet
2020-10-12 9:53 ` Muchun Song
2020-10-12 22:12 ` Cong Wang
2020-10-13 3:52 ` Muchun Song
2020-10-13 6:55 ` Eric Dumazet
2020-10-13 8:09 ` Mike Rapoport
2020-10-13 14:43 ` Randy Dunlap
2020-10-13 15:12 ` Mike Rapoport
2020-10-13 15:21 ` Randy Dunlap
2020-10-14 5:34 ` Mike Rapoport
2020-10-13 15:28 ` Muchun Song
2020-10-16 15:38 ` Vlastimil Babka
2020-10-16 20:53 ` Minchan Kim
2020-10-19 17:23 ` Shakeel Butt
2020-10-12 21:46 ` Cong Wang
2020-10-13 3:29 ` Muchun Song