From: Muchun Song <songmuchun@bytedance.com>
Date: Mon, 12 Oct 2020 17:53:01 +0800
Subject: Re: [External] Re: [PATCH] mm: proc: add Sock to /proc/meminfo
References: <20201010103854.66746-1-songmuchun@bytedance.com>
 <9262ea44-fc3a-0b30-54dd-526e16df85d1@gmail.com>
In-Reply-To: <9262ea44-fc3a-0b30-54dd-526e16df85d1@gmail.com>
To: Eric Dumazet
Cc: Eric Dumazet, Cong Wang, Greg KH, rafael@kernel.org,
 "Michael S. Tsirkin", Jason Wang, David Miller, Jakub Kicinski,
 Alexey Dobriyan, Andrew Morton, Alexey Kuznetsov, Hideaki YOSHIFUJI,
 Steffen Klassert, Herbert Xu, Shakeel Butt, Will Deacon, Michal Hocko,
 Roman Gushchin, Neil Brown, rppt@kernel.org, Sami Tolvanen,
 "Kirill A. Shutemov", Feng Tang, Paolo Abeni, Willem de Bruijn,
 Randy Dunlap, Florian Westphal, gustavoars@kernel.org, Pablo Neira Ayuso,
 Dexuan Cui, Jakub Sitnicki, Peter Zijlstra, Christian Brauner,
 "Eric W. Biederman", Thomas Gleixner, dave@stgolabs.net,
 Michel Lespinasse, Jann Horn, chenqiwu@xiaomi.com, christophe.leroy@c-s.fr,
 Minchan Kim, Martin KaFai Lau, Alexei Starovoitov, Daniel Borkmann,
 Miaohe Lin, Kees Cook, LKML, virtualization@lists.linux-foundation.org,
 Linux Kernel Network Developers, linux-fsdevel, linux-mm
X-Mailing-List: linux-fsdevel@vger.kernel.org

On Mon, Oct 12, 2020 at 5:24 PM Eric Dumazet wrote:
>
> On 10/12/20 10:39 AM, Muchun Song wrote:
> > On Mon, Oct 12, 2020 at 3:42 PM Eric Dumazet wrote:
> >>
> >> On Mon, Oct 12, 2020 at 6:22 AM Muchun Song wrote:
> >>>
> >>> On Mon, Oct 12, 2020 at 2:39 AM Cong Wang wrote:
> >>>>
> >>>> On Sat, Oct 10, 2020 at 3:39 AM Muchun Song wrote:
> >>>>>
> >>>>> The amount of memory allocated for socket buffers can become
> >>>>> significant. However, we do not display the amount of memory
> >>>>> consumed by socket buffers. In this case, knowing where the memory
> >>>>> is consumed by the kernel
> >>>>
> >>>> We do it via `ss -m`. Is it not sufficient? And if not, why not add
> >>>> it there rather than to /proc/meminfo?
> >>>
> >>> If the system has little free memory, we can see where the memory is
> >>> via /proc/meminfo. If a lot of memory is consumed by socket buffers,
> >>> we cannot see that when Sock is not shown in /proc/meminfo. If an
> >>> unaware user does not think of the socket buffers, they will not run
> >>> `ss -m`, and the end result is that we still don't know where the
> >>> memory went. Adding Sock to /proc/meminfo mirrors what memcg already
> >>> does (the 'sock' item in the cgroup v2 memory.stat), so I think
> >>> adding it to /proc/meminfo is sufficient.
> >>>
> >>>>
> >>>>> static inline void __skb_frag_unref(skb_frag_t *frag)
> >>>>> {
> >>>>> -       put_page(skb_frag_page(frag));
> >>>>> +       struct page *page = skb_frag_page(frag);
> >>>>> +
> >>>>> +       if (put_page_testzero(page)) {
> >>>>> +               dec_sock_node_page_state(page);
> >>>>> +               __put_page(page);
> >>>>> +       }
> >>>>> }
> >>>>
> >>>> You mix socket page frags with skb frags here, at least; I am not
> >>>> sure this is exactly what you want, because skb page frags are
> >>>> clearly used frequently by network drivers rather than by sockets.
> >>>>
> >>>> Also, which one matches this dec_sock_node_page_state()? Clearly
> >>>> not skb_fill_page_desc() or __skb_frag_ref().
> >>>
> >>> Yeah, we call inc_sock_node_page_state() in skb_page_frag_refill().
> >>> So if someone gets a page returned by skb_page_frag_refill(), it
> >>> must put the page via __skb_frag_unref()/skb_frag_unref(). We use
> >>> PG_private to indicate that we need to decrement the node page state
> >>> when the refcount of the page reaches zero.
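To make that pairing concrete, here is a minimal sketch of what the two
helpers are meant to do. NR_SOCK_PAGES is a placeholder name for the node
stat item, and the exact placement of the PG_private handling is my
assumption; the real patch may arrange it differently:

#include <linux/mm.h>
#include <linux/page-flags.h>
#include <linux/vmstat.h>

/* Allocation side: called once skb_page_frag_refill() has successfully
 * grabbed a fresh page for the socket's page_frag. */
static inline void inc_sock_node_page_state(struct page *page)
{
	inc_node_page_state(page, NR_SOCK_PAGES);
	/* Tag the page so the final put_page() path knows it is accounted. */
	SetPagePrivate(page);
}

/* Free side: called from __skb_frag_unref() only after put_page_testzero()
 * has dropped the last reference. */
static inline void dec_sock_node_page_state(struct page *page)
{
	if (PagePrivate(page)) {
		ClearPagePrivate(page);
		dec_node_page_state(page, NR_SOCK_PAGES);
	}
}

With that, the __skb_frag_unref() hunk quoted above decrements the counter
exactly once, when the last reference to the page is dropped.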
> >>
> >> Pages can be transferred from pipe to socket, and from socket to pipe
> >> (splice() and zerocopy friends...).
> >>
> >> If you want to track TCP memory allocations, you can always look at
> >> /proc/net/sockstat, without adding yet another expensive memory
> >> accounting.
> >
> > The 'mem' item in /proc/net/sockstat does not represent real memory
> > usage. It is just the total amount of charged memory.
> >
> > For example, if a task sends a 10-byte message, only one page is
> > charged to the memcg, but the system may have allocated 8 pages.
> > Therefore, it does not truly reflect the memory allocated by the
> > allocation path above. We can see the difference in the following
> > output:
> >
> > cat /proc/net/sockstat
> > sockets: used 698
> > TCP: inuse 70 orphan 0 tw 617 alloc 134 mem 13
> > UDP: inuse 90 mem 4
> > UDPLITE: inuse 0
> > RAW: inuse 1
> > FRAG: inuse 0 memory 0
> >
> > cat /proc/meminfo | grep Sock
> > Sock:          13664 kB
> >
> > /proc/net/sockstat only shows 17 charged pages (17 * 4 kB = 68 kB,
> > TCP plus UDP), but with this patch applied we can see that 13664 kB
> > have truly been allocated (possibly even more, because of the per-cpu
> > stat cache). Of course, the load in this example is not high; in some
> > high-load cases, I believe the difference will be even greater.
> >
>
> This is great, but you have not addressed my feedback.
>
> TCP memory allocations are bounded by /proc/sys/net/ipv4/tcp_mem.
>
> The fact that the memory is forward allocated or not is a detail.
>
> If you think we must pre-allocate memory, instead of forward allocations,
> your patch does not address this. Adding one line per consumer in
> /proc/meminfo looks wrong to me.

I think any consumer that can use a large amount of memory should be shown
in /proc/meminfo. That helps us identify the users of large amounts of
memory.

> If you do not want 9.37 % of physical memory being possibly used by TCP,
> just change /proc/sys/net/ipv4/tcp_mem accordingly ?

We are not complaining that TCP uses too much memory; the problem is
knowing that it is TCP that is using a lot of memory. When I first faced
this problem, I did not know what was using the 25 GB of memory, and it
was not shown in /proc/meminfo. If we could see the amount of socket
buffer memory in /proc/meminfo, we might not have needed to spend so much
time troubleshooting. Not everyone knows that a lot of memory may be used
here, but I believe most people know to check /proc/meminfo to identify
memory users.
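For reference, the /proc/meminfo side of the patch is tiny; a sketch of
the hunk in fs/proc/meminfo.c (NR_SOCK_PAGES is again a placeholder for
whatever stat item name the actual patch defines):

/* Inside meminfo_proc_show(): one extra line prints the counter,
 * converted from pages to kB by show_val_kb(), e.g. "Sock:  13664 kB". */
show_val_kb(m, "Sock:           ", global_node_page_state(NR_SOCK_PAGES));

Thanks.

-- 
Yours,
Muchun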