References: <20201010103854.66746-1-songmuchun@bytedance.com>
From: Muchun Song
Date: Mon, 12 Oct 2020 12:22:16 +0800
Subject: Re: [External] Re: [PATCH] mm: proc: add Sock to /proc/meminfo
To: Cong Wang
Cc: Greg KH, rafael@kernel.org, "Michael S. Tsirkin", Jason Wang,
	David Miller, Jakub Kicinski, Alexey Dobriyan, Andrew Morton,
	Eric Dumazet, Alexey Kuznetsov, Hideaki YOSHIFUJI, Steffen Klassert,
	Herbert Xu, Shakeel Butt, Will Deacon, Michal Hocko, Roman Gushchin,
	Neil Brown, rppt@kernel.org, Sami Tolvanen, "Kirill A. Shutemov",
	Feng Tang, Paolo Abeni, Willem de Bruijn, Randy Dunlap,
	Florian Westphal, gustavoars@kernel.org, Pablo Neira Ayuso,
	decui@microsoft.com, Jakub Sitnicki, Peter Zijlstra,
	Christian Brauner, "Eric W. Biederman", Thomas Gleixner,
	dave@stgolabs.net, Michel Lespinasse, Jann Horn, chenqiwu@xiaomi.com,
	christophe.leroy@c-s.fr, Minchan Kim, Martin KaFai Lau,
	Alexei Starovoitov, Daniel Borkmann, Miaohe Lin, Kees Cook, LKML,
	virtualization@lists.linux-foundation.org,
	Linux Kernel Network Developers, linux-fsdevel, linux-mm
X-Mailing-List: linux-fsdevel@vger.kernel.org

On Mon, Oct 12, 2020 at 2:39 AM Cong Wang wrote:
>
> On Sat, Oct 10, 2020 at 3:39 AM Muchun Song wrote:
> >
> > The amount of memory allocated to socket buffers can become
> > significant. However, we do not display the amount of memory consumed
> > by socket buffers. In this case, knowing where the memory is consumed
> > by the kernel
>
> We do it via `ss -m`. Is it not sufficient? And if not, why not add it
> there rather than to /proc/meminfo?

If the system has little free memory, we can see where the memory went via
/proc/meminfo. If a lot of memory is consumed by socket buffers, we cannot
know that as long as Sock is not shown in /proc/meminfo. If an unaware user
does not think of the socket buffers, naturally they will not run `ss -m`,
and the end result is that we still do not know where the memory is
consumed. Adding Sock to /proc/meminfo also mirrors what memcg already does
(the 'sock' item in the cgroup v2 memory.stat). So I think adding it to
/proc/meminfo is sufficient.

> > static inline void __skb_frag_unref(skb_frag_t *frag)
> > {
> > -	put_page(skb_frag_page(frag));
> > +	struct page *page = skb_frag_page(frag);
> > +
> > +	if (put_page_testzero(page)) {
> > +		dec_sock_node_page_state(page);
> > +		__put_page(page);
> > +	}
> > }
>
> You mix socket page frags with skb frags here, at least; I am not sure
> this is exactly what you want, because skb page frags are clearly used
> frequently by network drivers rather than by sockets.
>
> Also, which one matches this dec_sock_node_page_state()? Clearly
> not skb_fill_page_desc() or __skb_frag_ref().

Yeah, we call inc_sock_node_page_state() in skb_page_frag_refill(), so
anyone who gets a page returned by skb_page_frag_refill() must put that
page via __skb_frag_unref()/skb_frag_unref(). We use PG_private to
indicate that the node page state needs to be decremented when the
refcount of the page reaches zero.

Thanks.

>
> Thanks.

-- 
Yours,
Muchun