Date: Fri, 22 Apr 2022 10:29:36 +0800
From: Muchun Song
To: Waiman Long
Cc: Roman Gushchin, Johannes Weiner, Michal Hocko, Shakeel Butt, Andrew Morton,
    linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org,
    "Matthew Wilcox (Oracle)", Yang Shi, Vlastimil Babka
Subject: Re: [PATCH] mm/memcg: Free percpu stats memory of dying memcg's
References: <20220421145845.1044652-1-longman@redhat.com> <112a4d7f-bc53-6e59-7bb8-6fecb65d045d@redhat.com> <58c41f14-356e-88dd-54aa-dc6873bf80ff@redhat.com>
In-Reply-To: <58c41f14-356e-88dd-54aa-dc6873bf80ff@redhat.com>
On Thu, Apr 21, 2022 at 02:46:00PM -0400, Waiman Long wrote:
> On 4/21/22 13:59, Roman Gushchin wrote:
> > On Thu, Apr 21, 2022 at 01:28:20PM -0400, Waiman Long wrote:
> > > On 4/21/22 12:33, Roman Gushchin wrote:
> > > > On Thu, Apr 21, 2022 at 10:58:45AM -0400, Waiman Long wrote:
> > > > > For systems with a large number of CPUs, the majority of the memory
> > > > > consumed by the mem_cgroup structure is actually the percpu stats
> > > > > memory. When a large number of memory cgroups are continuously created
> > > > > and destroyed (like in a container host), it is possible that more
> > > > > and more mem_cgroup structures remain in the dying state, holding up
> > > > > an increasing amount of percpu memory.
> > > > >
> > > > > We can't free the memory of a dying mem_cgroup structure due to
> > > > > active references in some other places. However, the percpu stats
> > > > > memory allocated to that mem_cgroup is a different story.
> > > > >
> > > > > This patch adds a new percpu_stats_disabled variable to keep track of
> > > > > the state of the percpu stats memory. If the variable is set, percpu
> > > > > stats updates will be disabled for that particular memcg. All stats
> > > > > updates will be forwarded to its parent instead. Reading its percpu
> > > > > stats will return 0.
> > > > >
> > > > > The flushing and freeing of the percpu stats memory is a multi-step
> > > > > process. The percpu_stats_disabled variable is set when the memcg is
> > > > > being set to the offline state. After a grace period with the help of
> > > > > RCU, the percpu stats data are flushed and then freed.
> > > > >
> > > > > This will greatly reduce the amount of memory held up by dying memory
> > > > > cgroups.
> > > > >
> > > > > By running a simple container management tool 2000 times per test
> > > > > run, below are the resulting increases in percpu memory (as reported
> > > > > in /proc/meminfo) and nr_dying_descendants in root's cgroup.stat.
> > > > Hi Waiman!
> > > >
> > > > I've been proposing the same idea some time ago:
> > > > https://lore.kernel.org/all/20190312223404.28665-7-guro@fb.com/T/ .
> > > >
> > > > However I dropped it with the thinking that, with many other fixes
> > > > preventing the accumulation of dying cgroups, it's not worth the added
> > > > complexity and potential cpu overhead.
> > > >
> > > > I think it ultimately comes down to the number of dying cgroups. If it's
> > > > low, memory savings are not worth the cpu overhead. If it's high, they
> > > > are. I hope long-term to drive it down significantly (with lru-pages
> > > > reparenting being the first major milestone), but it might take a while.
> > > >
> > > > I don't have a strong opinion either way, just want to dump my thoughts
> > > > on this.
> > > I have quite a number of customer cases complaining about increasing percpu
> > > memory usage. The number of dying memcg's can go to tens of thousands. From
> > > my own investigation, I believe that those dying memcg's are not freed
> > > because they are pinned down by references in the page structure. I am
> > > aware that we support the use of objcg in the page structure, which allows
> > > easy reparenting, but most pages don't use it; that conversion is not easy
> > > and may take quite a while.
> > The big question is whether there is memory pressure on those systems.
> > If yes, and the number of dying cgroups is growing, it's worth investigating.
> > It might be due to the sharing of pagecache pages, and that will ultimately
> > be fixed by implementing pagecache reparenting. But it also might be due
> > to other bugs, which are fixable, so it would be great to understand.
> >
> Pagecache reparenting will probably fix the problem that I have seen. Is
> someone working on this?
>
We have also been running into the dying cgroup issue on our servers for a
long time. I have worked on this for a while and proposed a resolution [1]
based on the obj_cgroup APIs to charge the LRU pages.

[1] https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/

Thanks.
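
As a rough illustration of the mechanism described in the patch above (flag set
at offline time, updates redirected to the parent, flush and free after an RCU
grace period), a flag-gated update path might look like the sketch below. Only
the percpu_stats_disabled name comes from the patch description; the helper
names and the exact struct layout are illustrative stand-ins, not the actual
patch.

static void memcg_stat_add(struct mem_cgroup *memcg, int idx, int nr)
{
	/*
	 * Sketch only.  If this memcg has had its percpu stats torn down
	 * (it is dying), redirect the update to the nearest ancestor that
	 * still owns a percpu area.  The root memcg never disables its stats.
	 */
	while (READ_ONCE(memcg->percpu_stats_disabled) &&
	       parent_mem_cgroup(memcg))
		memcg = parent_mem_cgroup(memcg);

	/* The same kind of percpu update __mod_memcg_state() performs today. */
	this_cpu_add(memcg->vmstats_percpu->state[idx], nr);
}

static void memcg_offline_percpu_stats(struct mem_cgroup *memcg)
{
	WRITE_ONCE(memcg->percpu_stats_disabled, true);

	/*
	 * Let concurrent updaters that sampled the flag as false drain out,
	 * then fold what is left into the parent and free the percpu area.
	 * The description above only says this happens after an RCU grace
	 * period; an actual implementation would likely use call_rcu()
	 * rather than blocking in the offline path.
	 */
	synchronize_rcu();
	memcg_flush_percpu_stats(memcg);	/* hypothetical flush helper */
	free_percpu(memcg->vmstats_percpu);
	memcg->vmstats_percpu = NULL;
}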
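
The two numbers tracked in the commit message can be sampled from userspace
before and after a test run; a minimal sketch (assuming cgroup v2 is mounted
at /sys/fs/cgroup) is:

#include <stdio.h>
#include <string.h>

/* Return the numeric value following "key" in the given file, or -1. */
static long find_field(const char *path, const char *key)
{
	char line[256];
	long val = -1;
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, key, strlen(key))) {
			sscanf(line + strlen(key), "%ld", &val);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(void)
{
	printf("Percpu (kB):          %ld\n",
	       find_field("/proc/meminfo", "Percpu:"));
	printf("nr_dying_descendants: %ld\n",
	       find_field("/sys/fs/cgroup/cgroup.stat", "nr_dying_descendants"));
	return 0;
}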