From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org
X-Spam-Level:
X-Spam-Status: No, score=-15.5 required=3.0 tests=BAYES_00,
	HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH,
	MAILING_LIST_MULTI,NICE_REPLY_A,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,
	USER_AGENT_SANE_1 autolearn=unavailable autolearn_force=no version=3.4.0
Received: from mail.kernel.org (mail.kernel.org [198.145.29.99])
	by smtp.lore.kernel.org (Postfix) with ESMTP id 51AFAC07E9C
	for ; Sat, 10 Jul 2021 01:52:25 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [23.128.96.18])
	by mail.kernel.org (Postfix) with ESMTP id 38BF261380
	for ; Sat, 10 Jul 2021 01:52:25 +0000 (UTC)
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S231674AbhGJBzF (ORCPT ); Fri, 9 Jul 2021 21:55:05 -0400
Received: from szxga01-in.huawei.com ([45.249.212.187]:6796 "EHLO szxga01-in.huawei.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229703AbhGJBzE (ORCPT );
	Fri, 9 Jul 2021 21:55:04 -0400
Received: from dggeme703-chm.china.huawei.com (unknown [172.30.72.56])
	by szxga01-in.huawei.com (SkyGuard) with ESMTP id 4GMCYw3mvvzXgYd;
	Sat, 10 Jul 2021 09:46:44 +0800 (CST)
Received: from [10.174.177.209] (10.174.177.209) by dggeme703-chm.china.huawei.com (10.1.199.99)
	with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256)
	id 15.1.2176.2; Sat, 10 Jul 2021 09:52:15 +0800
Subject: Re: [PATCH v3 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions
To: Suren Baghdasaryan
CC: , , , , , , , , , , , , , , , , , , , , , Tejun Heo
References: <20210710003626.3549282-1-surenb@google.com>
From: Miaohe Lin
Message-ID:
Date: Sat, 10 Jul 2021 09:52:15 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.6.0
MIME-Version: 1.0
In-Reply-To: <20210710003626.3549282-1-surenb@google.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.174.177.209]
X-ClientProxiedBy: dggems706-chm.china.huawei.com (10.3.19.183) To dggeme703-chm.china.huawei.com (10.1.199.99)
X-CFilter-Loop: Reflected
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/7/10 8:36, Suren Baghdasaryan wrote:
> Add mem_cgroup_disabled check in vmpressure, mem_cgroup_uncharge_swap and
> cgroup_throttle_swaprate functions. This minimizes the memcg overhead in
> the pagefault and exit_mmap paths when memcgs are disabled using
> cgroup_disable=memory command-line option.
> This change results in ~2.1% overhead reduction when running PFT test
> comparing {CONFIG_MEMCG=n, CONFIG_MEMCG_SWAP=n} against {CONFIG_MEMCG=y,
> CONFIG_MEMCG_SWAP=y, cgroup_disable=memory} configuration on an 8-core
> ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan
> Reviewed-by: Shakeel Butt
> Acked-by: Johannes Weiner
> ---
>  mm/memcontrol.c | 3 +++
>  mm/swapfile.c   | 3 +++
>  mm/vmpressure.c | 7 ++++++-
>  3 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index ae1f5d0cb581..a228cd51c4bd 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -7305,6 +7305,9 @@ void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
>  	struct mem_cgroup *memcg;
>  	unsigned short id;
>  
> +	if (mem_cgroup_disabled())
> +		return;
> +
>  	id = swap_cgroup_record(entry, 0, nr_pages);
>  	rcu_read_lock();
>  	memcg = mem_cgroup_from_id(id);
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 1e07d1c776f2..707fa0481bb4 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -3778,6 +3778,9 @@ void cgroup_throttle_swaprate(struct page *page, gfp_t gfp_mask)
>  	struct swap_info_struct *si, *next;
>  	int nid = page_to_nid(page);
>  
> +	if (mem_cgroup_disabled())
> +		return;
> +

Many thanks for your patch. But I'm somewhat confused by this change. IMO, cgroup_throttle_swaprate()
is only related to blk_cgroup and seems irrelevant to mem_cgroup. Could you please explain the reasoning
here? Thanks! (My rough reading of mem_cgroup_disabled() is sketched at the end of this mail.)

>  	if (!(gfp_mask & __GFP_IO))
>  		return;
>  
> diff --git a/mm/vmpressure.c b/mm/vmpressure.c
> index d69019fc3789..9b172561fded 100644
> --- a/mm/vmpressure.c
> +++ b/mm/vmpressure.c
> @@ -240,7 +240,12 @@ static void vmpressure_work_fn(struct work_struct *work)
>  void vmpressure(gfp_t gfp, struct mem_cgroup *memcg, bool tree,
>  		unsigned long scanned, unsigned long reclaimed)
>  {
> -	struct vmpressure *vmpr = memcg_to_vmpressure(memcg);
> +	struct vmpressure *vmpr;
> +
> +	if (mem_cgroup_disabled())
> +		return;
> +
> +	vmpr = memcg_to_vmpressure(memcg);
>  
>  	/*
>  	 * Here we only want to account pressure that userland is able to
> 
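For completeness, here is the rough reading of mem_cgroup_disabled() behind my question, paraphrased
from memory of include/linux/memcontrol.h around this kernel version (the CONFIG_MEMCG=y variant; the
exact code in your tree may differ), so please correct me if I have it wrong:

	/* Sketch of my understanding, not part of this patch. */
	static inline bool mem_cgroup_disabled(void)
	{
		/* True when the memory controller is not enabled, e.g. cgroup_disable=memory. */
		return !cgroup_subsys_enabled(memory_cgrp_subsys);
	}

As I read it, this only reports whether the memory controller is active; it says nothing about the
blkcg side that cgroup_throttle_swaprate() acts on, which is why the new early return there surprised me.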