From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 12 Jul 2021 08:55:41 -0700
Subject: Re: [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to
 improve disabled memcg config
To: Michal Hocko
Cc: Tejun Heo, Johannes Weiner, vdavydov.dev@gmail.com, Andrew Morton,
 Shakeel Butt, Roman Gushchin, songmuchun@bytedance.com, Yang Shi,
 alexs@kernel.org, richard.weiyang@gmail.com, Vlastimil Babka, Jens Axboe,
 Joonsoo Kim, David Hildenbrand, Matthew Wilcox, apopple@nvidia.com,
 Minchan Kim, Miaohe Lin, LKML, cgroups mailinglist, linux-mm, kernel-team
References: <20210710003626.3549282-1-surenb@google.com>
 <20210710003626.3549282-2-surenb@google.com>

On Mon, Jul 12, 2021 at 12:15 AM Michal Hocko wrote:
>
> On Fri 09-07-21 17:36:25, Suren Baghdasaryan wrote:
> > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> > functions to perform the mem_cgroup_disabled static key check inline
> > before calling the main body of the function. This minimizes the memcg
> > overhead in the pagefault and exit_mmap paths when memcgs are disabled
> > using the cgroup_disable=memory command-line option.
> > This change results in ~0.4% overhead reduction when running the PFT
> > test comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y,
> > cgroup_disable=memory} configuration on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > Reviewed-by: Shakeel Butt
>
> With doc updated as suggested by Muchun Song
> Acked-by: Michal Hocko

Thanks! Will fix the comment and post v4 later today.

> Thanks!
>
> > ---
> >  include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
> >  mm/memcontrol.c            | 29 ++++++++++-------------------
> >  2 files changed, 35 insertions(+), 22 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index bfe5c486f4ad..39fa88051a42 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> >  		page_counter_read(&memcg->memory);
> >  }
> >
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +			gfp_t gfp_mask);
> > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +				    gfp_t gfp_mask)
> > +{
> > +	if (mem_cgroup_disabled())
> > +		return 0;
> > +	return __mem_cgroup_charge(page, mm, gfp_mask);
> > +}
> > +
> >  int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >  				  gfp_t gfp, swp_entry_t entry);
> >  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
> >
> > -void mem_cgroup_uncharge(struct page *page);
> > -void mem_cgroup_uncharge_list(struct list_head *page_list);
> > +void __mem_cgroup_uncharge(struct page *page);
> > +static inline void mem_cgroup_uncharge(struct page *page)
> > +{
> > +	if (mem_cgroup_disabled())
> > +		return;
> > +	__mem_cgroup_uncharge(page);
> > +}
> > +
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list);
> > +static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +{
> > +	if (mem_cgroup_disabled())
> > +		return;
> > +	__mem_cgroup_uncharge_list(page_list);
> > +}
> >
> >  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index a228cd51c4bd..cdaf7003b43d 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> >  			atomic_long_read(&parent->memory.children_low_usage)));
> >  }
> >
> > -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > -			       gfp_t gfp)
> > +static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
> >  {
> >  	unsigned int nr_pages = thp_nr_pages(page);
> >  	int ret;
> > @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >  }
> >
> >  /**
> > - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
> >   * @page: page to charge
> >   * @mm: mm context of the victim
> >   * @gfp_mask: reclaim mode
> > @@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >   *
> >   * Returns 0 on success. Otherwise, an error code is returned.
> >   */
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +			gfp_t gfp_mask)
> >  {
> >  	struct mem_cgroup *memcg;
> >  	int ret;
> >
> > -	if (mem_cgroup_disabled())
> > -		return 0;
> > -
> >  	memcg = get_mem_cgroup_from_mm(mm);
> > -	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > +	ret = charge_memcg(page, memcg, gfp_mask);
> >  	css_put(&memcg->css);
> >
> >  	return ret;
> > @@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >  	memcg = get_mem_cgroup_from_mm(mm);
> >  	rcu_read_unlock();
> >
> > -	ret = __mem_cgroup_charge(page, memcg, gfp);
> > +	ret = charge_memcg(page, memcg, gfp);
> >
> >  	css_put(&memcg->css);
> >  	return ret;
> > @@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge - uncharge a page
> > + * __mem_cgroup_uncharge - uncharge a page
> >   * @page: page to uncharge
> >   *
> >   * Uncharge a page previously charged with mem_cgroup_charge().
> >   */
> > -void mem_cgroup_uncharge(struct page *page)
> > +void __mem_cgroup_uncharge(struct page *page)
> >  {
> >  	struct uncharge_gather ug;
> >
> > -	if (mem_cgroup_disabled())
> > -		return;
> > -
> >  	/* Don't touch page->lru of any random page, pre-check: */
> >  	if (!page_memcg(page))
> >  		return;
> > @@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge_list - uncharge a list of page
> > + * __mem_cgroup_uncharge_list - uncharge a list of page
> >   * @page_list: list of pages to uncharge
> >   *
> >   * Uncharge a list of pages previously charged with
> >   * mem_cgroup_charge().
> >   */
> > -void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list)
> >  {
> >  	struct uncharge_gather ug;
> >  	struct page *page;
> >
> > -	if (mem_cgroup_disabled())
> > -		return;
> > -
> >  	uncharge_gather_clear(&ug);
> >  	list_for_each_entry(page, page_list, lru)
> >  		uncharge_page(page, &ug);
> > --
> > 2.32.0.93.g670b81a890-goog
>
> --
> Michal Hocko
> SUSE Labs
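
[The patch applies a standard kernel fast-path idiom: the public entry point
becomes a static inline wrapper in the header that performs the cheap "is the
subsystem off?" check, and the out-of-line body is renamed with a
double-underscore prefix. Below is a minimal, self-contained userspace sketch
of that shape; every identifier in it (subsys_charge and friends) is a
hypothetical stand-in, not a kernel API, and a plain bool stands in for the
kernel's static key (see the note on static keys further down).]

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the static key; the kernel mechanism is stronger. */
static bool subsys_enabled;

static inline bool subsys_disabled(void)
{
	return !subsys_enabled;
}

/* Out-of-line slow path, analogous to __mem_cgroup_charge(). */
static int __subsys_charge(void *page)
{
	printf("charging page %p\n", page);
	return 0;
}

/*
 * Inline fast path, analogous to the new mem_cgroup_charge() wrapper:
 * when the subsystem is disabled, a caller pays one predictable branch
 * instead of a call into a slow path that returns immediately.
 */
static inline int subsys_charge(void *page)
{
	if (subsys_disabled())
		return 0;
	return __subsys_charge(page);
}

int main(void)
{
	int dummy;

	subsys_charge(&dummy);	/* disabled: short-circuits, prints nothing */
	subsys_enabled = true;
	subsys_charge(&dummy);	/* enabled: reaches the slow path */
	return 0;
}

[Keeping the public name on the wrapper means no call site has to change:
existing callers of mem_cgroup_charge() pick up the inline fast path at their
next rebuild, while the slow path keeps a single out-of-line implementation.]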
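
[Why that guard is nearly free once memcg is disabled: mem_cgroup_disabled()
is backed by the kernel's static-key machinery (cgroup_subsys_enabled(), i.e.
static_branch_likely()), which patches the branch in the instruction stream at
runtime rather than loading and testing a flag. A rough userspace analogue
follows, using only a GCC/Clang branch hint, which is deliberately weaker than
the real mechanism; the memcg_enabled flag is made up for illustration.]

#include <stdbool.h>

/* In this analogue the flag is set once at startup and never changes,
 * mirroring how cgroup_disable=memory is decided at boot. */
static bool memcg_enabled;

/*
 * Userspace stand-in for a static_branch_likely()-backed check:
 * __builtin_expect() only hints the compiler about branch layout,
 * whereas the kernel rewrites the branch itself when the key flips.
 */
static inline bool memcg_disabled_like(void)
{
	return !__builtin_expect(memcg_enabled, true);
}

int main(void)
{
	return memcg_disabled_like() ? 0 : 1;
}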