From: Suren Baghdasaryan
Date: Mon, 12 Jul 2021 18:12:22 -0700
Subject: Re: [External] [PATCH v3 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
References: <20210710003626.3549282-1-surenb@google.com> <20210710003626.3549282-2-surenb@google.com>
To: Muchun Song
Cc: Tejun Heo, Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton,
 Shakeel Butt, Roman Gushchin, Yang Shi, Alex Shi, Wei Yang, Vlastimil Babka,
 Jens Axboe, Joonsoo Kim, David Hildenbrand, Matthew Wilcox, apopple@nvidia.com,
 Minchan Kim,
 Miaohe Lin, LKML, Cgroups, Linux Memory Management List, kernel-team
X-Mailing-List: linux-kernel@vger.kernel.org

On Sat, Jul 10, 2021 at 4:08 AM Muchun Song wrote:
>
> On Sat, Jul 10, 2021 at 8:36 AM Suren Baghdasaryan wrote:
> >
> > Inline mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> > functions to perform the mem_cgroup_disabled static key check inline
> > before calling the main body of the function. This minimizes the memcg
> > overhead in the pagefault and exit_mmap paths when memcgs are disabled
> > using the cgroup_disable=memory command-line option.
> > This change results in ~0.4% overhead reduction when running PFT test
> > comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> > configuration on an 8-core ARM64 Android device.
> >
> > Signed-off-by: Suren Baghdasaryan
> > Reviewed-by: Shakeel Butt
>
> Reviewed-by: Muchun Song
>
> But some nits below.
>
> > ---
> >  include/linux/memcontrol.h | 28 +++++++++++++++++++++++++---
> >  mm/memcontrol.c            | 29 ++++++++++-------------------
> >  2 files changed, 35 insertions(+), 22 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index bfe5c486f4ad..39fa88051a42 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -693,13 +693,35 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
> >                 page_counter_read(&memcg->memory);
> >  }
> >
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                       gfp_t gfp_mask);
> > +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                                   gfp_t gfp_mask)
> > +{
> > +       if (mem_cgroup_disabled())
> > +               return 0;
> > +       return __mem_cgroup_charge(page, mm, gfp_mask);
> > +}
> > +
> >  int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >                                   gfp_t gfp, swp_entry_t entry);
> >  void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
> >
> > -void mem_cgroup_uncharge(struct page *page);
> > -void mem_cgroup_uncharge_list(struct list_head *page_list);
> > +void __mem_cgroup_uncharge(struct page *page);
> > +static inline void mem_cgroup_uncharge(struct page *page)
> > +{
> > +       if (mem_cgroup_disabled())
> > +               return;
> > +       __mem_cgroup_uncharge(page);
> > +}
> > +
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list);
> > +static inline void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +{
> > +       if (mem_cgroup_disabled())
> > +               return;
> > +       __mem_cgroup_uncharge_list(page_list);
> > +}
> >
> >  void mem_cgroup_migrate(struct page *oldpage, struct page *newpage);
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index a228cd51c4bd..cdaf7003b43d 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -6701,8 +6701,7 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
> >                         atomic_long_read(&parent->memory.children_low_usage)));
> >  }
> >
> > -static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> > -                              gfp_t gfp)
> > +static int charge_memcg(struct page *page, struct mem_cgroup *memcg, gfp_t gfp)
> >  {
> >         unsigned int nr_pages = thp_nr_pages(page);
> >         int ret;
> > @@ -6723,7 +6722,7 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >  }
> >
> >  /**
> > - * mem_cgroup_charge - charge a newly allocated page to a cgroup
> > + * __mem_cgroup_charge - charge a newly allocated page to a cgroup
> >  * @page: page to charge
> >  * @mm: mm context of the victim
> >  * @gfp_mask: reclaim mode
> > @@ -6736,16 +6735,14 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> >  *
> >  * Returns 0 on success. Otherwise, an error code is returned.
> >  */
> > -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
> > +int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> > +                       gfp_t gfp_mask)
> >  {
> >         struct mem_cgroup *memcg;
> >         int ret;
> >
> > -       if (mem_cgroup_disabled())
> > -               return 0;
> > -
> >         memcg = get_mem_cgroup_from_mm(mm);
> > -       ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> > +       ret = charge_memcg(page, memcg, gfp_mask);
> >         css_put(&memcg->css);
> >
> >         return ret;
> > @@ -6780,7 +6777,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
> >         memcg = get_mem_cgroup_from_mm(mm);
> >         rcu_read_unlock();
> >
> > -       ret = __mem_cgroup_charge(page, memcg, gfp);
> > +       ret = charge_memcg(page, memcg, gfp);
> >
> >         css_put(&memcg->css);
> >         return ret;
> > @@ -6916,18 +6913,15 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge - uncharge a page
> > + * __mem_cgroup_uncharge - uncharge a page
> >  * @page: page to uncharge
> >  *
> >  * Uncharge a page previously charged with mem_cgroup_charge().
>
> The comment here also needs to be updated.
>
> mem_cgroup_uncharge() -> __mem_cgroup_uncharge()
>
> >  */
> > -void mem_cgroup_uncharge(struct page *page)
> > +void __mem_cgroup_uncharge(struct page *page)
> >  {
> >         struct uncharge_gather ug;
> >
> > -       if (mem_cgroup_disabled())
> > -               return;
> > -
> >         /* Don't touch page->lru of any random page, pre-check: */
> >         if (!page_memcg(page))
> >                 return;
> > @@ -6938,20 +6932,17 @@ void mem_cgroup_uncharge(struct page *page)
> >  }
> >
> >  /**
> > - * mem_cgroup_uncharge_list - uncharge a list of page
> > + * __mem_cgroup_uncharge_list - uncharge a list of page
> >  * @page_list: list of pages to uncharge
> >  *
> >  * Uncharge a list of pages previously charged with
> >  * mem_cgroup_charge().
>
> Should be __mem_cgroup_charge().

Fixed and posted in v4 at:
https://lore.kernel.org/patchwork/project/lkml/list/?series=507943
Thanks!

>
> Thanks.
>
> >  */
> > -void mem_cgroup_uncharge_list(struct list_head *page_list)
> > +void __mem_cgroup_uncharge_list(struct list_head *page_list)
> >  {
> >         struct uncharge_gather ug;
> >         struct page *page;
> >
> > -       if (mem_cgroup_disabled())
> > -               return;
> > -
> >         uncharge_gather_clear(&ug);
> >         list_for_each_entry(page, page_list, lru)
> >                 uncharge_page(page, &ug);
> > --
> > 2.32.0.93.g670b81a890-goog
> >
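The pattern under review: the cheap mem_cgroup_disabled() static-key test moves
out of the out-of-line .c bodies and into static inline wrappers in the header,
so a kernel booted with cgroup_disable=memory pays only an inlined branch
instead of a full function call on the page-fault and exit_mmap paths. Below is
a minimal user-space sketch of that same wrapper pattern; the names
feature_disabled and __feature_charge are hypothetical stand-ins for the kernel
API, and a plain bool stands in for the kernel's static key.

    /*
     * Illustrative sketch only -- not the kernel implementation.
     * The disabled check lives in an inline wrapper, so the
     * out-of-line body is never called on the disabled fast path.
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool feature_disabled = true;    /* stands in for mem_cgroup_disabled() */

    /* Out-of-line slow path: only reached when the feature is enabled. */
    static int __feature_charge(int page_id)
    {
            printf("charging page %d\n", page_id);
            return 0;
    }

    /* Inline fast path: when disabled, this compiles down to a branch. */
    static inline int feature_charge(int page_id)
    {
            if (feature_disabled)
                    return 0;               /* nothing to do, skip the call */
            return __feature_charge(page_id);
    }

    int main(void)
    {
            feature_charge(1);              /* no-op while disabled */
            feature_disabled = false;
            feature_charge(2);              /* now reaches the real body */
            return 0;
    }

Because the wrapper is static inline in the header, the compiler folds the
disabled check into every call site, while the double-underscore function keeps
the heavyweight body out of line, matching the split the patch makes between
mem_cgroup_charge() and __mem_cgroup_charge().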