From mboxrd@z Thu Jan 1 00:00:00 1970
From: Johannes Weiner <hannes@cmpxchg.org>
To: Joonsoo Kim, Alex Shi
Cc: Shakeel Butt, Hugh Dickins, Michal Hocko, "Kirill A. Shutemov",
	Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 13/18] mm: memcontrol: drop unused try/commit/cancel charge API
Date: Mon, 20 Apr 2020 18:11:21 -0400
Message-Id: <20200420221126.341272-14-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200420221126.341272-1-hannes@cmpxchg.org>
References: <20200420221126.341272-1-hannes@cmpxchg.org>
MIME-Version: 1.0

There are no more users of the try/commit/cancel charge API; all callers
have been converted to mem_cgroup_charge(). RIP.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
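A note for readers of this patch in isolation: the conversion that made the
old API unused follows the pattern sketched here. The caller and its
setup_page_mapping() helper are hypothetical and only contrast the two
calling conventions; the memcg signatures are the ones removed and kept by
this patch.

#include <linux/memcontrol.h>
#include <linux/mm.h>

/* Hypothetical instantiation step that sets up page->mapping. */
static int setup_page_mapping(struct page *page);

/*
 * Old calling convention, per the kernel-doc removed below: open the
 * charge with try_charge(), then either commit it once page->mapping is
 * set up, or cancel it if instantiation fails.
 */
static int instantiate_page_old(struct page *page, struct mm_struct *mm,
				gfp_t gfp_mask)
{
	struct mem_cgroup *memcg;
	int ret;

	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
	if (ret)
		return ret;

	ret = setup_page_mapping(page);		/* hypothetical step */
	if (ret) {
		mem_cgroup_cancel_charge(page, memcg);
		return ret;
	}

	mem_cgroup_commit_charge(page, memcg, false);
	return 0;
}

/*
 * New calling convention: a single call, no transaction state in the
 * caller. If instantiation fails afterwards, the page is simply freed
 * and the charge goes away with it via mem_cgroup_uncharge().
 */
static int instantiate_page_new(struct page *page, struct mm_struct *mm,
				gfp_t gfp_mask)
{
	int ret;

	ret = mem_cgroup_charge(page, mm, gfp_mask, false);
	if (ret)
		return ret;

	return setup_page_mapping(page);	/* hypothetical step */
}

The cancel step can disappear because mem_cgroup_charge() commits the
charge itself, and the uncharge side is already tied to page freeing.
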
 include/linux/memcontrol.h |  36 -----------
 mm/memcontrol.c            | 126 +++++-------------------------------
 2 files changed, 15 insertions(+), 147 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 9ac8122ec1cd..52eb6411cfee 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -357,14 +357,6 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
 enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 						struct mem_cgroup *memcg);
 
-int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
-			  gfp_t gfp_mask, struct mem_cgroup **memcgp);
-int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
-				gfp_t gfp_mask, struct mem_cgroup **memcgp);
-void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
-			      bool lrucare);
-void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
-
 int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
 		      bool lrucare);
 
@@ -846,34 +838,6 @@ static inline enum mem_cgroup_protection mem_cgroup_protected(
 	return MEMCG_PROT_NONE;
 }
 
-static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
-					gfp_t gfp_mask,
-					struct mem_cgroup **memcgp)
-{
-	*memcgp = NULL;
-	return 0;
-}
-
-static inline int mem_cgroup_try_charge_delay(struct page *page,
-					      struct mm_struct *mm,
-					      gfp_t gfp_mask,
-					      struct mem_cgroup **memcgp)
-{
-	*memcgp = NULL;
-	return 0;
-}
-
-static inline void mem_cgroup_commit_charge(struct page *page,
-					    struct mem_cgroup *memcg,
-					    bool lrucare)
-{
-}
-
-static inline void mem_cgroup_cancel_charge(struct page *page,
-					    struct mem_cgroup *memcg)
-{
-}
-
 static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
 				    gfp_t gfp_mask, bool lrucare)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7845a87b94d5..d5aee5577ff3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6431,29 +6431,26 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 }
 
 /**
- * mem_cgroup_try_charge - try charging a page
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
  * @page: page to charge
  * @mm: mm context of the victim
  * @gfp_mask: reclaim mode
- * @memcgp: charged memcg return
+ * @lrucare: page might be on the LRU already
  *
  * Try to charge @page to the memcg that @mm belongs to, reclaiming
  * pages according to @gfp_mask if necessary.
  *
- * Returns 0 on success, with *@memcgp pointing to the charged memcg.
- * Otherwise, an error code is returned.
- *
- * After page->mapping has been set up, the caller must finalize the
- * charge with mem_cgroup_commit_charge(). Or abort the transaction
- * with mem_cgroup_cancel_charge() in case page instantiation fails.
+ * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
-			  gfp_t gfp_mask, struct mem_cgroup **memcgp)
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare)
 {
 	unsigned int nr_pages = hpage_nr_pages(page);
 	struct mem_cgroup *memcg = NULL;
 	int ret = 0;
 
+	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
+
 	if (mem_cgroup_disabled())
 		goto out;
 
@@ -6485,56 +6482,8 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 	memcg = get_mem_cgroup_from_mm(mm);
 
 	ret = try_charge(memcg, gfp_mask, nr_pages);
-
-	css_put(&memcg->css);
-out:
-	*memcgp = memcg;
-	return ret;
-}
-
-int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
-				gfp_t gfp_mask, struct mem_cgroup **memcgp)
-{
-	int ret;
-
-	ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp);
-	if (*memcgp)
-		cgroup_throttle_swaprate(page, gfp_mask);
-	return ret;
-}
-
-/**
- * mem_cgroup_commit_charge - commit a page charge
- * @page: page to charge
- * @memcg: memcg to charge the page to
- * @lrucare: page might be on LRU already
- *
- * Finalize a charge transaction started by mem_cgroup_try_charge(),
- * after page->mapping has been set up. This must happen atomically
- * as part of the page instantiation, i.e. under the page table lock
- * for anonymous pages, under the page lock for page and swap cache.
- *
- * In addition, the page must not be on the LRU during the commit, to
- * prevent racing with task migration. If it might be, use @lrucare.
- *
- * Use mem_cgroup_cancel_charge() to cancel the transaction instead.
- */
-void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
-			      bool lrucare)
-{
-	unsigned int nr_pages = hpage_nr_pages(page);
-
-	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
-
-	if (mem_cgroup_disabled())
-		return;
-	/*
-	 * Swap faults will attempt to charge the same page multiple
-	 * times. But reuse_swap_page() might have removed the page
-	 * from swapcache already, so we can't check PageSwapCache().
-	 */
-	if (!memcg)
-		return;
+	if (ret)
+		goto out_put;
 
 	commit_charge(page, memcg, lrucare);
 
@@ -6552,55 +6501,11 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 		 */
 		mem_cgroup_uncharge_swap(entry, nr_pages);
 	}
-}
 
-/**
- * mem_cgroup_cancel_charge - cancel a page charge
- * @page: page to charge
- * @memcg: memcg to charge the page to
- *
- * Cancel a charge transaction started by mem_cgroup_try_charge().
- */
-void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
-{
-	unsigned int nr_pages = hpage_nr_pages(page);
-
-	if (mem_cgroup_disabled())
-		return;
-	/*
-	 * Swap faults will attempt to charge the same page multiple
-	 * times. But reuse_swap_page() might have removed the page
-	 * from swapcache already, so we can't check PageSwapCache().
-	 */
-	if (!memcg)
-		return;
-
-	cancel_charge(memcg, nr_pages);
-}
-
-/**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
- * @lrucare: page might be on the LRU already
- *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
- *
- * Returns 0 on success. Otherwise, an error code is returned.
- */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
-		      bool lrucare)
-{
-	struct mem_cgroup *memcg;
-	int ret;
-
-	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
-	if (ret)
-		return ret;
-	mem_cgroup_commit_charge(page, memcg, lrucare);
-	return 0;
+out_put:
+	css_put(&memcg->css);
+out:
+	return ret;
 }
 
 struct uncharge_gather {
@@ -6707,8 +6612,7 @@ static void uncharge_list(struct list_head *page_list)
  * mem_cgroup_uncharge - uncharge a page
  * @page: page to uncharge
  *
- * Uncharge a page previously charged with mem_cgroup_try_charge() and
- * mem_cgroup_commit_charge().
+ * Uncharge a page previously charged with mem_cgroup_charge().
  */
 void mem_cgroup_uncharge(struct page *page)
 {
@@ -6731,7 +6635,7 @@ void mem_cgroup_uncharge(struct page *page)
  * @page_list: list of pages to uncharge
  *
  * Uncharge a list of pages previously charged with
- * mem_cgroup_try_charge() and mem_cgroup_commit_charge().
+ * mem_cgroup_charge().
  */
 void mem_cgroup_uncharge_list(struct list_head *page_list)
 {
-- 
2.26.0