From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton
Cc: Alex Shi, Joonsoo Kim, Shakeel Butt, Hugh Dickins, Michal Hocko,
	"Kirill A. Shutemov", Roman Gushchin, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 13/19] mm: memcontrol: drop unused try/commit/cancel charge API
Date: Fri, 8 May 2020 14:31:00 -0400
Message-Id: <20200508183105.225460-14-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200508183105.225460-1-hannes@cmpxchg.org>
References: <20200508183105.225460-1-hannes@cmpxchg.org>

There are no more users. RIP in peace.

Signed-off-by: Johannes Weiner
Reviewed-by: Joonsoo Kim
---
 include/linux/memcontrol.h |  36 ----------
 mm/memcontrol.c            | 126 +++++-------------------------------
 2 files changed, 15 insertions(+), 147 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 9b1054bf6d35..23608d3ee70f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -369,14 +369,6 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }
 
-int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
-			  gfp_t gfp_mask, struct mem_cgroup **memcgp);
-int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
-			  gfp_t gfp_mask, struct mem_cgroup **memcgp);
-void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
-			      bool lrucare);
-void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg);
-
 int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
 		      gfp_t gfp_mask, bool lrucare);
 
@@ -867,34 +859,6 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 	return false;
 }
 
-static inline int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
-					gfp_t gfp_mask,
-					struct mem_cgroup **memcgp)
-{
-	*memcgp = NULL;
-	return 0;
-}
-
-static inline int mem_cgroup_try_charge_delay(struct page *page,
-					      struct mm_struct *mm,
-					      gfp_t gfp_mask,
-					      struct mem_cgroup **memcgp)
-{
-	*memcgp = NULL;
-	return 0;
-}
-
-static inline void mem_cgroup_commit_charge(struct page *page,
-					    struct mem_cgroup *memcg,
-					    bool lrucare)
-{
-}
-
-static inline void mem_cgroup_cancel_charge(struct page *page,
-					    struct mem_cgroup *memcg)
-{
-}
-
 static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
 				    gfp_t gfp_mask, bool lrucare)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fd92c1c99e1f..7b9bb7ca0b44 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6432,29 +6432,26 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 }
 
 /**
- * mem_cgroup_try_charge - try charging a page
+ * mem_cgroup_charge - charge a newly allocated page to a cgroup
  * @page: page to charge
  * @mm: mm context of the victim
  * @gfp_mask: reclaim mode
- * @memcgp: charged memcg return
+ * @lrucare: page might be on the LRU already
  *
  * Try to charge @page to the memcg that @mm belongs to, reclaiming
  * pages according to @gfp_mask if necessary.
  *
- * Returns 0 on success, with *@memcgp pointing to the charged memcg.
- * Otherwise, an error code is returned.
- *
- * After page->mapping has been set up, the caller must finalize the
- * charge with mem_cgroup_commit_charge(). Or abort the transaction
- * with mem_cgroup_cancel_charge() in case page instantiation fails.
+ * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
-			  gfp_t gfp_mask, struct mem_cgroup **memcgp)
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
+		      bool lrucare)
 {
 	unsigned int nr_pages = hpage_nr_pages(page);
 	struct mem_cgroup *memcg = NULL;
 	int ret = 0;
 
+	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
+
 	if (mem_cgroup_disabled())
 		goto out;
 
@@ -6486,56 +6483,8 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 		memcg = get_mem_cgroup_from_mm(mm);
 
 	ret = try_charge(memcg, gfp_mask, nr_pages);
-
-	css_put(&memcg->css);
-out:
-	*memcgp = memcg;
-	return ret;
-}
-
-int mem_cgroup_try_charge_delay(struct page *page, struct mm_struct *mm,
-			  gfp_t gfp_mask, struct mem_cgroup **memcgp)
-{
-	int ret;
-
-	ret = mem_cgroup_try_charge(page, mm, gfp_mask, memcgp);
-	if (*memcgp)
-		cgroup_throttle_swaprate(page, gfp_mask);
-	return ret;
-}
-
-/**
- * mem_cgroup_commit_charge - commit a page charge
- * @page: page to charge
- * @memcg: memcg to charge the page to
- * @lrucare: page might be on LRU already
- *
- * Finalize a charge transaction started by mem_cgroup_try_charge(),
- * after page->mapping has been set up. This must happen atomically
- * as part of the page instantiation, i.e. under the page table lock
- * for anonymous pages, under the page lock for page and swap cache.
- *
- * In addition, the page must not be on the LRU during the commit, to
- * prevent racing with task migration. If it might be, use @lrucare.
- *
- * Use mem_cgroup_cancel_charge() to cancel the transaction instead.
- */
-void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
-			      bool lrucare)
-{
-	unsigned int nr_pages = hpage_nr_pages(page);
-
-	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
-
-	if (mem_cgroup_disabled())
-		return;
-	/*
-	 * Swap faults will attempt to charge the same page multiple
-	 * times. But reuse_swap_page() might have removed the page
-	 * from swapcache already, so we can't check PageSwapCache().
-	 */
-	if (!memcg)
-		return;
+	if (ret)
+		goto out_put;
 
 	commit_charge(page, memcg, lrucare);
 
@@ -6553,55 +6502,11 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 	 */
 		mem_cgroup_uncharge_swap(entry, nr_pages);
 	}
-}
 
-/**
- * mem_cgroup_cancel_charge - cancel a page charge
- * @page: page to charge
- * @memcg: memcg to charge the page to
- *
- * Cancel a charge transaction started by mem_cgroup_try_charge().
- */
-void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
-{
-	unsigned int nr_pages = hpage_nr_pages(page);
-
-	if (mem_cgroup_disabled())
-		return;
-	/*
-	 * Swap faults will attempt to charge the same page multiple
-	 * times. But reuse_swap_page() might have removed the page
-	 * from swapcache already, so we can't check PageSwapCache().
-	 */
-	if (!memcg)
-		return;
-
-	cancel_charge(memcg, nr_pages);
-}
-
-/**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
- * @lrucare: page might be on the LRU already
- *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary.
- *
- * Returns 0 on success. Otherwise, an error code is returned.
- */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
-		      bool lrucare)
-{
-	struct mem_cgroup *memcg;
-	int ret;
-
-	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
-	if (ret)
-		return ret;
-	mem_cgroup_commit_charge(page, memcg, lrucare);
-	return 0;
+out_put:
+	css_put(&memcg->css);
+out:
+	return ret;
 }
 
 struct uncharge_gather {
@@ -6706,8 +6611,7 @@ static void uncharge_list(struct list_head *page_list)
  * mem_cgroup_uncharge - uncharge a page
  * @page: page to uncharge
  *
- * Uncharge a page previously charged with mem_cgroup_try_charge() and
- * mem_cgroup_commit_charge().
+ * Uncharge a page previously charged with mem_cgroup_charge().
  */
 void mem_cgroup_uncharge(struct page *page)
 {
@@ -6730,7 +6634,7 @@ void mem_cgroup_uncharge(struct page *page)
  * @page_list: list of pages to uncharge
  *
  * Uncharge a list of pages previously charged with
- * mem_cgroup_try_charge() and mem_cgroup_commit_charge().
+ * mem_cgroup_charge().
  */
 void mem_cgroup_uncharge_list(struct list_head *page_list)
 {
-- 
2.26.2