From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v13 040/137] mm/memcg: Convert mem_cgroup_charge() to take a folio
Date: Mon, 12 Jul 2021 04:05:24 +0100
Message-Id: <20210712030701.4000097-41-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210712030701.4000097-1-willy@infradead.org>
References: <20210712030701.4000097-1-willy@infradead.org>
MIME-Version: 1.0

Convert all callers of mem_cgroup_charge() to call page_folio() on the
page they're currently passing in.  Many of them will be converted to
use folios themselves soon.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
---
 include/linux/memcontrol.h |  6 +++---
 kernel/events/uprobes.c    |  3 ++-
 mm/filemap.c               |  2 +-
 mm/huge_memory.c           |  2 +-
 mm/khugepaged.c            |  4 ++--
 mm/ksm.c                   |  3 ++-
 mm/memcontrol.c            | 26 +++++++++++++-------------
 mm/memory.c                |  9 +++++----
 mm/migrate.c               |  2 +-
 mm/shmem.c                 |  2 +-
 mm/userfaultfd.c           |  2 +-
 11 files changed, 32 insertions(+), 29 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 044d0b87586f..ce250303d3a5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -704,7 +704,7 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 		page_counter_read(&memcg->memory);
 }
 
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
+int mem_cgroup_charge(struct folio *, struct mm_struct *, gfp_t);
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry);
 void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
@@ -1185,8 +1185,8 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
 	return false;
 }
 
-static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
-				    gfp_t gfp_mask)
+static inline int mem_cgroup_charge(struct folio *folio,
+		struct mm_struct *mm, gfp_t gfp)
 {
 	return 0;
 }
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index af24dc3febbe..6357c3580d07 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -167,7 +167,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
 				addr + PAGE_SIZE);
 
 	if (new_page) {
-		err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL);
+		err = mem_cgroup_charge(page_folio(new_page), vma->vm_mm,
+					GFP_KERNEL);
 		if (err)
 			return err;
 	}
diff --git a/mm/filemap.c b/mm/filemap.c
index 8e6c69db5559..44498bfe7b45 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -872,7 +872,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 	page->index = offset;
 
 	if (!huge) {
-		error = mem_cgroup_charge(page, NULL, gfp);
+		error = mem_cgroup_charge(page_folio(page), NULL, gfp);
 		if (error)
 			goto error;
 		charged = true;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index afff3ac87067..ecb1fb1f5f3e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -603,7 +603,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
 
-	if (mem_cgroup_charge(page, vma->vm_mm, gfp)) {
+	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, gfp)) {
 		put_page(page);
 		count_vm_event(THP_FAULT_FALLBACK);
 		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b0412be08fa2..8f6d7fdea9f4 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1087,7 +1087,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 		goto out_nolock;
 	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
+	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out_nolock;
 	}
@@ -1658,7 +1658,7 @@ static void collapse_file(struct mm_struct *mm,
 		goto out;
 	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
+	if (unlikely(mem_cgroup_charge(page_folio(new_page), mm, gfp))) {
 		result = SCAN_CGROUP_CHARGE_FAIL;
 		goto out;
 	}
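
Every call site in this patch changes the same way: the struct page
pointer is wrapped in page_folio() and the surrounding logic is left
alone.  As a sketch (generic names, not taken from any one hunk), the
before/after shape is:

	/* Before: the memcg charge API was addressed in terms of pages. */
	if (mem_cgroup_charge(page, mm, GFP_KERNEL))
		goto err;

	/* After: charge the folio containing the page.  For an order-0
	 * page this is effectively a type conversion; for a tail page,
	 * page_folio() resolves to the folio of the compound head. */
	if (mem_cgroup_charge(page_folio(page), mm, GFP_KERNEL))
		goto err;
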
diff --git a/mm/ksm.c b/mm/ksm.c
index 3fa9bc8a67cf..23d36b59f997 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2580,7 +2580,8 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		return page;		/* let do_swap_page report the error */
 
 	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
-	if (new_page && mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL)) {
+	if (new_page &&
+	    mem_cgroup_charge(page_folio(new_page), vma->vm_mm, GFP_KERNEL)) {
 		put_page(new_page);
 		new_page = NULL;
 	}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index f64869c0e06e..ebad42c55f76 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6681,10 +6681,9 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 			atomic_long_read(&parent->memory.children_low_usage)));
 }
 
-static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
+static int __mem_cgroup_charge(struct folio *folio, struct mem_cgroup *memcg,
 			       gfp_t gfp)
 {
-	struct folio *folio = page_folio(page);
 	unsigned int nr_pages = folio_nr_pages(folio);
 	int ret;
 
@@ -6697,27 +6696,27 @@ static int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
-	memcg_check_events(memcg, page_to_nid(page));
+	memcg_check_events(memcg, folio_nid(folio));
 	local_irq_enable();
 out:
 	return ret;
 }
 
 /**
- * mem_cgroup_charge - charge a newly allocated page to a cgroup
- * @page: page to charge
- * @mm: mm context of the victim
- * @gfp_mask: reclaim mode
+ * mem_cgroup_charge - Charge a newly allocated folio to a cgroup.
+ * @folio: Folio to charge.
+ * @mm: mm context of the allocating task.
+ * @gfp: reclaim mode
  *
- * Try to charge @page to the memcg that @mm belongs to, reclaiming
- * pages according to @gfp_mask if necessary. if @mm is NULL, try to
+ * Try to charge @folio to the memcg that @mm belongs to, reclaiming
+ * pages according to @gfp if necessary.  If @mm is NULL, try to
  * charge to the active memcg.
  *
- * Do not use this for pages allocated for swapin.
+ * Do not use this for folios allocated for swapin.
  *
 * Returns 0 on success. Otherwise, an error code is returned.
 */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
+int mem_cgroup_charge(struct folio *folio, struct mm_struct *mm, gfp_t gfp)
 {
 	struct mem_cgroup *memcg;
 	int ret;
 
@@ -6726,7 +6725,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 		return 0;
 
 	memcg = get_mem_cgroup_from_mm(mm);
-	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
+	ret = __mem_cgroup_charge(folio, memcg, gfp);
 	css_put(&memcg->css);
 
 	return ret;
@@ -6747,6 +6746,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 				  gfp_t gfp, swp_entry_t entry)
 {
+	struct folio *folio = page_folio(page);
 	struct mem_cgroup *memcg;
 	unsigned short id;
 	int ret;
@@ -6761,7 +6761,7 @@ int mem_cgroup_swapin_charge_page(struct page *page, struct mm_struct *mm,
 		memcg = get_mem_cgroup_from_mm(mm);
 	rcu_read_unlock();
 
-	ret = __mem_cgroup_charge(page, memcg, gfp);
+	ret = __mem_cgroup_charge(folio, memcg, gfp);
 
 	css_put(&memcg->css);
 	return ret;
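
With __mem_cgroup_charge() taking the folio directly, a single call
charges all folio_nr_pages(folio) pages, and the event check uses
folio_nid(folio) rather than a per-page node lookup.  A minimal sketch
of what that buys a caller (hypothetical code, not from this patch;
every caller converted here still bridges through page_folio()):

	/* Allocate an order-2 compound page; its folio spans 4 pages. */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);

	if (!page)
		return -ENOMEM;
	/* One call charges all four pages of the folio to the memcg. */
	if (mem_cgroup_charge(page_folio(page), current->mm, GFP_KERNEL)) {
		put_page(page);
		return -ENOMEM;
	}
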
diff --git a/mm/memory.c b/mm/memory.c
index 2f111f9b3dbc..614418e26e2c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -990,7 +990,7 @@ page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma,
 	if (!new_page)
 		return NULL;
 
-	if (mem_cgroup_charge(new_page, src_mm, GFP_KERNEL)) {
+	if (mem_cgroup_charge(page_folio(new_page), src_mm, GFP_KERNEL)) {
 		put_page(new_page);
 		return NULL;
 	}
@@ -3019,7 +3019,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 		}
 	}
 
-	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL))
 		goto oom_free_new;
 	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
 
@@ -3768,7 +3768,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 	if (!page)
 		goto oom;
 
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, GFP_KERNEL))
 		goto oom_free_page;
 	cgroup_throttle_swaprate(page, GFP_KERNEL);
 
@@ -4183,7 +4183,8 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
 	if (!vmf->cow_page)
 		return VM_FAULT_OOM;
 
-	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL)) {
+	if (mem_cgroup_charge(page_folio(vmf->cow_page), vma->vm_mm,
+			      GFP_KERNEL)) {
 		put_page(vmf->cow_page);
 		return VM_FAULT_OOM;
 	}
diff --git a/mm/migrate.c b/mm/migrate.c
index 23cbd9de030b..01c05d7f9d6a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2811,7 +2811,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 
 	if (unlikely(anon_vma_prepare(vma)))
 		goto abort;
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, GFP_KERNEL))
 		goto abort;
 
 	/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 70d9ce294bb4..3931fed5c8d8 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -685,7 +685,7 @@ static int shmem_add_to_page_cache(struct page *page,
 	page->index = index;
 
 	if (!PageSwapCache(page)) {
-		error = mem_cgroup_charge(page, charge_mm, gfp);
+		error = mem_cgroup_charge(page_folio(page), charge_mm, gfp);
 		if (error) {
 			if (PageTransHuge(page)) {
 				count_vm_event(THP_FILE_FALLBACK);
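
Because struct folio is a distinct type from struct page, the new
prototype turns any call site missed by this conversion into a build
failure rather than a runtime surprise.  A sketch of a hypothetical
leftover caller:

	/* error: passing 'struct page *' where 'struct folio *' is
	 * expected -- the mismatch is caught at compile time. */
	mem_cgroup_charge(page, mm, GFP_KERNEL);

	/* The fix is the same one-line bridge used throughout: */
	mem_cgroup_charge(page_folio(page), mm, GFP_KERNEL);
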
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 0e2132834bc7..5d0f55f3c0ed 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -164,7 +164,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	__SetPageUptodate(page);
 
 	ret = -ENOMEM;
-	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
+	if (mem_cgroup_charge(page_folio(page), dst_mm, GFP_KERNEL))
 		goto out_release;
 
 	ret = mfill_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-- 
2.30.2