From mboxrd@z Thu Jan 1 00:00:00 1970
From: Johannes Weiner <hannes@cmpxchg.org>
To: Joonsoo Kim, Alex Shi
Cc: Shakeel Butt, Hugh Dickins, Michal Hocko, "Kirill A. Shutemov",
    Roman Gushchin, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 17/18] mm: memcontrol: delete unused lrucare handling
Date: Mon, 20 Apr 2020 18:11:25 -0400
Message-Id: <20200420221126.341272-18-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.26.0
In-Reply-To: <20200420221126.341272-1-hannes@cmpxchg.org>
References: <20200420221126.341272-1-hannes@cmpxchg.org>
MIME-Version: 1.0

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/memcontrol.h |  5 ++--
 kernel/events/uprobes.c    |  3 +-
 mm/filemap.c               |  2 +-
 mm/huge_memory.c           |  7 ++---
 mm/khugepaged.c            |  4 +--
 mm/memcontrol.c            | 57 +++-----------------------------------
 mm/memory.c                |  8 +++---
 mm/migrate.c               |  2 +-
 mm/shmem.c                 |  2 +-
 mm/swap_state.c            |  2 +-
 mm/userfaultfd.c           |  2 +-
 11 files changed, 21 insertions(+), 73 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d458f1d90aa4..4b868e5a687f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -357,8 +357,7 @@ static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg,
 enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
						struct mem_cgroup *memcg);
 
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
-		      bool lrucare);
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
 
 void mem_cgroup_uncharge(struct page *page);
 void mem_cgroup_uncharge_list(struct list_head *page_list);
@@ -839,7 +838,7 @@ static inline enum mem_cgroup_protection mem_cgroup_protected(
 }
 
 static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
-				    gfp_t gfp_mask, bool lrucare)
+				    gfp_t gfp_mask)
 {
	return 0;
 }
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 4253c153e985..eddc8db96027 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -167,8 +167,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
				addr + PAGE_SIZE);
 
	if (new_page) {
-		err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL,
-					false);
+		err = mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL);
		if (err)
			return err;
	}
diff --git a/mm/filemap.c b/mm/filemap.c
index a10bd6696049..f73b221314df 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -845,7 +845,7 @@ static int __add_to_page_cache_locked(struct page *page,
	page->index = offset;
 
	if (!huge) {
-		error = mem_cgroup_charge(page, current->mm, gfp_mask, false);
+		error = mem_cgroup_charge(page, current->mm, gfp_mask);
		if (error)
			goto error;
	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0b33eaf0740a..35a716720e26 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -593,7 +593,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 
	VM_BUG_ON_PAGE(!PageCompound(page), page);
 
-	if (mem_cgroup_charge(page, vma->vm_mm, gfp, false)) {
+	if (mem_cgroup_charge(page, vma->vm_mm, gfp)) {
		put_page(page);
		count_vm_event(THP_FAULT_FALLBACK);
		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
@@ -1276,7 +1276,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
					       vmf->address, page_to_nid(page));
		if (unlikely(!pages[i] ||
			     mem_cgroup_charge(pages[i], vma->vm_mm,
-					       GFP_KERNEL, false))) {
+					       GFP_KERNEL))) {
			if (pages[i])
				put_page(pages[i]);
			while (--i >= 0)
@@ -1430,8 +1430,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
		goto out;
	}
 
-	if (unlikely(mem_cgroup_charge(new_page, vma->vm_mm, huge_gfp,
-				       false))) {
+	if (unlikely(mem_cgroup_charge(new_page, vma->vm_mm, huge_gfp))) {
		put_page(new_page);
		split_huge_pmd(vma, vmf->pmd, vmf->address);
		if (page)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5cf8082fb038..28c6d84db4ee 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -973,7 +973,7 @@ static void collapse_huge_page(struct mm_struct *mm,
		goto out_nolock;
	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) {
+	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
		result = SCAN_CGROUP_CHARGE_FAIL;
		goto out_nolock;
	}
@@ -1527,7 +1527,7 @@ static void collapse_file(struct mm_struct *mm,
		goto out;
	}
 
-	if (unlikely(mem_cgroup_charge(new_page, mm, gfp, false))) {
+	if (unlikely(mem_cgroup_charge(new_page, mm, gfp))) {
		result = SCAN_CGROUP_CHARGE_FAIL;
		goto out;
	}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1d7408a8744a..a8cce52b6b4d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2601,51 +2601,9 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
	css_put_many(&memcg->css, nr_pages);
 }
 
-static void lock_page_lru(struct page *page, int *isolated)
+static void commit_charge(struct page *page, struct mem_cgroup *memcg)
 {
-	pg_data_t *pgdat = page_pgdat(page);
-
-	spin_lock_irq(&pgdat->lru_lock);
-	if (PageLRU(page)) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		ClearPageLRU(page);
-		del_page_from_lru_list(page, lruvec, page_lru(page));
-		*isolated = 1;
-	} else
-		*isolated = 0;
-}
-
-static void unlock_page_lru(struct page *page, int isolated)
-{
-	pg_data_t *pgdat = page_pgdat(page);
-
-	if (isolated) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		SetPageLRU(page);
-		add_page_to_lru_list(page, lruvec, page_lru(page));
-	}
-	spin_unlock_irq(&pgdat->lru_lock);
-}
-
-static void commit_charge(struct page *page, struct mem_cgroup *memcg,
-			  bool lrucare)
-{
-	int isolated;
-
	VM_BUG_ON_PAGE(page->mem_cgroup, page);
-
-	/*
-	 * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
-	 * may already be on some other mem_cgroup's LRU. Take care of it.
-	 */
-	if (lrucare)
-		lock_page_lru(page, &isolated);
-
	/*
	 * Nobody should be changing or seriously looking at
	 * page->mem_cgroup at this point:
@@ -2661,9 +2619,6 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
	 * have the page locked
	 */
	page->mem_cgroup = memcg;
-
-	if (lrucare)
-		unlock_page_lru(page, isolated);
 }
 
 #ifdef CONFIG_MEMCG_KMEM
@@ -6433,22 +6388,18 @@ enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
  * @page: page to charge
  * @mm: mm context of the victim
  * @gfp_mask: reclaim mode
- * @lrucare: page might be on the LRU already
  *
  * Try to charge @page to the memcg that @mm belongs to, reclaiming
  * pages according to @gfp_mask if necessary.
  *
  * Returns 0 on success. Otherwise, an error code is returned.
  */
-int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
-		      bool lrucare)
+int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
 {
	unsigned int nr_pages = hpage_nr_pages(page);
	struct mem_cgroup *memcg = NULL;
	int ret = 0;
 
-	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
-
	if (mem_cgroup_disabled())
		goto out;
 
@@ -6482,7 +6433,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask,
	if (ret)
		goto out_put;
 
-	commit_charge(page, memcg, lrucare);
+	commit_charge(page, memcg);
 
	local_irq_disable();
	mem_cgroup_charge_statistics(memcg, page, nr_pages);
@@ -6685,7 +6636,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
		page_counter_charge(&memcg->memsw, nr_pages);
	css_get_many(&memcg->css, nr_pages);
 
-	commit_charge(newpage, memcg, false);
+	commit_charge(newpage, memcg);
 
	local_irq_save(flags);
	mem_cgroup_charge_statistics(memcg, newpage, nr_pages);
diff --git a/mm/memory.c b/mm/memory.c
index 5d266532fc40..0ad4db56bea2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2677,7 +2677,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
		}
	}
 
-	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL))
		goto oom_free_new;
	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
 
@@ -3136,7 +3136,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
				/* Tell memcg to use swap ownership records */
				SetPageSwapCache(page);
				err = mem_cgroup_charge(page, vma->vm_mm,
-							GFP_KERNEL, false);
+							GFP_KERNEL);
				ClearPageSwapCache(page);
				if (err)
					goto out_page;
@@ -3360,7 +3360,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
	if (!page)
		goto oom;
 
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
		goto oom_free_page;
	cgroup_throttle_swaprate(page, GFP_KERNEL);
 
@@ -3856,7 +3856,7 @@ static vm_fault_t do_cow_fault(struct vm_fault *vmf)
	if (!vmf->cow_page)
		return VM_FAULT_OOM;
 
-	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL, false)) {
+	if (mem_cgroup_charge(vmf->cow_page, vma->vm_mm, GFP_KERNEL)) {
		put_page(vmf->cow_page);
		return VM_FAULT_OOM;
	}
diff --git a/mm/migrate.c b/mm/migrate.c
index a3361c744069..ced652d069ee 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2792,7 +2792,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 
	if (unlikely(anon_vma_prepare(vma)))
		goto abort;
-	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
		goto abort;
 
	/*
diff --git a/mm/shmem.c b/mm/shmem.c
index 966f150a4823..add10d448bc6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -624,7 +624,7 @@ static int shmem_add_to_page_cache(struct page *page,
	page->index = index;
 
	if (!PageSwapCache(page)) {
-		error = mem_cgroup_charge(page, charge_mm, gfp, false);
+		error = mem_cgroup_charge(page, charge_mm, gfp);
		if (error) {
			if (PageTransHuge(page)) {
				count_vm_event(THP_FILE_FALLBACK);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f3b9073bfff3..26fded65c30d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -427,7 +427,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
	if (add_to_swap_cache(page, entry, gfp_mask & GFP_KERNEL))
		goto fail_unlock;
 
-	if (mem_cgroup_charge(page, NULL, gfp_mask & GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, NULL, gfp_mask & GFP_KERNEL))
		goto fail_delete;
 
	/* Initiate read into locked page */
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 2745489415cc..7f5194046b01 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -96,7 +96,7 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
	__SetPageUptodate(page);
 
	ret = -ENOMEM;
-	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL, false))
+	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
		goto out_release;
 
	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));
-- 
2.26.0
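
For callers, the conversion is mechanical: the lrucare argument is gone,
and mem_cgroup_charge() now expects a page that is not yet on any LRU
list (the SwapCache/FUSE case named in the deleted comment no longer
reaches commit_charge()). A minimal sketch of the resulting call
pattern -- example_charge() below is a hypothetical caller invented for
illustration, not code from this series -- charging a freshly allocated
page before it becomes visible anywhere:

#include <linux/gfp.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>

/*
 * Hypothetical caller of the post-patch API, for illustration only.
 * Assumes a freshly allocated page: not mapped, not in the page
 * cache, and not on any LRU list.
 */
static int example_charge(struct mm_struct *mm)
{
	struct page *page;
	int err;

	page = alloc_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* May enter reclaim per GFP_KERNEL; no lrucare flag anymore. */
	err = mem_cgroup_charge(page, mm, GFP_KERNEL);
	if (err) {
		put_page(page);
		return err;
	}

	/* ... install the page, then put it on the LRU ... */
	return 0;
}

With the lrucare path gone, commit_charge() is reduced to the
VM_BUG_ON_PAGE() sanity check and the page->mem_cgroup store; the
pgdat->lru_lock isolate/re-add roundtrip disappears from the charge
path entirely.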