From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: [PATCH v13 048/137] mm/memcg: Add folio_lruvec_lock() and similar functions
Date: Mon, 12 Jul 2021 04:05:32 +0100
Message-Id: <20210712030701.4000097-49-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210712030701.4000097-1-willy@infradead.org>
References: <20210712030701.4000097-1-willy@infradead.org>
MIME-Version: 1.0

These are the folio equivalents of lock_page_lruvec() and similar
functions.  Also convert lruvec_memcg_debug() to take a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/memcontrol.h | 29 ++++++++++++-----------
 mm/compaction.c            |  2 +-
 mm/huge_memory.c           |  5 ++--
 mm/memcontrol.c            | 48 ++++++++++++++++----------------
 mm/rmap.c                  |  2 +-
 mm/swap.c                  |  8 ++++---
 mm/vmscan.c                |  3 ++-
 7 files changed, 48 insertions(+), 49 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index da878d24b0e3..fae246c4b5bf 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -768,15 +768,16 @@ struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
 
-struct lruvec *lock_page_lruvec(struct page *page);
-struct lruvec *lock_page_lruvec_irq(struct page *page);
-struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+struct lruvec *folio_lruvec_lock(struct folio *folio);
+struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
+struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 						unsigned long *flags);
 
 #ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page);
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
 #else
-static inline void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
+static inline
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
 {
 }
 #endif
@@ -1256,26 +1257,26 @@ static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 }
 
-static inline struct lruvec *lock_page_lruvec(struct page *page)
+static inline struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);
 
 	spin_lock(&pgdat->__lruvec.lru_lock);
 	return &pgdat->__lruvec;
 }
 
-static inline struct lruvec *lock_page_lruvec_irq(struct page *page)
+static inline struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);
 
 	spin_lock_irq(&pgdat->__lruvec.lru_lock);
 	return &pgdat->__lruvec;
 }
 
-static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+static inline struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 		unsigned long *flagsp)
 {
-	struct pglist_data *pgdat = page_pgdat(page);
+	struct pglist_data *pgdat = folio_pgdat(folio);
 
 	spin_lock_irqsave(&pgdat->__lruvec.lru_lock, *flagsp);
 	return &pgdat->__lruvec;
@@ -1532,6 +1533,7 @@ static inline bool page_matches_lruvec(struct page *page, struct lruvec *lruvec)
 static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 		struct lruvec *locked_lruvec)
 {
+	struct folio *folio = page_folio(page);
 	if (locked_lruvec) {
 		if (page_matches_lruvec(page, locked_lruvec))
 			return locked_lruvec;
@@ -1539,13 +1541,14 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
 		unlock_page_lruvec_irq(locked_lruvec);
 	}
 
-	return lock_page_lruvec_irq(page);
+	return folio_lruvec_lock_irq(folio);
 }
 
 /* Don't lock again iff page's lruvec locked */
 static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 		struct lruvec *locked_lruvec, unsigned long *flags)
 {
+	struct folio *folio = page_folio(page);
 	if (locked_lruvec) {
 		if (page_matches_lruvec(page, locked_lruvec))
 			return locked_lruvec;
@@ -1553,7 +1556,7 @@ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
 		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
 	}
 
-	return lock_page_lruvec_irqsave(page, flags);
+	return folio_lruvec_lock_irqsave(folio, flags);
 }
 
 #ifdef CONFIG_CGROUP_WRITEBACK
diff --git a/mm/compaction.c b/mm/compaction.c
index a88f7b893f80..6f77577be248 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1038,7 +1038,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page);
+			lruvec_memcg_debug(lruvec, page_folio(page));
 
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ecb1fb1f5f3e..763bf687ca92 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2431,7 +2431,8 @@ static void __split_huge_page_tail(struct page *head, int tail,
 static void __split_huge_page(struct page *page, struct list_head *list,
 		pgoff_t end)
 {
-	struct page *head = compound_head(page);
+	struct folio *folio = page_folio(page);
+	struct page *head = &folio->page;
 	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
@@ -2450,7 +2451,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = lock_page_lruvec(head);
+	lruvec = folio_lruvec_lock(folio);
 
 	for (i = nr - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3152a0e1ba6f..08add9e110ee 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1158,67 +1158,59 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 }
 
 #ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct page *page)
+void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
 {
 	struct mem_cgroup *memcg;
 
 	if (mem_cgroup_disabled())
 		return;
 
-	memcg = page_memcg(page);
+	memcg = folio_memcg(folio);
 
 	if (!memcg)
-		VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != root_mem_cgroup, page);
+		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
 	else
-		VM_BUG_ON_PAGE(lruvec_memcg(lruvec) != memcg, page);
+		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
 }
 #endif
 
 /**
- * lock_page_lruvec - lock and return lruvec for a given page.
- * @page: the page
+ * folio_lruvec_lock - lock and return lruvec for a given folio.
+ * @folio: Pointer to the folio.
  *
  * These functions are safe to use under any of the following conditions:
- * - page locked
- * - PageLRU cleared
- * - lock_page_memcg()
- * - page->_refcount is zero
+ * - folio locked
+ * - folio_lru cleared
+ * - folio_memcg_lock()
+ * - folio frozen (refcount of 0)
  */
-struct lruvec *lock_page_lruvec(struct page *page)
+struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 
-	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);
 
 	return lruvec;
 }
 
-struct lruvec *lock_page_lruvec_irq(struct page *page)
+struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct folio *folio = page_folio(page);
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 
-	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);
 
 	return lruvec;
 }
 
-struct lruvec *lock_page_lruvec_irqsave(struct page *page, unsigned long *flags)
+struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
+		unsigned long *flags)
 {
-	struct folio *folio = page_folio(page);
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = folio_lruvec(folio);
 
-	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
-
-	lruvec_memcg_debug(lruvec, page);
+	lruvec_memcg_debug(lruvec, folio);
 
 	return lruvec;
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index 795f9d5f8386..b416af486812 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -33,7 +33,7 @@
  *           mapping->private_lock (in __set_page_dirty_buffers)
  *             lock_page_memcg move_lock (in __set_page_dirty_buffers)
  *               i_pages lock (widely used)
- *                 lruvec->lru_lock (in lock_page_lruvec_irq)
+ *                 lruvec->lru_lock (in folio_lruvec_lock_irq)
  *           inode->i_lock (in set_page_dirty's __mark_inode_dirty)
  *             bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
  *               sb_lock (within inode_lock in fs/fs-writeback.c)
diff --git a/mm/swap.c b/mm/swap.c
index d5136cac4267..a82812caf409 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -80,10 +80,11 @@ static DEFINE_PER_CPU(struct lru_pvecs, lru_pvecs) = {
 static void __page_cache_release(struct page *page)
 {
 	if (PageLRU(page)) {
+		struct folio *folio = page_folio(page);
 		struct lruvec *lruvec;
 		unsigned long flags;
 
-		lruvec = lock_page_lruvec_irqsave(page, &flags);
+		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
 		del_page_from_lru_list(page, lruvec);
 		__clear_page_lru_flags(page);
 		unlock_page_lruvec_irqrestore(lruvec, flags);
@@ -372,11 +373,12 @@ static inline void activate_page_drain(int cpu)
 
 static void activate_page(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	struct lruvec *lruvec;
 
-	page = compound_head(page);
+	page = &folio->page;
 	if (TestClearPageLRU(page)) {
-		lruvec = lock_page_lruvec_irq(page);
+		lruvec = folio_lruvec_lock_irq(folio);
 		__activate_page(page, lruvec);
 		unlock_page_lruvec_irq(lruvec);
 		SetPageLRU(page);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4620df62f0ff..0d48306d37dc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1965,6 +1965,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
  */
 int isolate_lru_page(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	int ret = -EBUSY;
 
 	VM_BUG_ON_PAGE(!page_count(page), page);
@@ -1974,7 +1975,7 @@ int isolate_lru_page(struct page *page)
 		struct lruvec *lruvec;
 
 		get_page(page);
-		lruvec = lock_page_lruvec_irq(page);
+		lruvec = folio_lruvec_lock_irq(folio);
 		del_page_from_lru_list(page, lruvec);
 		unlock_page_lruvec_irq(lruvec);
 		ret = 0;
-- 
2.30.2