From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH v9 80/96] mm/filemap: Add filemap_get_folio and find_get_folio
Date: Wed, 5 May 2021 16:06:12 +0100
Message-Id: <20210505150628.111735-81-willy@infradead.org>
In-Reply-To: <20210505150628.111735-1-willy@infradead.org>
References: <20210505150628.111735-1-willy@infradead.org>
Turn pagecache_get_page() into a wrapper around filemap_get_folio().
Remove find_lock_head() as this use case is now covered by
filemap_get_folio().

Reduces overall kernel size by 209 bytes.  filemap_get_folio() is
316 bytes shorter than pagecache_get_page() was, but the new
pagecache_get_page() is 99 bytes.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 31 +++++---------
 mm/filemap.c            | 90 +++++++++++++++++++----------------
 mm/folio-compat.c       | 11 +++++
 3 files changed, 63 insertions(+), 69 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 8eab3d8400d2..03125035077c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -378,8 +378,10 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 #define FGP_HEAD		0x00000080
 #define FGP_ENTRY		0x00000100
 
-struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
-		int fgp_flags, gfp_t cache_gfp_mask);
+struct folio *filemap_get_folio(struct address_space *mapping, pgoff_t index,
+		int fgp_flags, gfp_t gfp);
+struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
+		int fgp_flags, gfp_t gfp);
 
 /**
  * find_get_page - find and get a page reference
@@ -397,6 +399,12 @@ static inline struct page *find_get_page(struct address_space *mapping,
 	return pagecache_get_page(mapping, offset, 0, 0);
 }
 
+static inline struct folio *find_get_folio(struct address_space *mapping,
+		pgoff_t index)
+{
+	return filemap_get_folio(mapping, index, 0, 0);
+}
+
 static inline struct page *find_get_page_flags(struct address_space *mapping,
 		pgoff_t offset, int fgp_flags)
 {
@@ -422,25 +430,6 @@ static inline struct page *find_lock_page(struct address_space *mapping,
 	return pagecache_get_page(mapping, index, FGP_LOCK, 0);
 }
 
-/**
- * find_lock_head - Locate, pin and lock a pagecache page.
- * @mapping: The address_space to search.
- * @index: The page index.
- *
- * Looks up the page cache entry at @mapping & @index.  If there is a
- * page cache page, its head page is returned locked and with an increased
- * refcount.
- *
- * Context: May sleep.
- * Return: A struct page which is !PageTail, or %NULL if there is no page
- * in the cache for this index.
- */
-static inline struct page *find_lock_head(struct address_space *mapping,
-		pgoff_t index)
-{
-	return pagecache_get_page(mapping, index, FGP_LOCK | FGP_HEAD, 0);
-}
-
 /**
  * find_or_create_page - locate or add a pagecache page
  * @mapping: the page's address_space
diff --git a/mm/filemap.c b/mm/filemap.c
index b3714dddb045..3d8715a6dd08 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1777,95 +1777,89 @@ static struct folio *mapping_get_entry(struct address_space *mapping,
 }
 
 /**
- * pagecache_get_page - Find and get a reference to a page.
+ * filemap_get_folio - Find and get a reference to a folio.
  * @mapping: The address_space to search.
  * @index: The page index.
- * @fgp_flags: %FGP flags modify how the page is returned.
- * @gfp_mask: Memory allocation flags to use if %FGP_CREAT is specified.
+ * @fgp_flags: %FGP flags modify how the folio is returned.
+ * @gfp: Memory allocation flags to use if %FGP_CREAT is specified.
  *
  * Looks up the page cache entry at @mapping & @index.
  *
  * @fgp_flags can be zero or more of these flags:
  *
- * * %FGP_ACCESSED - The page will be marked accessed.
- * * %FGP_LOCK - The page is returned locked.
- * * %FGP_HEAD - If the page is present and a THP, return the head page
- *   rather than the exact page specified by the index.
+ * * %FGP_ACCESSED - The folio will be marked accessed.
+ * * %FGP_LOCK - The folio is returned locked.
  * * %FGP_ENTRY - If there is a shadow / swap / DAX entry, return it
- *   instead of allocating a new page to replace it.
+ *   instead of allocating a new folio to replace it.
  * * %FGP_CREAT - If no page is present then a new page is allocated using
- *   @gfp_mask and added to the page cache and the VM's LRU list.
+ *   @gfp and added to the page cache and the VM's LRU list.
  *   The page is returned locked and with an increased refcount.
  * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
  *   page is already in cache.  If the page was allocated, unlock it before
  *   returning so the caller can do the same dance.
- * * %FGP_WRITE - The page will be written
- * * %FGP_NOFS - __GFP_FS will get cleared in gfp mask
- * * %FGP_NOWAIT - Don't get blocked by page lock
+ * * %FGP_WRITE - The page will be written to by the caller.
+ * * %FGP_NOFS - __GFP_FS will get cleared in gfp.
+ * * %FGP_NOWAIT - Don't get blocked by page lock.
 *
 * If %FGP_LOCK or %FGP_CREAT are specified then the function may sleep even
 * if the %GFP flags specified for %FGP_CREAT are atomic.
 *
 * If there is a page cache page, it is returned with an increased refcount.
 *
- * Return: The found page or %NULL otherwise.
+ * Return: The found folio or %NULL otherwise.
 */
-struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
-		int fgp_flags, gfp_t gfp_mask)
+struct folio *filemap_get_folio(struct address_space *mapping, pgoff_t index,
+		int fgp_flags, gfp_t gfp)
 {
 	struct folio *folio;
-	struct page *page;
 
 repeat:
 	folio = mapping_get_entry(mapping, index);
-	page = &folio->page;
-	if (xa_is_value(page)) {
+	if (xa_is_value(folio)) {
 		if (fgp_flags & FGP_ENTRY)
-			return page;
-		page = NULL;
+			return folio;
+		folio = NULL;
 	}
-	if (!page)
+	if (!folio)
 		goto no_page;
 
 	if (fgp_flags & FGP_LOCK) {
 		if (fgp_flags & FGP_NOWAIT) {
-			if (!trylock_page(page)) {
-				put_page(page);
+			if (!folio_trylock(folio)) {
+				folio_put(folio);
 				return NULL;
 			}
 		} else {
-			lock_page(page);
+			folio_lock(folio);
 		}
 
 		/* Has the page been truncated? */
-		if (unlikely(page->mapping != mapping)) {
-			unlock_page(page);
-			put_page(page);
+		if (unlikely(folio->mapping != mapping)) {
+			folio_unlock(folio);
+			folio_put(folio);
 			goto repeat;
 		}
-		VM_BUG_ON_PAGE(!thp_contains(page, index), page);
+		VM_BUG_ON_FOLIO(!folio_contains(folio, index), folio);
 	}
 
 	if (fgp_flags & FGP_ACCESSED)
-		mark_page_accessed(page);
+		folio_mark_accessed(folio);
 	else if (fgp_flags & FGP_WRITE) {
 		/* Clear idle flag for buffer write */
-		if (page_is_idle(page))
-			clear_page_idle(page);
+		if (folio_idle(folio))
+			folio_clear_idle_flag(folio);
 	}
-	if (!(fgp_flags & FGP_HEAD))
-		page = find_subpage(page, index);
 
 no_page:
-	if (!page && (fgp_flags & FGP_CREAT)) {
+	if (!folio && (fgp_flags & FGP_CREAT)) {
 		int err;
 		if ((fgp_flags & FGP_WRITE) && mapping_can_writeback(mapping))
-			gfp_mask |= __GFP_WRITE;
+			gfp |= __GFP_WRITE;
 		if (fgp_flags & FGP_NOFS)
-			gfp_mask &= ~__GFP_FS;
+			gfp &= ~__GFP_FS;
 
-		page = __page_cache_alloc(gfp_mask);
-		if (!page)
+		folio = filemap_alloc_folio(gfp, 0);
+		if (!folio)
 			return NULL;
 
 		if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP))))
@@ -1873,27 +1867,27 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
 
 		/* Init accessed so avoid atomic mark_page_accessed later */
 		if (fgp_flags & FGP_ACCESSED)
-			__SetPageReferenced(page);
+			__folio_set_referenced_flag(folio);
 
-		err = add_to_page_cache_lru(page, mapping, index, gfp_mask);
+		err = folio_add_to_page_cache(folio, mapping, index, gfp);
 		if (unlikely(err)) {
-			put_page(page);
-			page = NULL;
+			folio_put(folio);
+			folio = NULL;
 			if (err == -EEXIST)
 				goto repeat;
 		}
 
 		/*
-		 * add_to_page_cache_lru locks the page, and for mmap we expect
-		 * an unlocked page.
+		 * folio_add_to_page_cache locks the page, and for mmap
+		 * we expect an unlocked page.
 		 */
-		if (page && (fgp_flags & FGP_FOR_MMAP))
-			unlock_page(page);
+		if (folio && (fgp_flags & FGP_FOR_MMAP))
+			folio_unlock(folio);
 	}
 
-	return page;
+	return folio;
 }
-EXPORT_SYMBOL(pagecache_get_page);
+EXPORT_SYMBOL(filemap_get_folio);
 
 static inline struct page *find_get_entry(struct xa_state *xas, pgoff_t max,
 		xa_mark_t mark)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 7de3839ad072..df0038c65da9 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -85,3 +85,14 @@ void lru_cache_add(struct page *page)
 	folio_add_lru(page_folio(page));
 }
 EXPORT_SYMBOL(lru_cache_add);
+
+struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
+		int fgp_flags, gfp_t gfp)
+{
+	struct folio *folio = filemap_get_folio(mapping, index, fgp_flags, gfp);
+
+	if ((fgp_flags & FGP_HEAD) || !folio || xa_is_value(folio))
+		return &folio->page;
+	return folio_file_page(folio, index);
+}
+EXPORT_SYMBOL(pagecache_get_page);
-- 
2.30.2