From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Christoph Hellwig <hch@lst.de>
Subject: [PATCH v14 086/138] mm/filemap: Add filemap_add_folio()
Date: Thu, 15 Jul 2021 04:36:12 +0100
Message-Id: <20210715033704.692967-87-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210715033704.692967-1-willy@infradead.org>
References: <20210715033704.692967-1-willy@infradead.org>

Convert __add_to_page_cache_locked() into __filemap_add_folio().
Add an assertion to it that (for !hugetlbfs), the folio is naturally
aligned within the file.  Move the prototype from mm.h to pagemap.h.
Convert add_to_page_cache_lru() into filemap_add_folio().  Add a
compatibility wrapper for unconverted callers.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
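Not part of the patch: a sketch of what a converted caller of the new
API is expected to look like, for anyone reviewing.  example_add_folio()
is hypothetical (no such caller exists in the tree); it uses
filemap_alloc_folio() from earlier in this series.  Note the new
assertion: a !hugetlb folio of order N may only be added at an index
that is a multiple of 1 << N, e.g. an order-2 folio at index 0, 4,
8, ... but never at index 2.

	static int example_add_folio(struct address_space *mapping,
			pgoff_t index)
	{
		struct folio *folio;
		int err;

		/* Allocate an order-0 (single page) folio. */
		folio = filemap_alloc_folio(GFP_KERNEL, 0);
		if (!folio)
			return -ENOMEM;

		/*
		 * On success the folio is locked, charged to the memcg
		 * (hugetlb excepted), stored in the page cache at @index
		 * and added to the LRU.  On failure it comes back
		 * unlocked and we still own our reference.
		 */
		err = filemap_add_folio(mapping, folio, index, GFP_KERNEL);
		if (err) {
			folio_put(folio);
			return err;
		}

		/* ... read data into the folio here ... */

		folio_unlock(folio);
		folio_put(folio);
		return 0;
	}
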
 include/linux/mm.h      |  7 -----
 include/linux/pagemap.h | 10 ++++--
 kernel/bpf/verifier.c   |  2 +-
 mm/filemap.c            | 70 ++++++++++++++++++++---------------------
 mm/folio-compat.c       |  7 +++++
 5 files changed, 50 insertions(+), 46 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4803f2c01367..99f5f736be64 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -213,13 +213,6 @@ int overcommit_kbytes_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
 int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
-/*
- * Any attempt to mark this function as static leads to build failure
- * when CONFIG_DEBUG_INFO_BTF is enabled because __add_to_page_cache_locked()
- * is referred to by BPF code. This must be visible for error injection.
- */
-int __add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp, void **shadowp);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 848acb44ac80..19b2e3bea14c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -877,9 +877,11 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 }
 
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp_mask);
+		pgoff_t index, gfp_t gfp);
 int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp_mask);
+		pgoff_t index, gfp_t gfp);
+int filemap_add_folio(struct address_space *mapping, struct folio *folio,
+		pgoff_t index, gfp_t gfp);
 extern void delete_from_page_cache(struct page *page);
 extern void __delete_from_page_cache(struct page *page, void *shadow);
 void replace_page_cache_page(struct page *old, struct page *new);
@@ -904,6 +906,10 @@ static inline int add_to_page_cache(struct page *page,
 	return error;
 }
 
+/* Must be non-static for BPF error injection */
+int __filemap_add_folio(struct address_space *mapping, struct folio *folio,
+		pgoff_t index, gfp_t gfp, void **shadowp);
+
 /**
  * struct readahead_control - Describes a readahead request.
  *
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 42a4063de7cd..f0a4f8b818e4 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13015,7 +13015,7 @@ BTF_SET_START(btf_non_sleepable_error_inject)
 /* Three functions below can be called from sleepable and non-sleepable context.
  * Assume non-sleepable from bpf safety point of view.
 */
-BTF_ID(func, __add_to_page_cache_locked)
+BTF_ID(func, __filemap_add_folio)
 BTF_ID(func, should_fail_alloc_page)
 BTF_ID(func, should_failslab)
 BTF_SET_END(btf_non_sleepable_error_inject)
diff --git a/mm/filemap.c b/mm/filemap.c
index 54989a32d6a8..4e34383fd894 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -855,26 +855,25 @@ void replace_page_cache_page(struct page *old, struct page *new)
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
-noinline int __add_to_page_cache_locked(struct page *page,
-					struct address_space *mapping,
-					pgoff_t offset, gfp_t gfp,
-					void **shadowp)
+noinline int __filemap_add_folio(struct address_space *mapping,
+		struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
 {
-	XA_STATE(xas, &mapping->i_pages, offset);
-	int huge = PageHuge(page);
+	XA_STATE(xas, &mapping->i_pages, index);
+	int huge = folio_test_hugetlb(folio);
 	int error;
 	bool charged = false;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(folio_test_swapbacked(folio), folio);
 	mapping_set_update(&xas, mapping);
 
-	get_page(page);
-	page->mapping = mapping;
-	page->index = offset;
+	folio_get(folio);
+	folio->mapping = mapping;
+	folio->index = index;
 
 	if (!huge) {
-		error = mem_cgroup_charge(page_folio(page), NULL, gfp);
+		error = mem_cgroup_charge(folio, NULL, gfp);
+		VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
 		if (error)
 			goto error;
 		charged = true;
@@ -886,7 +885,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 		unsigned int order = xa_get_order(xas.xa, xas.xa_index);
 		void *entry, *old = NULL;
 
-		if (order > thp_order(page))
+		if (order > folio_order(folio))
 			xas_split_alloc(&xas, xa_load(xas.xa, xas.xa_index),
 					order, gfp);
 		xas_lock_irq(&xas);
@@ -903,13 +902,13 @@ noinline int __add_to_page_cache_locked(struct page *page,
 				*shadowp = old;
 			/* entry may have been split before we acquired lock */
 			order = xa_get_order(xas.xa, xas.xa_index);
-			if (order > thp_order(page)) {
+			if (order > folio_order(folio)) {
 				xas_split(&xas, old, order);
 				xas_reset(&xas);
 			}
 		}
 
-		xas_store(&xas, page);
+		xas_store(&xas, folio);
 		if (xas_error(&xas))
 			goto unlock;
 
@@ -917,7 +916,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 
 		/* hugetlb pages do not participate in page cache accounting */
 		if (!huge)
-			__inc_lruvec_page_state(page, NR_FILE_PAGES);
+			__lruvec_stat_add_folio(folio, NR_FILE_PAGES);
unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -925,19 +924,19 @@ noinline int __add_to_page_cache_locked(struct page *page,
 	if (xas_error(&xas)) {
 		error = xas_error(&xas);
 		if (charged)
-			mem_cgroup_uncharge(page_folio(page));
+			mem_cgroup_uncharge(folio);
 		goto error;
 	}
 
-	trace_mm_filemap_add_to_page_cache(page);
+	trace_mm_filemap_add_to_page_cache(&folio->page);
 	return 0;
error:
-	page->mapping = NULL;
+	folio->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	put_page(page);
+	folio_put(folio);
 	return error;
 }
-ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
+ALLOW_ERROR_INJECTION(__filemap_add_folio, ERRNO);
 
 /**
  * add_to_page_cache_locked - add a locked page to the pagecache
@@ -954,39 +953,38 @@ ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
 		pgoff_t offset, gfp_t gfp_mask)
 {
-	return __add_to_page_cache_locked(page, mapping, offset,
+	return __filemap_add_folio(mapping, page_folio(page), offset,
 					gfp_mask, NULL);
 }
 EXPORT_SYMBOL(add_to_page_cache_locked);
 
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-				pgoff_t offset, gfp_t gfp_mask)
+int filemap_add_folio(struct address_space *mapping, struct folio *folio,
+		pgoff_t index, gfp_t gfp)
 {
 	void *shadow = NULL;
 	int ret;
 
-	__SetPageLocked(page);
-	ret = __add_to_page_cache_locked(page, mapping, offset,
-					gfp_mask, &shadow);
+	__folio_set_locked(folio);
+	ret = __filemap_add_folio(mapping, folio, index, gfp, &shadow);
 	if (unlikely(ret))
-		__ClearPageLocked(page);
+		__folio_clear_locked(folio);
 	else {
 		/*
-		 * The page might have been evicted from cache only
+		 * The folio might have been evicted from cache only
 		 * recently, in which case it should be activated like
-		 * any other repeatedly accessed page.
-		 * The exception is pages getting rewritten; evicting other
+		 * any other repeatedly accessed folio.
+		 * The exception is folios getting rewritten; evicting other
 		 * data from the working set, only to cache data that will
 		 * get overwritten with something else, is a waste of memory.
 		 */
-		WARN_ON_ONCE(PageActive(page));
-		if (!(gfp_mask & __GFP_WRITE) && shadow)
-			workingset_refault(page_folio(page), shadow);
-		lru_cache_add(page);
+		WARN_ON_ONCE(folio_test_active(folio));
+		if (!(gfp & __GFP_WRITE) && shadow)
+			workingset_refault(folio, shadow);
+		folio_add_lru(folio);
 	}
 	return ret;
 }
-EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
+EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
 struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 6de3cd78a4ae..6b19bc4ed6b0 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -108,3 +108,10 @@ void lru_cache_add(struct page *page)
 	folio_add_lru(page_folio(page));
 }
 EXPORT_SYMBOL(lru_cache_add);
+
+int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
+		pgoff_t index, gfp_t gfp)
+{
+	return filemap_add_folio(mapping, page_folio(page), index, gfp);
+}
+EXPORT_SYMBOL(add_to_page_cache_lru);
-- 
2.30.2