From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: akpm@linux-foundation.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 43/46] mm/filemap: Add filemap_add_folio
Date: Tue, 22 Jun 2021 13:15:48 +0100
Message-Id: <20210622121551.3398730-44-willy@infradead.org>
In-Reply-To: <20210622121551.3398730-1-willy@infradead.org>
References: <20210622121551.3398730-1-willy@infradead.org>
MIME-Version: 1.0
X-Mailer: git-send-email 2.31.1
Convert __add_to_page_cache_locked() into __filemap_add_folio() and
add_to_page_cache_lru() into filemap_add_folio().  Pages being added to
the page cache should already be folios, so add_to_page_cache_lru()
becomes a static inline wrapper that just casts its page to a folio.
Saves 96 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/mm.h      |  7 -----
 include/linux/pagemap.h | 16 ++++++++--
 kernel/bpf/verifier.c   |  2 +-
 mm/filemap.c            | 69 ++++++++++++++++++++---------------------
 4 files changed, 47 insertions(+), 47 deletions(-)
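[Editorial note, not part of the applied patch: the add_to_page_cache_lru()
wrapper below relies on struct folio aliasing struct page, which is what
makes the bare (struct folio *)page cast tolerable for pages that are
known not to be tail pages.  A minimal sketch of the idea, using
hypothetical stand-in types rather than the real mm_types.h definitions:

	/* Hypothetical stand-ins; the real types live in mm_types.h
	 * and overlay each other, so the cast below is a no-op at
	 * runtime. */
	struct page { unsigned long flags; };
	struct folio { struct page page; };

	static inline struct folio *cast_to_folio(struct page *page)
	{
		/* Valid only for order-0 or head pages; page_folio()
		 * is the checked conversion used elsewhere in this
		 * patch. */
		return (struct folio *)page;
	}

The bare cast is presumably acceptable in add_to_page_cache_lru()
because its callers pass freshly allocated pages, which are never tail
pages.]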
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d25ff74cf9e1..4ad03f4a9376 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -223,13 +223,6 @@ int overcommit_kbytes_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
 int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
-/*
- * Any attempt to mark this function as static leads to build failure
- * when CONFIG_DEBUG_INFO_BTF is enabled because __add_to_page_cache_locked()
- * is referred to by BPF code. This must be visible for error injection.
- */
-int __add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp, void **shadowp);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7637cc9333c9..b0c1d24fb01b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -876,9 +876,9 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
 }
 
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp_mask);
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-		pgoff_t index, gfp_t gfp_mask);
+		pgoff_t index, gfp_t gfp);
+int filemap_add_folio(struct address_space *mapping, struct folio *folio,
+		pgoff_t index, gfp_t gfp);
 extern void delete_from_page_cache(struct page *page);
 extern void __delete_from_page_cache(struct page *page, void *shadow);
 void replace_page_cache_page(struct page *old, struct page *new);
@@ -903,6 +903,16 @@ static inline int add_to_page_cache(struct page *page,
 	return error;
 }
 
+static inline int add_to_page_cache_lru(struct page *page,
+		struct address_space *mapping, pgoff_t index, gfp_t gfp)
+{
+	return filemap_add_folio(mapping, (struct folio *)page, index, gfp);
+}
+
+/* Must be non-static for BPF error injection */
+int __filemap_add_folio(struct address_space *mapping, struct folio *folio,
+		pgoff_t index, gfp_t gfp, void **shadowp);
+
 /**
  * struct readahead_control - Describes a readahead request.
  *
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 94ba5163d4c5..cab4d64c1809 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -12962,7 +12962,7 @@ BTF_SET_START(btf_non_sleepable_error_inject)
 /* Three functions below can be called from sleepable and non-sleepable context.
  * Assume non-sleepable from bpf safety point of view.
  */
-BTF_ID(func, __add_to_page_cache_locked)
+BTF_ID(func, __filemap_add_folio)
 BTF_ID(func, should_fail_alloc_page)
 BTF_ID(func, should_failslab)
 BTF_SET_END(btf_non_sleepable_error_inject)
diff --git a/mm/filemap.c b/mm/filemap.c
index 4debb11ecc3e..a174f8ce87ea 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -855,26 +855,24 @@ void replace_page_cache_page(struct page *old, struct page *new)
 }
 EXPORT_SYMBOL_GPL(replace_page_cache_page);
 
-noinline int __add_to_page_cache_locked(struct page *page,
-					struct address_space *mapping,
-					pgoff_t offset, gfp_t gfp,
-					void **shadowp)
+noinline int __filemap_add_folio(struct address_space *mapping,
+		struct folio *folio, pgoff_t index, gfp_t gfp, void **shadowp)
 {
-	XA_STATE(xas, &mapping->i_pages, offset);
-	int huge = PageHuge(page);
+	XA_STATE(xas, &mapping->i_pages, index);
+	int huge = folio_hugetlb(folio);
 	int error;
 	bool charged = false;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	VM_BUG_ON_FOLIO(!folio_locked(folio), folio);
+	VM_BUG_ON_FOLIO(folio_swapbacked(folio), folio);
 	mapping_set_update(&xas, mapping);
 
-	get_page(page);
-	page->mapping = mapping;
-	page->index = offset;
+	folio_get(folio);
+	folio->mapping = mapping;
+	folio->index = index;
 
 	if (!huge) {
-		error = mem_cgroup_charge(page, current->mm, gfp);
+		error = folio_charge_cgroup(folio, current->mm, gfp);
 		if (error)
 			goto error;
 		charged = true;
@@ -886,7 +884,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 		unsigned int order = xa_get_order(xas.xa, xas.xa_index);
 		void *entry, *old = NULL;
 
-		if (order > thp_order(page))
+		if (order > folio_order(folio))
 			xas_split_alloc(&xas, xa_load(xas.xa, xas.xa_index),
 					order, gfp);
 		xas_lock_irq(&xas);
@@ -903,13 +901,13 @@ noinline int __add_to_page_cache_locked(struct page *page,
 				*shadowp = old;
 			/* entry may have been split before we acquired lock */
 			order = xa_get_order(xas.xa, xas.xa_index);
-			if (order > thp_order(page)) {
+			if (order > folio_order(folio)) {
 				xas_split(&xas, old, order);
 				xas_reset(&xas);
 			}
 		}
 
-		xas_store(&xas, page);
+		xas_store(&xas, folio);
 		if (xas_error(&xas))
 			goto unlock;
 
@@ -917,7 +915,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
 
 		/* hugetlb pages do not participate in page cache accounting */
 		if (!huge)
-			__inc_lruvec_page_state(page, NR_FILE_PAGES);
+			__lruvec_stat_add_folio(folio, NR_FILE_PAGES);
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -925,19 +923,19 @@ noinline int __add_to_page_cache_locked(struct page *page,
 	if (xas_error(&xas)) {
 		error = xas_error(&xas);
 		if (charged)
-			mem_cgroup_uncharge(page);
+			folio_uncharge_cgroup(folio);
 		goto error;
 	}
 
-	trace_mm_filemap_add_to_page_cache(page);
+	trace_mm_filemap_add_to_page_cache(&folio->page);
 	return 0;
 error:
-	page->mapping = NULL;
+	folio->mapping = NULL;
 	/* Leave page->index set: truncation relies upon it */
-	put_page(page);
+	folio_put(folio);
 	return error;
 }
-ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
+ALLOW_ERROR_INJECTION(__filemap_add_folio, ERRNO);
 
 /**
  * add_to_page_cache_locked - add a locked page to the pagecache
@@ -954,39 +952,38 @@ ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
 		pgoff_t offset, gfp_t gfp_mask)
 {
-	return __add_to_page_cache_locked(page, mapping, offset,
+	return __filemap_add_folio(mapping, page_folio(page), offset,
 					  gfp_mask, NULL);
 }
 EXPORT_SYMBOL(add_to_page_cache_locked);
 
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
-		pgoff_t offset, gfp_t gfp_mask)
+int filemap_add_folio(struct address_space *mapping, struct folio *folio,
+		pgoff_t index, gfp_t gfp)
 {
 	void *shadow = NULL;
 	int ret;
 
-	__SetPageLocked(page);
-	ret = __add_to_page_cache_locked(page, mapping, offset,
-					 gfp_mask, &shadow);
+	__folio_set_locked_flag(folio);
+	ret = __filemap_add_folio(mapping, folio, index, gfp, &shadow);
 	if (unlikely(ret))
-		__ClearPageLocked(page);
+		__folio_clear_locked_flag(folio);
 	else {
 		/*
-		 * The page might have been evicted from cache only
+		 * The folio might have been evicted from cache only
 		 * recently, in which case it should be activated like
-		 * any other repeatedly accessed page.
-		 * The exception is pages getting rewritten; evicting other
+		 * any other repeatedly accessed folio.
+		 * The exception is folios getting rewritten; evicting other
 		 * data from the working set, only to cache data that will
 		 * get overwritten with something else, is a waste of memory.
 		 */
-		WARN_ON_ONCE(PageActive(page));
-		if (!(gfp_mask & __GFP_WRITE) && shadow)
-			workingset_refault(page_folio(page), shadow);
-		lru_cache_add(page);
+		WARN_ON_ONCE(folio_active(folio));
+		if (!(gfp & __GFP_WRITE) && shadow)
+			workingset_refault(folio, shadow);
+		folio_add_lru(folio);
 	}
 	return ret;
 }
-EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
+EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
 struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
-- 
2.30.2
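[Editorial postscript, not part of the patch: a minimal sketch of how a
caller might use the new entry point, mirroring the old
add_to_page_cache_lru() pattern.  grab_cache_folio() is a hypothetical
name; filemap_alloc_folio(), filemap_add_folio() and folio_put() are
taken from this series:

	static struct folio *grab_cache_folio(struct address_space *mapping,
			pgoff_t index, gfp_t gfp)
	{
		/* Allocate an order-0 folio for the page cache. */
		struct folio *folio = filemap_alloc_folio(gfp, 0);

		if (!folio)
			return NULL;

		/*
		 * filemap_add_folio() sets the folio lock bit itself and
		 * adds the folio to the LRU, so on success the folio
		 * comes back locked, referenced and in the page cache.
		 */
		if (filemap_add_folio(mapping, folio, index, gfp)) {
			/* On failure the lock bit has been cleared;
			 * drop the allocation reference. */
			folio_put(folio);
			return NULL;
		}
		return folio;
	}
]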