From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, Christoph Hellwig
Subject: [PATCH v13 060/137] mm/migrate: Add folio_migrate_mapping()
Date: Mon, 12 Jul 2021 04:05:44 +0100
Message-Id: <20210712030701.4000097-61-willy@infradead.org>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210712030701.4000097-1-willy@infradead.org>
References: <20210712030701.4000097-1-willy@infradead.org>
MIME-Version: 1.0

Reimplement migrate_page_move_mapping() as a wrapper around
folio_migrate_mapping().  Saves 193 bytes of kernel text.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig
---
 include/linux/migrate.h |  2 +
 mm/folio-compat.c       | 11 ++++++
 mm/migrate.c            | 85 +++++++++++++++++++++--------------
 3 files changed, 57 insertions(+), 41 deletions(-)

diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 9b7b7cd3bae9..52bf62763205 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -52,6 +52,8 @@ extern int migrate_huge_page_move_mapping(struct address_space *mapping,
 extern int migrate_page_move_mapping(struct address_space *mapping,
 		struct page *newpage, struct page *page, int extra_count);
 extern void copy_huge_page(struct page *dst, struct page *src);
+int folio_migrate_mapping(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int extra_count);
 #else
 
 static inline void putback_movable_pages(struct list_head *l) {}
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index a374747ae1c6..d883d964fd52 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -4,6 +4,7 @@
  * eventually.
  */
 
+#include <linux/migrate.h>
 #include <linux/pagemap.h>
 #include <linux/swap.h>
 
@@ -48,3 +49,13 @@ void mark_page_accessed(struct page *page)
 	folio_mark_accessed(page_folio(page));
 }
 EXPORT_SYMBOL(mark_page_accessed);
+
+#ifdef CONFIG_MIGRATION
+int migrate_page_move_mapping(struct address_space *mapping,
+		struct page *newpage, struct page *page, int extra_count)
+{
+	return folio_migrate_mapping(mapping, page_folio(newpage),
+					page_folio(page), extra_count);
+}
+EXPORT_SYMBOL(migrate_page_move_mapping);
+#endif
diff --git a/mm/migrate.c b/mm/migrate.c
index d8df117dca7e..19dd053b4a52 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -363,7 +363,7 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
 	 */
 	expected_count += is_device_private_page(page);
 	if (mapping)
-		expected_count += thp_nr_pages(page) + page_has_private(page);
+		expected_count += compound_nr(page) + page_has_private(page);
 
 	return expected_count;
 }
@@ -376,74 +376,75 @@ static int expected_page_refs(struct address_space *mapping, struct page *page)
  * 2 for pages with a mapping
  * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
  */
-int migrate_page_move_mapping(struct address_space *mapping,
-		struct page *newpage, struct page *page, int extra_count)
+int folio_migrate_mapping(struct address_space *mapping,
+		struct folio *newfolio, struct folio *folio, int extra_count)
 {
-	XA_STATE(xas, &mapping->i_pages, page_index(page));
+	XA_STATE(xas, &mapping->i_pages, folio_index(folio));
 	struct zone *oldzone, *newzone;
 	int dirty;
-	int expected_count = expected_page_refs(mapping, page) + extra_count;
-	int nr = thp_nr_pages(page);
+	int expected_count = expected_page_refs(mapping, &folio->page) + extra_count;
+	int nr = folio_nr_pages(folio);
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
-		if (page_count(page) != expected_count)
+		if (folio_ref_count(folio) != expected_count)
 			return -EAGAIN;
 
 		/* No turning back from here */
-		newpage->index = page->index;
-		newpage->mapping = page->mapping;
-		if (PageSwapBacked(page))
-			__SetPageSwapBacked(newpage);
+		newfolio->index = folio->index;
+		newfolio->mapping = folio->mapping;
+		if (folio_swapbacked(folio))
+			__folio_set_swapbacked_flag(newfolio);
 
 		return MIGRATEPAGE_SUCCESS;
 	}
 
-	oldzone = page_zone(page);
-	newzone = page_zone(newpage);
+	oldzone = folio_zone(folio);
+	newzone = folio_zone(newfolio);
 
 	xas_lock_irq(&xas);
-	if (page_count(page) != expected_count || xas_load(&xas) != page) {
+	if (folio_ref_count(folio) != expected_count ||
+	    xas_load(&xas) != folio) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
-	if (!page_ref_freeze(page, expected_count)) {
+	if (!folio_ref_freeze(folio, expected_count)) {
 		xas_unlock_irq(&xas);
 		return -EAGAIN;
 	}
 
 	/*
-	 * Now we know that no one else is looking at the page:
+	 * Now we know that no one else is looking at the folio:
 	 * no turning back from here.
 	 */
-	newpage->index = page->index;
-	newpage->mapping = page->mapping;
-	page_ref_add(newpage, nr); /* add cache reference */
-	if (PageSwapBacked(page)) {
-		__SetPageSwapBacked(newpage);
-		if (PageSwapCache(page)) {
-			SetPageSwapCache(newpage);
-			set_page_private(newpage, page_private(page));
+	newfolio->index = folio->index;
+	newfolio->mapping = folio->mapping;
+	folio_ref_add(newfolio, nr); /* add cache reference */
+	if (folio_swapbacked(folio)) {
+		__folio_set_swapbacked_flag(newfolio);
+		if (folio_swapcache(folio)) {
+			folio_set_swapcache_flag(newfolio);
+			newfolio->private = folio_get_private(folio);
 		}
 	} else {
-		VM_BUG_ON_PAGE(PageSwapCache(page), page);
+		VM_BUG_ON_FOLIO(folio_swapcache(folio), folio);
 	}
 
 	/* Move dirty while page refs frozen and newpage not yet exposed */
-	dirty = PageDirty(page);
+	dirty = folio_dirty(folio);
 	if (dirty) {
-		ClearPageDirty(page);
-		SetPageDirty(newpage);
+		folio_clear_dirty_flag(folio);
+		folio_set_dirty_flag(newfolio);
 	}
 
-	xas_store(&xas, newpage);
-	if (PageTransHuge(page)) {
+	xas_store(&xas, newfolio);
+	if (nr > 1) {
 		int i;
 
 		for (i = 1; i < nr; i++) {
 			xas_next(&xas);
-			xas_store(&xas, newpage);
+			xas_store(&xas, newfolio);
 		}
 	}
 
@@ -452,7 +453,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 * to one less reference.
 	 * We know this isn't the last reference.
 	 */
-	page_ref_unfreeze(page, expected_count - nr);
+	folio_ref_unfreeze(folio, expected_count - nr);
 
 	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
@@ -471,18 +472,18 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		struct lruvec *old_lruvec, *new_lruvec;
 		struct mem_cgroup *memcg;
 
-		memcg = page_memcg(page);
+		memcg = folio_memcg(folio);
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
 
 		__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
 		__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
-		if (PageSwapBacked(page) && !PageSwapCache(page)) {
+		if (folio_swapbacked(folio) && !folio_swapcache(folio)) {
 			__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
 			__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
 		}
 #ifdef CONFIG_SWAP
-		if (PageSwapCache(page)) {
+		if (folio_swapcache(folio)) {
 			__mod_lruvec_state(old_lruvec, NR_SWAPCACHE, -nr);
 			__mod_lruvec_state(new_lruvec, NR_SWAPCACHE, nr);
 		}
@@ -498,11 +499,11 @@ int migrate_page_move_mapping(struct address_space *mapping,
 
 	return MIGRATEPAGE_SUCCESS;
 }
-EXPORT_SYMBOL(migrate_page_move_mapping);
+EXPORT_SYMBOL(folio_migrate_mapping);
 
 /*
  * The expected number of remaining references is the same as that
- * of migrate_page_move_mapping().
+ * of folio_migrate_mapping().
  */
 int migrate_huge_page_move_mapping(struct address_space *mapping,
 				   struct page *newpage, struct page *page)
@@ -611,7 +612,7 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	if (PageMappedToDisk(page))
 		SetPageMappedToDisk(newpage);
 
-	/* Move dirty on pages not done by migrate_page_move_mapping() */
+	/* Move dirty on pages not done by folio_migrate_mapping() */
 	if (PageDirty(page))
 		SetPageDirty(newpage);
 
@@ -687,11 +688,13 @@ int migrate_page(struct address_space *mapping,
 		struct page *newpage, struct page *page,
 		enum migrate_mode mode)
 {
+	struct folio *newfolio = page_folio(newpage);
+	struct folio *folio = page_folio(page);
 	int rc;
 
-	BUG_ON(PageWriteback(page));	/* Writeback must be complete */
+	BUG_ON(folio_writeback(folio));	/* Writeback must be complete */
 
-	rc = migrate_page_move_mapping(mapping, newpage, page, 0);
+	rc = folio_migrate_mapping(mapping, newfolio, folio, 0);
 
 	if (rc != MIGRATEPAGE_SUCCESS)
 		return rc;
@@ -2435,7 +2438,7 @@ static void migrate_vma_collect(struct migrate_vma *migrate)
  * @page: struct page to check
  *
  * Pinned pages cannot be migrated. This is the same test as in
- * migrate_page_move_mapping(), except that here we allow migration of a
+ * folio_migrate_mapping(), except that here we allow migration of a
  * ZONE_DEVICE page.
  */
 static bool migrate_vma_check_page(struct page *page)
-- 
2.30.2
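
The calling convention does not change: a caller that holds a struct page converts it with page_folio() first, exactly as migrate_page() now does above. As a quick illustration for reviewers (not part of the patch; example_migratepage is a made-up name and error handling beyond the mapping move is trimmed), a simple ->migratepage implementation built on the new helper could look roughly like this:

#include <linux/migrate.h>
#include <linux/pagemap.h>

static int example_migratepage(struct address_space *mapping,
		struct page *newpage, struct page *page,
		enum migrate_mode mode)
{
	/* Work on folios internally; the aop still takes pages. */
	struct folio *newfolio = page_folio(newpage);
	struct folio *folio = page_folio(page);
	int rc;

	/* Move the page cache entry and transfer the cache reference. */
	rc = folio_migrate_mapping(mapping, newfolio, folio, 0);
	if (rc != MIGRATEPAGE_SUCCESS)
		return rc;

	/* Copy contents and flags; this helper is untouched by this patch. */
	migrate_page_copy(newpage, page);
	return MIGRATEPAGE_SUCCESS;
}

Callers that still use migrate_page_move_mapping() keep working unchanged through the mm/folio-compat.c wrapper added above.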