From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH 34/75] mm/vmscan: Turn page_check_dirty_writeback() into folio_check_dirty_writeback()
Date: Fri, 4 Feb 2022 19:58:11 +0000
Message-Id: <20220204195852.1751729-35-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

Saves a few calls to compound_head().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmscan.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 15cbfae0d8ec..e8c5855bc38d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1430,7 +1430,7 @@ static enum page_references page_check_references(struct page *page,
 }
 
 /* Check if a page is dirty or under writeback */
-static void page_check_dirty_writeback(struct page *page,
+static void folio_check_dirty_writeback(struct folio *folio,
				       bool *dirty, bool *writeback)
 {
 	struct address_space *mapping;
@@ -1439,24 +1439,24 @@ static void page_check_dirty_writeback(struct page *page,
 	 * Anonymous pages are not handled by flushers and must be written
 	 * from reclaim context. Do not stall reclaim based on them
 	 */
-	if (!page_is_file_lru(page) ||
-	    (PageAnon(page) && !PageSwapBacked(page))) {
+	if (!folio_is_file_lru(folio) ||
+	    (folio_test_anon(folio) && !folio_test_swapbacked(folio))) {
 		*dirty = false;
 		*writeback = false;
 		return;
 	}
 
-	/* By default assume that the page flags are accurate */
-	*dirty = PageDirty(page);
-	*writeback = PageWriteback(page);
+	/* By default assume that the folio flags are accurate */
+	*dirty = folio_test_dirty(folio);
+	*writeback = folio_test_writeback(folio);
 
 	/* Verify dirty/writeback state if the filesystem supports it */
-	if (!page_has_private(page))
+	if (!folio_test_private(folio))
 		return;
 
-	mapping = page_mapping(page);
+	mapping = folio_mapping(folio);
 	if (mapping && mapping->a_ops->is_dirty_writeback)
-		mapping->a_ops->is_dirty_writeback(page, dirty, writeback);
+		mapping->a_ops->is_dirty_writeback(&folio->page, dirty, writeback);
 }
 
 static struct page *alloc_demote_page(struct page *page, unsigned long node)
@@ -1565,7 +1565,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 	 * reclaim_congested. kswapd will stall and start writing
 	 * pages if the tail of the LRU is all dirty unqueued pages.
 	 */
-	page_check_dirty_writeback(page, &dirty, &writeback);
+	folio_check_dirty_writeback(folio, &dirty, &writeback);
 	if (dirty || writeback)
 		stat->nr_dirty++;
 
-- 
2.34.1
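[Editor's note, not part of the patch] Where the savings come from: a page flag test such as PageDirty() must first resolve its argument to the head page of a compound page via compound_head(), while a struct folio is by definition never a tail page, so folio_test_dirty() can skip that lookup. The toy model below illustrates this with simplified stand-ins for struct page, struct folio, and the flag helpers; it is a sketch of the idea, not the kernel's actual implementation.

```c
#include <stddef.h>

/*
 * Simplified stand-ins for the kernel's types and helpers,
 * used only to count head-page lookups.
 */
struct page { struct page *head; int dirty; int writeback; };
struct folio { struct page page; };

static int compound_head_calls;

static struct page *compound_head(struct page *page)
{
	compound_head_calls++;          /* count each head-page lookup */
	return page->head ? page->head : page;
}

/* Legacy page API: every flag test re-resolves the head page. */
static int PageDirty(struct page *page)     { return compound_head(page)->dirty; }
static int PageWriteback(struct page *page) { return compound_head(page)->writeback; }

/* Folio API: a folio is always a head page, so no lookup is needed. */
static int folio_test_dirty(struct folio *folio)     { return folio->page.dirty; }
static int folio_test_writeback(struct folio *folio) { return folio->page.writeback; }

/* Mirror the two flag tests in folio_check_dirty_writeback(). */
int page_api_lookups(void)
{
	struct page head = { NULL, 1, 0 };
	struct page tail = { &head, 0, 0 };

	compound_head_calls = 0;
	(void)PageDirty(&tail);
	(void)PageWriteback(&tail);
	return compound_head_calls;     /* one lookup per flag test */
}

int folio_api_lookups(void)
{
	struct page head = { NULL, 1, 0 };
	struct folio *folio = (struct folio *)&head;

	compound_head_calls = 0;
	(void)folio_test_dirty(folio);
	(void)folio_test_writeback(folio);
	return compound_head_calls;     /* head already known, no lookups */
}
```

With two flag tests plus the folio_test_anon/folio_test_swapbacked/folio_test_private checks in the function above, the conversion removes one compound_head() call per test, which is what the commit message refers to.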