From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH 43/75] mm/page_idle: Convert page_idle_clear_pte_refs() to use a folio
Date: Fri, 4 Feb 2022 19:58:20 +0000
Message-Id: <20220204195852.1751729-44-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

The PG_idle and PG_young bits are ignored if they're set on tail pages,
so ensure we're passing a folio around.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/page_idle.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/mm/page_idle.c b/mm/page_idle.c
index 20d35d720872..544814bd9e37 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -13,6 +13,8 @@
 #include <linux/page_ext.h>
 #include <linux/page_idle.h>
 
+#include "internal.h"
+
 #define BITMAP_CHUNK_SIZE		sizeof(u64)
 #define BITMAP_CHUNK_BITS		(BITMAP_CHUNK_SIZE * BITS_PER_BYTE)
 
@@ -48,6 +50,7 @@ static bool page_idle_clear_pte_refs_one(struct page *page,
 					struct vm_area_struct *vma,
 					unsigned long addr, void *arg)
 {
+	struct folio *folio = page_folio(page);
 	struct page_vma_mapped_walk pvmw = {
 		.vma = vma,
 		.address = addr,
@@ -74,19 +77,20 @@ static bool page_idle_clear_pte_refs_one(struct page *page,
 	}
 
 	if (referenced) {
-		clear_page_idle(page);
+		folio_clear_idle(folio);
 		/*
 		 * We cleared the referenced bit in a mapping to this page. To
 		 * avoid interference with page reclaim, mark it young so that
 		 * page_referenced() will return > 0.
 		 */
-		set_page_young(page);
+		folio_set_young(folio);
 	}
 	return true;
 }
 
 static void page_idle_clear_pte_refs(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	/*
 	 * Since rwc.arg is unused, rwc is effectively immutable, so we
 	 * can make it static const to save some cycles and stack.
@@ -97,18 +101,17 @@ static void page_idle_clear_pte_refs(struct page *page)
 	};
 	bool need_lock;
 
-	if (!page_mapped(page) ||
-	    !page_rmapping(page))
+	if (!folio_mapped(folio) || !folio_raw_mapping(folio))
 		return;
 
-	need_lock = !PageAnon(page) || PageKsm(page);
-	if (need_lock && !trylock_page(page))
+	need_lock = !folio_test_anon(folio) || folio_test_ksm(folio);
+	if (need_lock && !folio_trylock(folio))
 		return;
 
-	rmap_walk(page, (struct rmap_walk_control *)&rwc);
+	rmap_walk(&folio->page, (struct rmap_walk_control *)&rwc);
 
 	if (need_lock)
-		unlock_page(page);
+		folio_unlock(folio);
 }
 
 static ssize_t page_idle_bitmap_read(struct file *file, struct kobject *kobj,
-- 
2.34.1