From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-kernel@vger.kernel.org
Subject: [PATCH 46/75] mm/mlock: Turn clear_page_mlock() into folio_end_mlock()
Date: Fri, 4 Feb 2022 19:58:23 +0000
Message-Id: <20220204195852.1751729-47-willy@infradead.org>
In-Reply-To: <20220204195852.1751729-1-willy@infradead.org>
References: <20220204195852.1751729-1-willy@infradead.org>

Add a clear_page_mlock() wrapper function.  It looks like all callers
were already passing a head page, but if they weren't, this will fix
an accounting bug.
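
As an illustration only (the caller below is hypothetical, not part of
this series): the compat wrapper routes any struct page through
page_folio(), which always resolves to the owning folio, so the
NR_MLOCK adjustment in folio_end_mlock() covers the whole folio rather
than whatever subpage a caller happened to pass in.  A minimal sketch:

	/* Hypothetical caller, for illustration only. */
	static void example_uncache_page(struct page *page)
	{
		/*
		 * 'page' may be any subpage of a compound page.  The compat
		 * wrapper expands to folio_end_mlock(page_folio(page));
		 * page_folio() always resolves to the owning folio, so
		 * NR_MLOCK is adjusted by folio_nr_pages() for the whole
		 * folio even if a tail page was passed in.
		 */
		clear_page_mlock(page);
	}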
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/folio-compat.c |  5 +++++
 mm/internal.h     | 15 +++------------
 mm/mlock.c        | 28 +++++++++++++++++-----------
 3 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 46fa179e32fb..bcb037d9cec3 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -164,3 +164,8 @@ void putback_lru_page(struct page *page)
 {
 	folio_putback_lru(page_folio(page));
 }
+
+void clear_page_mlock(struct page *page)
+{
+	folio_end_mlock(page_folio(page));
+}
diff --git a/mm/internal.h b/mm/internal.h
index 7f1db0f1a8bc..041c76a4c284 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -416,17 +416,8 @@ extern unsigned int munlock_vma_page(struct page *page);
 
 extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
 			      unsigned long len);
-
-/*
- * Clear the page's PageMlocked(). This can be useful in a situation where
- * we want to unconditionally remove a page from the pagecache -- e.g.,
- * on truncation or freeing.
- *
- * It is legal to call this function for any page, mlocked or not.
- * If called for a page that is still mapped by mlocked vmas, all we do
- * is revert to lazy LRU behaviour -- semantics are not broken.
- */
-extern void clear_page_mlock(struct page *page);
+void folio_end_mlock(struct folio *folio);
+void clear_page_mlock(struct page *page);
 
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
@@ -503,7 +494,7 @@ static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 }
 #else /* !CONFIG_MMU */
 static inline void unmap_mapping_folio(struct folio *folio) { }
-static inline void clear_page_mlock(struct page *page) { }
+static inline void folio_end_mlock(struct folio *folio) { }
 static inline void mlock_vma_page(struct page *page) { }
 static inline void vunmap_range_noflush(unsigned long start, unsigned long end)
 {
diff --git a/mm/mlock.c b/mm/mlock.c
index 24d0809cacba..ff067d64acc5 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -55,31 +55,37 @@ EXPORT_SYMBOL(can_do_mlock);
  */
 
 /*
- * LRU accounting for clear_page_mlock()
+ * Clear the folio's PageMlocked(). This can be useful in a situation where
+ * we want to unconditionally remove a folio from the pagecache -- e.g.,
+ * on truncation or freeing.
+ *
+ * It is legal to call this function for any folio, mlocked or not.
+ * If called for a folio that is still mapped by mlocked vmas, all we do
+ * is revert to lazy LRU behaviour -- semantics are not broken.
  */
-void clear_page_mlock(struct page *page)
+void folio_end_mlock(struct folio *folio)
 {
-	int nr_pages;
+	long nr_pages;
 
-	if (!TestClearPageMlocked(page))
+	if (!folio_test_clear_mlocked(folio))
 		return;
 
-	nr_pages = thp_nr_pages(page);
-	mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
+	nr_pages = folio_nr_pages(folio);
+	zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
 	count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
 	/*
-	 * The previous TestClearPageMlocked() corresponds to the smp_mb()
+	 * The previous folio_test_clear_mlocked() corresponds to the smp_mb()
 	 * in __pagevec_lru_add_fn().
 	 *
 	 * See __pagevec_lru_add_fn for more explanation.
 	 */
-	if (!isolate_lru_page(page)) {
-		putback_lru_page(page);
+	if (!folio_isolate_lru(folio)) {
+		folio_putback_lru(folio);
 	} else {
 		/*
-		 * We lost the race. the page already moved to evictable list.
+		 * We lost the race. the folio already moved to evictable list.
 		 */
-		if (PageUnevictable(page))
+		if (folio_test_unevictable(folio))
 			count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
 	}
 }
-- 
2.34.1