From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-kernel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	Christoph Hellwig, Jeff Layton, "Kirill A. Shutemov", Vlastimil Babka,
	William Kucharski, David Howells
Shutemov" , Vlastimil Babka , William Kucharski , David Howells Subject: [PATCH v14 027/138] mm/filemap: Add folio_wait_bit() Date: Thu, 15 Jul 2021 04:35:13 +0100 Message-Id: <20210715033704.692967-28-willy@infradead.org> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210715033704.692967-1-willy@infradead.org> References: <20210715033704.692967-1-willy@infradead.org> MIME-Version: 1.0 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: E673AD006E35 X-Stat-Signature: beq7b9touz8oai9preqmbhbhjd6p6bd1 Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=infradead.org header.s=casper.20170209 header.b=XVS9N2Nk; spf=none (imf21.hostedemail.com: domain of willy@infradead.org has no SPF policy when checking 90.155.50.34) smtp.mailfrom=willy@infradead.org; dmarc=none X-HE-Tag: 1626321615-221798 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Rename wait_on_page_bit() to folio_wait_bit(). We must always wait on the folio, otherwise we won't be woken up due to the tail page hashing to a different bucket from the head page. This commit shrinks the kernel by 770 bytes, mostly due to moving the page waitqueue lookup into folio_wait_bit_common(). Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig Acked-by: Jeff Layton Acked-by: Kirill A. Shutemov Acked-by: Vlastimil Babka Reviewed-by: William Kucharski Reviewed-by: David Howells --- include/linux/pagemap.h | 10 +++--- mm/filemap.c | 77 +++++++++++++++++++---------------------- mm/page-writeback.c | 4 +-- 3 files changed, 43 insertions(+), 48 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 96b62a2331fb..7eb02baf6f9f 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -729,11 +729,11 @@ static inline bool lock_page_or_retry(struct page *= page, struct mm_struct *mm, } =20 /* - * This is exported only for wait_on_page_locked/wait_on_page_writeback,= etc., + * This is exported only for folio_wait_locked/folio_wait_writeback, etc= ., * and should not be used directly. */ -extern void wait_on_page_bit(struct page *page, int bit_nr); -extern int wait_on_page_bit_killable(struct page *page, int bit_nr); +void folio_wait_bit(struct folio *folio, int bit_nr); +int folio_wait_bit_killable(struct folio *folio, int bit_nr); =20 /*=20 * Wait for a folio to be unlocked. @@ -745,14 +745,14 @@ extern int wait_on_page_bit_killable(struct page *p= age, int bit_nr); static inline void folio_wait_locked(struct folio *folio) { if (folio_test_locked(folio)) - wait_on_page_bit(&folio->page, PG_locked); + folio_wait_bit(folio, PG_locked); } =20 static inline int folio_wait_locked_killable(struct folio *folio) { if (!folio_test_locked(folio)) return 0; - return wait_on_page_bit_killable(&folio->page, PG_locked); + return folio_wait_bit_killable(folio, PG_locked); } =20 static inline void wait_on_page_locked(struct page *page) diff --git a/mm/filemap.c b/mm/filemap.c index b5a0d546e436..b55c89d7997f 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -1102,7 +1102,7 @@ static int wake_page_function(wait_queue_entry_t *w= ait, unsigned mode, int sync, * * So update the flags atomically, and wake up the waiter * afterwards to avoid any races. This store-release pairs - * with the load-acquire in wait_on_page_bit_common(). + * with the load-acquire in folio_wait_bit_common(). 
 	 */
 	smp_store_release(&wait->flags, flags | WQ_FLAG_WOKEN);
 	wake_up_state(wait->private, mode);
@@ -1183,7 +1183,7 @@ static void folio_wake(struct folio *folio, int bit)
 }
 
 /*
- * A choice of three behaviors for wait_on_page_bit_common():
+ * A choice of three behaviors for folio_wait_bit_common():
  */
 enum behavior {
 	EXCLUSIVE,	/* Hold ref to page and take the bit when woken, like
@@ -1198,16 +1198,16 @@ enum behavior {
 };
 
 /*
- * Attempt to check (or get) the page bit, and mark us done
+ * Attempt to check (or get) the folio flag, and mark us done
  * if successful.
  */
-static inline bool trylock_page_bit_common(struct page *page, int bit_nr,
+static inline bool folio_trylock_flag(struct folio *folio, int bit_nr,
 					struct wait_queue_entry *wait)
 {
 	if (wait->flags & WQ_FLAG_EXCLUSIVE) {
-		if (test_and_set_bit(bit_nr, &page->flags))
+		if (test_and_set_bit(bit_nr, &folio->flags))
 			return false;
-	} else if (test_bit(bit_nr, &page->flags))
+	} else if (test_bit(bit_nr, &folio->flags))
 		return false;
 
 	wait->flags |= WQ_FLAG_WOKEN | WQ_FLAG_DONE;
@@ -1217,9 +1217,10 @@ static inline bool trylock_page_bit_common(struct page *page, int bit_nr,
 /* How many times do we accept lock stealing from under a waiter? */
 int sysctl_page_lock_unfairness = 5;
 
-static inline int wait_on_page_bit_common(wait_queue_head_t *q,
-	struct page *page, int bit_nr, int state, enum behavior behavior)
+static inline int folio_wait_bit_common(struct folio *folio, int bit_nr,
+		int state, enum behavior behavior)
 {
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
 	int unfairness = sysctl_page_lock_unfairness;
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
@@ -1228,8 +1229,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 	unsigned long pflags;
 
 	if (bit_nr == PG_locked &&
-	    !PageUptodate(page) && PageWorkingset(page)) {
-		if (!PageSwapBacked(page)) {
+	    !folio_test_uptodate(folio) && folio_test_workingset(folio)) {
+		if (!folio_test_swapbacked(folio)) {
 			delayacct_thrashing_start();
 			delayacct = true;
 		}
@@ -1239,7 +1240,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 
 	init_wait(wait);
 	wait->func = wake_page_function;
-	wait_page.page = page;
+	wait_page.page = &folio->page;
 	wait_page.bit_nr = bit_nr;
 
 repeat:
@@ -1254,7 +1255,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		 * Do one last check whether we can get the
 		 * page bit synchronously.
 		 *
-		 * Do the SetPageWaiters() marking before that
+		 * Do the folio_set_waiters() marking before that
 		 * to let any waker we _just_ missed know they
 		 * need to wake us up (otherwise they'll never
 		 * even go to the slow case that looks at the
@@ -1265,8 +1266,8 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		 * lock to avoid races.
 		 */
 		spin_lock_irq(&q->lock);
-		SetPageWaiters(page);
-		if (!trylock_page_bit_common(page, bit_nr, wait))
+		folio_set_waiters(folio);
+		if (!folio_trylock_flag(folio, bit_nr, wait))
 			__add_wait_queue_entry_tail(q, wait);
 		spin_unlock_irq(&q->lock);
 
@@ -1276,10 +1277,10 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 		 * see whether the page bit testing has already
 		 * been done by the wake function.
 		 *
-		 * We can drop our reference to the page.
+		 * We can drop our reference to the folio.
 		 */
 		if (behavior == DROP)
-			put_page(page);
+			folio_put(folio);
 
 		/*
 		 * Note that until the "finish_wait()", or until
@@ -1316,7 +1317,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 	 *
 	 * And if that fails, we'll have to retry this all.
 	 */
-	if (unlikely(test_and_set_bit(bit_nr, &page->flags)))
+	if (unlikely(test_and_set_bit(bit_nr, folio_flags(folio, 0))))
 		goto repeat;
 
 	wait->flags |= WQ_FLAG_DONE;
@@ -1325,7 +1326,7 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 
 	/*
 	 * If a signal happened, this 'finish_wait()' may remove the last
-	 * waiter from the wait-queues, but the PageWaiters bit will remain
+	 * waiter from the wait-queues, but the folio waiters bit will remain
 	 * set. That's ok. The next wakeup will take care of it, and trying
 	 * to do it here would be difficult and prone to races.
 	 */
@@ -1356,19 +1357,17 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q,
 	return wait->flags & WQ_FLAG_WOKEN ? 0 : -EINTR;
 }
 
-void wait_on_page_bit(struct page *page, int bit_nr)
+void folio_wait_bit(struct folio *folio, int bit_nr)
 {
-	wait_queue_head_t *q = page_waitqueue(page);
-	wait_on_page_bit_common(q, page, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
+	folio_wait_bit_common(folio, bit_nr, TASK_UNINTERRUPTIBLE, SHARED);
 }
-EXPORT_SYMBOL(wait_on_page_bit);
+EXPORT_SYMBOL(folio_wait_bit);
 
-int wait_on_page_bit_killable(struct page *page, int bit_nr)
+int folio_wait_bit_killable(struct folio *folio, int bit_nr)
 {
-	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, bit_nr, TASK_KILLABLE, SHARED);
+	return folio_wait_bit_common(folio, bit_nr, TASK_KILLABLE, SHARED);
 }
-EXPORT_SYMBOL(wait_on_page_bit_killable);
+EXPORT_SYMBOL(folio_wait_bit_killable);
 
 /**
  * put_and_wait_on_page_locked - Drop a reference and wait for it to be unlocked
@@ -1385,11 +1384,8 @@ EXPORT_SYMBOL(wait_on_page_bit_killable);
  */
 int put_and_wait_on_page_locked(struct page *page, int state)
 {
-	wait_queue_head_t *q;
-
-	page = compound_head(page);
-	q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, PG_locked, state, DROP);
+	return folio_wait_bit_common(page_folio(page), PG_locked, state,
+			DROP);
 }
 
 /**
@@ -1483,9 +1479,10 @@ EXPORT_SYMBOL(end_page_private_2);
  */
 void wait_on_page_private_2(struct page *page)
 {
-	page = compound_head(page);
-	while (PagePrivate2(page))
-		wait_on_page_bit(page, PG_private_2);
+	struct folio *folio = page_folio(page);
+
+	while (folio_test_private_2(folio))
+		folio_wait_bit(folio, PG_private_2);
 }
 EXPORT_SYMBOL(wait_on_page_private_2);
 
@@ -1502,11 +1499,11 @@ EXPORT_SYMBOL(wait_on_page_private_2);
  */
 int wait_on_page_private_2_killable(struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	int ret = 0;
 
-	page = compound_head(page);
-	while (PagePrivate2(page)) {
-		ret = wait_on_page_bit_killable(page, PG_private_2);
+	while (folio_test_private_2(folio)) {
+		ret = folio_wait_bit_killable(folio, PG_private_2);
 		if (ret < 0)
 			break;
 	}
@@ -1583,16 +1580,14 @@ EXPORT_SYMBOL_GPL(page_endio);
  */
 void __folio_lock(struct folio *folio)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
-	wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_UNINTERRUPTIBLE,
+	folio_wait_bit_common(folio, PG_locked, TASK_UNINTERRUPTIBLE,
 				EXCLUSIVE);
 }
 EXPORT_SYMBOL(__folio_lock);
 
 int __folio_lock_killable(struct folio *folio)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
-	return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE,
+	return folio_wait_bit_common(folio, PG_locked, TASK_KILLABLE,
 					EXCLUSIVE);
 }
 EXPORT_SYMBOL_GPL(__folio_lock_killable);
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index a078e9786cc4..b34278d05395 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2846,7 +2846,7 @@ void folio_wait_writeback(struct folio *folio)
 {
 	while (folio_test_writeback(folio)) {
 		trace_wait_on_page_writeback(&folio->page, folio_mapping(folio));
-		wait_on_page_bit(&folio->page, PG_writeback);
+		folio_wait_bit(folio, PG_writeback);
 	}
 }
 EXPORT_SYMBOL_GPL(folio_wait_writeback);
@@ -2868,7 +2868,7 @@ int folio_wait_writeback_killable(struct folio *folio)
 {
 	while (folio_test_writeback(folio)) {
 		trace_wait_on_page_writeback(&folio->page, folio_mapping(folio));
-		if (wait_on_page_bit_killable(&folio->page, PG_writeback))
+		if (folio_wait_bit_killable(folio, PG_writeback))
 			return -EINTR;
 	}
 
-- 
2.30.2
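
For context on the hash-bucket problem the commit message describes: page_waitqueue() (called in the diff above as page_waitqueue(&folio->page)) selects one of a small fixed table of shared wait queues by hashing the page pointer. A waiter that queued itself against a tail page and a waker that hashes the head page therefore land in different buckets and never meet. The following is a minimal userspace sketch of that failure mode, not kernel code; its hash_ptr() is a toy stand-in for the kernel helper of the same name, and the 8-bit table size is taken from mm/filemap.c of this era.

/* Sketch: head and tail pages of a compound page hash to different
 * wait-table buckets, so a wakeup aimed at one never reaches a
 * sleeper queued on the other. Illustrative only, not kernel code. */
#include <stdint.h>
#include <stdio.h>

#define PAGE_WAIT_TABLE_BITS 8	/* 256 buckets, as in mm/filemap.c */

struct page { unsigned long flags; };

static unsigned int hash_ptr(const void *ptr, unsigned int bits)
{
	/* toy Fibonacci hash standing in for the kernel's hash_ptr() */
	return (unsigned int)(((uint64_t)(uintptr_t)ptr *
			       0x9e3779b97f4a7c15ULL) >> (64 - bits));
}

int main(void)
{
	/* pretend compound page: head and first tail adjacent in memmap */
	struct page memmap[2];
	struct page *head = &memmap[0];
	struct page *tail = &memmap[1];

	/* A waker hashing the head wakes only the head's bucket; a waiter
	 * sleeping on the tail's bucket would sleep forever. */
	printf("head -> bucket %u\n", hash_ptr(head, PAGE_WAIT_TABLE_BITS));
	printf("tail -> bucket %u\n", hash_ptr(tail, PAGE_WAIT_TABLE_BITS));
	return 0;
}

This is why folio_wait_bit_common() now performs the page_waitqueue(&folio->page) lookup itself: sleeper and waker both hash the same head-page pointer, so they always agree on the bucket.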