From: "Matthew Wilcox (Oracle)"
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-cachefs@redhat.com,
	linux-afs@lists.infradead.org
Subject: [PATCH v6 27/27] mm/filemap: Convert page wait queues to be folios
Date: Wed, 31 Mar 2021 19:47:28 +0100
Message-Id: <20210331184728.1188084-28-willy@infradead.org>
In-Reply-To: <20210331184728.1188084-1-willy@infradead.org>
References: <20210331184728.1188084-1-willy@infradead.org>

Reinforce that if we're waiting for a bit in a struct page, that's
actually in the head page by changing the type from page to folio.
Increases the size of cachefiles by two bytes, but the kernel core
is unchanged in size.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/cachefiles/rdwr.c    | 16 ++++++++--------
 include/linux/pagemap.h |  8 ++++----
 mm/filemap.c            | 38 +++++++++++++++++++-------------------
 3 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
index 8ffc40e84a59..364af267ebaa 100644
--- a/fs/cachefiles/rdwr.c
+++ b/fs/cachefiles/rdwr.c
@@ -25,20 +25,20 @@ static int cachefiles_read_waiter(wait_queue_entry_t *wait, unsigned mode,
 	struct cachefiles_object *object;
 	struct fscache_retrieval *op = monitor->op;
 	struct wait_page_key *key = _key;
-	struct page *page = wait->private;
+	struct folio *folio = wait->private;
 
 	ASSERT(key);
 
 	_enter("{%lu},%u,%d,{%p,%u}",
 	       monitor->netfs_page->index, mode, sync,
-	       key->page, key->bit_nr);
+	       key->folio, key->bit_nr);
 
-	if (key->page != page || key->bit_nr != PG_locked)
+	if (key->folio != folio || key->bit_nr != PG_locked)
 		return 0;
 
-	_debug("--- monitor %p %lx ---", page, page->flags);
+	_debug("--- monitor %p %lx ---", folio, folio->flags);
 
-	if (!PageUptodate(page) && !PageError(page)) {
+	if (!FolioUptodate(folio) && !FolioError(folio)) {
 		/* unlocked, not uptodate and not erronous? */
 		_debug("page probably truncated");
 	}
@@ -107,7 +107,7 @@ static int cachefiles_read_reissue(struct cachefiles_object *object,
 	put_page(backpage2);
 
 	INIT_LIST_HEAD(&monitor->op_link);
-	add_page_wait_queue(backpage, &monitor->monitor);
+	add_folio_wait_queue(page_folio(backpage), &monitor->monitor);
 
 	if (trylock_page(backpage)) {
 		ret = -EIO;
@@ -294,7 +294,7 @@ static int cachefiles_read_backing_file_one(struct cachefiles_object *object,
 	get_page(backpage);
 	monitor->back_page = backpage;
 	monitor->monitor.private = backpage;
-	add_page_wait_queue(backpage, &monitor->monitor);
+	add_folio_wait_queue(page_folio(backpage), &monitor->monitor);
 	monitor = NULL;
 
 	/* but the page may have been read before the monitor was installed, so
@@ -548,7 +548,7 @@ static int cachefiles_read_backing_file(struct cachefiles_object *object,
 		get_page(backpage);
 		monitor->back_page = backpage;
 		monitor->monitor.private = backpage;
-		add_page_wait_queue(backpage, &monitor->monitor);
+		add_folio_wait_queue(page_folio(backpage), &monitor->monitor);
 		monitor = NULL;
 
 		/* but the page may have been read before the monitor was
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index d800fae55f98..bf38ce40694d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -690,13 +690,13 @@ static inline pgoff_t linear_page_index(struct vm_area_struct *vma,
 }
 
 struct wait_page_key {
-	struct page *page;
+	struct folio *folio;
 	int bit_nr;
 	int page_match;
 };
 
 struct wait_page_queue {
-	struct page *page;
+	struct folio *folio;
 	int bit_nr;
 	wait_queue_entry_t wait;
 };
@@ -704,7 +704,7 @@ struct wait_page_queue {
 static inline bool wake_page_match(struct wait_page_queue *wait_page,
 				  struct wait_page_key *key)
 {
-	if (wait_page->page != key->page)
+	if (wait_page->folio != key->folio)
 		return false;
 	key->page_match = 1;
 
@@ -841,7 +841,7 @@ void page_endio(struct page *page, bool is_write, int err);
 /*
  * Add an arbitrary waiter to a page's wait queue
  */
-extern void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter);
+void add_folio_wait_queue(struct folio *folio, wait_queue_entry_t *waiter);
 
 /*
  * Fault everything in given userspace address range in.
diff --git a/mm/filemap.c b/mm/filemap.c
index 51b2091d402c..b93ea19afd89 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1019,11 +1019,11 @@ EXPORT_SYMBOL(__page_cache_alloc);
  */
 #define PAGE_WAIT_TABLE_BITS 8
 #define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
-static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;
+static wait_queue_head_t folio_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;
 
-static wait_queue_head_t *page_waitqueue(struct page *page)
+static wait_queue_head_t *folio_waitqueue(struct folio *folio)
 {
-	return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
+	return &folio_wait_table[hash_ptr(folio, PAGE_WAIT_TABLE_BITS)];
 }
 
 void __init pagecache_init(void)
@@ -1031,7 +1031,7 @@ void __init pagecache_init(void)
 	int i;
 
 	for (i = 0; i < PAGE_WAIT_TABLE_SIZE; i++)
-		init_waitqueue_head(&page_wait_table[i]);
+		init_waitqueue_head(&folio_wait_table[i]);
 
 	page_writeback_init();
 }
@@ -1086,10 +1086,10 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 	 */
 	flags = wait->flags;
 	if (flags & WQ_FLAG_EXCLUSIVE) {
-		if (test_bit(key->bit_nr, &key->page->flags))
+		if (test_bit(key->bit_nr, &key->folio->flags))
 			return -1;
 		if (flags & WQ_FLAG_CUSTOM) {
-			if (test_and_set_bit(key->bit_nr, &key->page->flags))
+			if (test_and_set_bit(key->bit_nr, &key->folio->flags))
 				return -1;
 			flags |= WQ_FLAG_DONE;
 		}
@@ -1123,12 +1123,12 @@ static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync,
 
 static void wake_up_folio_bit(struct folio *folio, int bit_nr)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	wait_queue_head_t *q = folio_waitqueue(folio);
 	struct wait_page_key key;
 	unsigned long flags;
 	wait_queue_entry_t bookmark;
 
-	key.page = &folio->page;
+	key.folio = folio;
 	key.bit_nr = bit_nr;
 	key.page_match = 0;
 
@@ -1220,7 +1220,7 @@ int sysctl_page_lock_unfairness = 5;
 static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr,
 	int state, enum behavior behavior)
 {
-	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	wait_queue_head_t *q = folio_waitqueue(folio);
 	int unfairness = sysctl_page_lock_unfairness;
 	struct wait_page_queue wait_page;
 	wait_queue_entry_t *wait = &wait_page.wait;
@@ -1240,7 +1240,7 @@ static inline int wait_on_folio_bit_common(struct folio *folio, int bit_nr,
 
 	init_wait(wait);
 	wait->func = wake_page_function;
-	wait_page.page = &folio->page;
+	wait_page.folio = folio;
 	wait_page.bit_nr = bit_nr;
 
 repeat:
@@ -1389,23 +1389,23 @@ int put_and_wait_on_page_locked(struct page *page, int state)
 }
 
 /**
- * add_page_wait_queue - Add an arbitrary waiter to a page's wait queue
- * @page: Page defining the wait queue of interest
+ * add_folio_wait_queue - Add an arbitrary waiter to a folio's wait queue
+ * @folio: Folio defining the wait queue of interest
  * @waiter: Waiter to add to the queue
  *
- * Add an arbitrary @waiter to the wait queue for the nominated @page.
+ * Add an arbitrary @waiter to the wait queue for the nominated @folio.
  */
-void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter)
+void add_folio_wait_queue(struct folio *folio, wait_queue_entry_t *waiter)
 {
-	wait_queue_head_t *q = page_waitqueue(page);
+	wait_queue_head_t *q = folio_waitqueue(folio);
 	unsigned long flags;
 
 	spin_lock_irqsave(&q->lock, flags);
 	__add_wait_queue_entry_tail(q, waiter);
-	SetPageWaiters(page);
+	SetFolioWaiters(folio);
 	spin_unlock_irqrestore(&q->lock, flags);
 }
-EXPORT_SYMBOL_GPL(add_page_wait_queue);
+EXPORT_SYMBOL_GPL(add_folio_wait_queue);
 
 #ifndef clear_bit_unlock_is_negative_byte
 
@@ -1550,10 +1550,10 @@ EXPORT_SYMBOL_GPL(__lock_folio_killable);
 
 static int __lock_folio_async(struct folio *folio, struct wait_page_queue *wait)
 {
-	struct wait_queue_head *q = page_waitqueue(&folio->page);
+	struct wait_queue_head *q = folio_waitqueue(folio);
 	int ret = 0;
 
-	wait->page = &folio->page;
+	wait->folio = folio;
 	wait->bit_nr = PG_locked;
 
 	spin_lock_irq(&q->lock);
-- 
2.30.2