* [PATCH 00/26] Converting release_page to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Continuing my quest to convert all the aops from pages to folios, this
series converts ->releasepage to ->release_folio.
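
At the address_space_operations level, the change is a new prototype
(both forms appear in the include/linux/fs.h hunk of patch 1):

	int  (*releasepage)(struct page *, gfp_t);	/* old */
	bool (*release_folio)(struct folio *, gfp_t);	/* new */

An implementation returns true once any private data has been dropped
and the folio can be freed, and false if the folio cannot be released
yet.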

Matthew Wilcox (Oracle) (26):
  fs: Add aops->release_folio
  iomap: Convert to release_folio
  9p: Convert to release_folio
  afs: Convert to release_folio
  btrfs: Convert to release_folio
  ceph: Convert to release_folio
  cifs: Convert to release_folio
  erofs: Convert to release_folio
  ext4: Convert to release_folio
  f2fs: Convert to release_folio
  gfs2: Convert to release_folio
  hfs: Convert to release_folio
  hfsplus: Convert to release_folio
  jfs: Convert to release_folio
  nfs: Convert to release_folio
  nilfs2: Remove comment about releasepage
  ocfs2: Convert to release_folio
  orangefs: Convert to release_folio
  reiserfs: Convert to release_folio
  ubifs: Convert to release_folio
  fs: Remove last vestiges of releasepage
  reiserfs: Convert release_buffer_page() to use a folio
  jbd2: Convert jbd2_journal_try_to_free_buffers to take a folio
  jbd2: Convert release_buffer_page() to use a folio
  fs: Change try_to_free_buffers() to take a folio
  fs: Convert drop_buffers() to use a folio

 .../filesystems/caching/netfs-api.rst         |  4 +-
 Documentation/filesystems/locking.rst         | 14 ++---
 Documentation/filesystems/vfs.rst             | 45 ++++++++--------
 fs/9p/vfs_addr.c                              | 17 +++---
 fs/afs/dir.c                                  |  7 ++-
 fs/afs/file.c                                 | 11 ++--
 fs/afs/internal.h                             |  2 +-
 fs/btrfs/disk-io.c                            | 12 ++---
 fs/btrfs/extent_io.c                          | 14 ++---
 fs/btrfs/file.c                               |  2 +-
 fs/btrfs/inode.c                              | 24 ++++-----
 fs/buffer.c                                   | 54 +++++++++----------
 fs/ceph/addr.c                                | 24 ++++-----
 fs/cifs/file.c                                | 14 ++---
 fs/erofs/super.c                              | 16 +++---
 fs/ext4/inode.c                               | 20 +++----
 fs/f2fs/checkpoint.c                          |  2 +-
 fs/f2fs/compress.c                            |  2 +-
 fs/f2fs/data.c                                | 32 +++++------
 fs/f2fs/f2fs.h                                |  2 +-
 fs/f2fs/node.c                                |  2 +-
 fs/gfs2/aops.c                                | 44 +++++++--------
 fs/gfs2/inode.h                               |  2 +-
 fs/gfs2/meta_io.c                             |  4 +-
 fs/hfs/inode.c                                | 23 ++++----
 fs/hfsplus/inode.c                            | 23 ++++----
 fs/iomap/buffered-io.c                        | 22 ++++----
 fs/iomap/trace.h                              |  2 +-
 fs/jbd2/commit.c                              | 14 ++---
 fs/jbd2/transaction.c                         | 14 ++---
 fs/jfs/jfs_metapage.c                         | 16 +++---
 fs/mpage.c                                    |  2 +-
 fs/nfs/file.c                                 | 22 ++++----
 fs/nfs/fscache.h                              | 14 ++---
 fs/nilfs2/inode.c                             |  1 -
 fs/ocfs2/aops.c                               | 10 ++--
 fs/orangefs/inode.c                           |  6 +--
 fs/reiserfs/inode.c                           | 20 +++----
 fs/reiserfs/journal.c                         | 14 ++---
 fs/ubifs/file.c                               | 18 +++----
 fs/xfs/xfs_aops.c                             |  2 +-
 fs/zonefs/super.c                             |  2 +-
 include/linux/buffer_head.h                   |  4 +-
 include/linux/fs.h                            |  2 +-
 include/linux/iomap.h                         |  2 +-
 include/linux/jbd2.h                          |  2 +-
 include/linux/page-flags.h                    |  2 +-
 include/linux/pagemap.h                       |  4 --
 mm/filemap.c                                  |  6 +--
 mm/migrate.c                                  |  2 +-
 mm/vmscan.c                                   |  2 +-
 51 files changed, 309 insertions(+), 312 deletions(-)

-- 
2.34.1



* [PATCH 01/26] fs: Add aops->release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

This replaces aops->releasepage.  Update the documentation, and call it
if it exists.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
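As a sketch of how a filesystem hooks into the new operation (using a
hypothetical "foo" filesystem rather than one converted in this series),
a minimal ->release_folio that only refuses folios still carrying
private data could look like:

	/* Hypothetical example; not part of this patch. */
	static bool foo_release_folio(struct folio *folio, gfp_t gfp)
	{
		/* Private data still attached: the folio cannot be freed yet. */
		if (folio_test_private(folio))
			return false;
		return true;
	}

	static const struct address_space_operations foo_aops = {
		.release_folio	= foo_release_folio,
		/* ... other operations ... */
	};

filemap_release_folio() tries ->release_folio before ->releasepage, so
individual filesystems can be converted one at a time while both hooks
still exist.
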
 .../filesystems/caching/netfs-api.rst         |  4 +-
 Documentation/filesystems/locking.rst         | 14 +++---
 Documentation/filesystems/vfs.rst             | 45 +++++++++----------
 include/linux/fs.h                            |  1 +
 mm/filemap.c                                  |  2 +
 5 files changed, 34 insertions(+), 32 deletions(-)

diff --git a/Documentation/filesystems/caching/netfs-api.rst b/Documentation/filesystems/caching/netfs-api.rst
index 7308d76a29dc..1d18e9def183 100644
--- a/Documentation/filesystems/caching/netfs-api.rst
+++ b/Documentation/filesystems/caching/netfs-api.rst
@@ -433,11 +433,11 @@ has done a write and then the page it wrote from has been released by the VM,
 after which it *has* to look in the cache.
 
 To inform fscache that a page might now be in the cache, the following function
-should be called from the ``releasepage`` address space op::
+should be called from the ``release_folio`` address space op::
 
 	void fscache_note_page_release(struct fscache_cookie *cookie);
 
-if the page has been released (ie. releasepage returned true).
+if the page has been released (ie. release_folio returned true).
 
 Page release and page invalidation should also wait for any mark left on the
 page to say that a DIO write is underway from that page::
diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index aeba2475a53c..2a295bb72dbc 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -249,7 +249,7 @@ prototypes::
 				struct page *page, void *fsdata);
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
-	int (*releasepage) (struct page *, int);
+	int (*release_folio)(struct folio *, gfp_t);
 	void (*freepage)(struct page *);
 	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	bool (*isolate_page) (struct page *, isolate_mode_t);
@@ -270,13 +270,13 @@ ops			PageLocked(page)	 i_rwsem	invalidate_lock
 writepage:		yes, unlocks (see below)
 read_folio:		yes, unlocks				shared
 writepages:
-dirty_folio		maybe
+dirty_folio:		maybe
 readahead:		yes, unlocks				shared
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
 invalidate_folio:	yes					exclusive
-releasepage:		yes
+release_folio:		yes
 freepage:		yes
 direct_IO:
 isolate_page:		yes
@@ -372,10 +372,10 @@ invalidate_lock before invalidating page cache in truncate / hole punch
 path (and thus calling into ->invalidate_folio) to block races between page
 cache invalidation and page cache filling functions (fault, read, ...).
 
-->releasepage() is called when the kernel is about to try to drop the
-buffers from the page in preparation for freeing it.  It returns zero to
-indicate that the buffers are (or may be) freeable.  If ->releasepage is zero,
-the kernel assumes that the fs has no private interest in the buffers.
+->release_folio() is called when the kernel is about to try to drop the
+buffers from the folio in preparation for freeing it.  It returns false to
+indicate that the buffers are (or may be) freeable.  If ->release_folio is
+NULL, the kernel assumes that the fs has no private interest in the buffers.
 
 ->freepage() is called when the kernel is done dropping the page
 from the page cache.
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index 0919a4ad973a..679887b5c8fc 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -620,9 +620,9 @@ Writeback.
 The first can be used independently to the others.  The VM can try to
 either write dirty pages in order to clean them, or release clean pages
 in order to reuse them.  To do this it can call the ->writepage method
-on dirty pages, and ->releasepage on clean pages with PagePrivate set.
-Clean pages without PagePrivate and with no external references will be
-released without notice being given to the address_space.
+on dirty pages, and ->release_folio on clean folios with the private
+flag set.  Clean pages without PagePrivate and with no external references
+will be released without notice being given to the address_space.
 
 To achieve this functionality, pages need to be placed on an LRU with
 lru_cache_add and mark_page_active needs to be called whenever the page
@@ -734,7 +734,7 @@ cache in your filesystem.  The following members are defined:
 				 struct page *page, void *fsdata);
 		sector_t (*bmap)(struct address_space *, sector_t);
 		void (*invalidate_folio) (struct folio *, size_t start, size_t len);
-		int (*releasepage) (struct page *, int);
+		bool (*release_folio)(struct folio *, gfp_t);
 		void (*freepage)(struct page *);
 		ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 		/* isolate a page for migration */
@@ -864,33 +864,32 @@ cache in your filesystem.  The following members are defined:
 	address space.  This generally corresponds to either a
 	truncation, punch hole or a complete invalidation of the address
 	space (in the latter case 'offset' will always be 0 and 'length'
-	will be folio_size()).  Any private data associated with the page
+	will be folio_size()).  Any private data associated with the folio
 	should be updated to reflect this truncation.  If offset is 0
 	and length is folio_size(), then the private data should be
-	released, because the page must be able to be completely
-	discarded.  This may be done by calling the ->releasepage
+	released, because the folio must be able to be completely
+	discarded.  This may be done by calling the ->release_folio
 	function, but in this case the release MUST succeed.
 
-``releasepage``
-	releasepage is called on PagePrivate pages to indicate that the
-	page should be freed if possible.  ->releasepage should remove
-	any private data from the page and clear the PagePrivate flag.
-	If releasepage() fails for some reason, it must indicate failure
-	with a 0 return value.  releasepage() is used in two distinct
-	though related cases.  The first is when the VM finds a clean
-	page with no active users and wants to make it a free page.  If
-	->releasepage succeeds, the page will be removed from the
-	address_space and become free.
+``release_folio``
+	release_folio is called on folios with private data to tell the
+	filesystem that the folio is about to be freed.  ->release_folio
+	should remove any private data from the folio and clear the
+	private flag.  If release_folio() fails, it should return false.
+	release_folio() is used in two distinct though related cases.
+	The first is when the VM wants to free a clean folio with no
+	active users.  If ->release_folio succeeds, the folio will be
+	removed from the address_space and be freed.
 
 	The second case is when a request has been made to invalidate
-	some or all pages in an address_space.  This can happen through
-	the fadvise(POSIX_FADV_DONTNEED) system call or by the
-	filesystem explicitly requesting it as nfs and 9fs do (when they
+	some or all folios in an address_space.  This can happen
+	through the fadvise(POSIX_FADV_DONTNEED) system call or by the
+	filesystem explicitly requesting it as nfs and 9p do (when they
 	believe the cache may be out of date with storage) by calling
 	invalidate_inode_pages2().  If the filesystem makes such a call,
-	and needs to be certain that all pages are invalidated, then its
-	releasepage will need to ensure this.  Possibly it can clear the
-	PageUptodate bit if it cannot free private data yet.
+	and needs to be certain that all folios are invalidated, then
+	its release_folio will need to ensure this.  Possibly it can
+	clear the uptodate flag if it cannot free private data yet.
 
 ``freepage``
 	freepage is called once the page is no longer visible in the
diff --git a/include/linux/fs.h b/include/linux/fs.h
index f812f5aa07dd..ad768f13f485 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -355,6 +355,7 @@ struct address_space_operations {
 	/* Unfortunately this kludge is needed for FIBMAP. Don't use it */
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t offset, size_t len);
+	bool (*release_folio)(struct folio *, gfp_t);
 	int (*releasepage) (struct page *, gfp_t);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
diff --git a/mm/filemap.c b/mm/filemap.c
index 81a0ed08a82c..40df5704ec39 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3956,6 +3956,8 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
 	if (folio_test_writeback(folio))
 		return false;
 
+	if (mapping && mapping->a_ops->release_folio)
+		return mapping->a_ops->release_folio(folio, gfp);
 	if (mapping && mapping->a_ops->releasepage)
 		return mapping->a_ops->releasepage(&folio->page, gfp);
 	return try_to_free_buffers(&folio->page);
-- 
2.34.1



* [PATCH 02/26] iomap: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Change all the filesystems which used iomap_releasepage to use the
new function.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/gfs2/aops.c         |  2 +-
 fs/iomap/buffered-io.c | 22 ++++++++++------------
 fs/iomap/trace.h       |  2 +-
 fs/xfs/xfs_aops.c      |  2 +-
 fs/zonefs/super.c      |  2 +-
 include/linux/iomap.h  |  2 +-
 6 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 1016631bcbdc..3d6c5c5eb4f1 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -768,7 +768,7 @@ static const struct address_space_operations gfs2_aops = {
 	.read_folio = gfs2_read_folio,
 	.readahead = gfs2_readahead,
 	.dirty_folio = filemap_dirty_folio,
-	.releasepage = iomap_releasepage,
+	.release_folio = iomap_release_folio,
 	.invalidate_folio = iomap_invalidate_folio,
 	.bmap = gfs2_bmap,
 	.direct_IO = noop_direct_IO,
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 2de087ac87b6..8532f0e2e2d6 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -452,25 +452,23 @@ bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
 }
 EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
 
-int
-iomap_releasepage(struct page *page, gfp_t gfp_mask)
+bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags)
 {
-	struct folio *folio = page_folio(page);
-
-	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
+	trace_iomap_release_folio(folio->mapping->host, folio_pos(folio),
 			folio_size(folio));
 
 	/*
-	 * mm accommodates an old ext3 case where clean pages might not have had
-	 * the dirty bit cleared. Thus, it can send actual dirty pages to
-	 * ->releasepage() via shrink_active_list(); skip those here.
+	 * mm accommodates an old ext3 case where clean folios might
+	 * not have had the dirty bit cleared.  Thus, it can send actual
+	 * dirty folios to ->release_folio() via shrink_active_list();
+	 * skip those here.
 	 */
 	if (folio_test_dirty(folio) || folio_test_writeback(folio))
-		return 0;
+		return false;
 	iomap_page_release(folio);
-	return 1;
+	return true;
 }
-EXPORT_SYMBOL_GPL(iomap_releasepage);
+EXPORT_SYMBOL_GPL(iomap_release_folio);
 
 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
 {
@@ -1483,7 +1481,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * Skip the page if it's fully outside i_size, e.g. due to a
 		 * truncate operation that's in progress. We must redirty the
 		 * page so that reclaim stops reclaiming it. Otherwise
-		 * iomap_vm_releasepage() is called on it and gets confused.
+		 * iomap_release_folio() is called on it and gets confused.
 		 *
 		 * Note that the end_index is unsigned long.  If the given
 		 * offset is greater than 16TB on a 32-bit system then if we
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index a6689a563c6e..d48868fc40d7 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -80,7 +80,7 @@ DEFINE_EVENT(iomap_range_class, name,	\
 	TP_PROTO(struct inode *inode, loff_t off, u64 len),\
 	TP_ARGS(inode, off, len))
 DEFINE_RANGE_EVENT(iomap_writepage);
-DEFINE_RANGE_EVENT(iomap_releasepage);
+DEFINE_RANGE_EVENT(iomap_release_folio);
 DEFINE_RANGE_EVENT(iomap_invalidate_folio);
 DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail);
 
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index a9c4bb500d53..2acbfc6925dd 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -568,7 +568,7 @@ const struct address_space_operations xfs_address_space_operations = {
 	.readahead		= xfs_vm_readahead,
 	.writepages		= xfs_vm_writepages,
 	.dirty_folio		= filemap_dirty_folio,
-	.releasepage		= iomap_releasepage,
+	.release_folio		= iomap_release_folio,
 	.invalidate_folio	= iomap_invalidate_folio,
 	.bmap			= xfs_vm_bmap,
 	.direct_IO		= noop_direct_IO,
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index c3a38f711b24..b1a428f860b3 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -197,7 +197,7 @@ static const struct address_space_operations zonefs_file_aops = {
 	.writepage		= zonefs_writepage,
 	.writepages		= zonefs_writepages,
 	.dirty_folio		= filemap_dirty_folio,
-	.releasepage		= iomap_releasepage,
+	.release_folio		= iomap_release_folio,
 	.invalidate_folio	= iomap_invalidate_folio,
 	.migratepage		= iomap_migrate_page,
 	.is_partially_uptodate	= iomap_is_partially_uptodate,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 5b2aa45ddda3..0d674695b6d3 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -228,7 +228,7 @@ ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops);
 void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops);
 bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count);
-int iomap_releasepage(struct page *page, gfp_t gfp_mask);
+bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags);
 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
 #ifdef CONFIG_MIGRATION
 int iomap_migrate_page(struct address_space *mapping, struct page *newpage,
-- 
2.34.1



* [PATCH 03/26] 9p: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

A straightforward conversion as it already works in terms of folios.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/9p/vfs_addr.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 3a84167f4893..8ce82ff1e40a 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -100,29 +100,28 @@ const struct netfs_request_ops v9fs_req_ops = {
 };
 
 /**
- * v9fs_release_page - release the private state associated with a page
- * @page: The page to be released
+ * v9fs_release_folio - release the private state associated with a folio
+ * @folio: The folio to be released
  * @gfp: The caller's allocation restrictions
  *
- * Returns 1 if the page can be released, false otherwise.
+ * Returns true if the page can be released, false otherwise.
  */
 
-static int v9fs_release_page(struct page *page, gfp_t gfp)
+static bool v9fs_release_folio(struct folio *folio, gfp_t gfp)
 {
-	struct folio *folio = page_folio(page);
 	struct inode *inode = folio_inode(folio);
 
 	if (folio_test_private(folio))
-		return 0;
+		return false;
 #ifdef CONFIG_9P_FSCACHE
 	if (folio_test_fscache(folio)) {
 		if (current_is_kswapd() || !(gfp & __GFP_FS))
-			return 0;
+			return false;
 		folio_wait_fscache(folio);
 	}
 #endif
 	fscache_note_page_release(v9fs_inode_cookie(V9FS_I(inode)));
-	return 1;
+	return true;
 }
 
 static void v9fs_invalidate_folio(struct folio *folio, size_t offset,
@@ -342,7 +341,7 @@ const struct address_space_operations v9fs_addr_operations = {
 	.writepage = v9fs_vfs_writepage,
 	.write_begin = v9fs_write_begin,
 	.write_end = v9fs_write_end,
-	.releasepage = v9fs_release_page,
+	.release_folio = v9fs_release_folio,
 	.invalidate_folio = v9fs_invalidate_folio,
 	.launder_folio = v9fs_launder_folio,
 	.direct_IO = v9fs_direct_IO,
-- 
2.34.1



* [PATCH 04/26] afs: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

A straightforward conversion as they already work in terms of folios.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/afs/dir.c      |  7 +++----
 fs/afs/file.c     | 11 +++++------
 fs/afs/internal.h |  2 +-
 3 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 932e61e28e5d..94aa7356248e 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -41,7 +41,7 @@ static int afs_symlink(struct user_namespace *mnt_userns, struct inode *dir,
 static int afs_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
 		      struct dentry *old_dentry, struct inode *new_dir,
 		      struct dentry *new_dentry, unsigned int flags);
-static int afs_dir_releasepage(struct page *page, gfp_t gfp_flags);
+static bool afs_dir_release_folio(struct folio *folio, gfp_t gfp_flags);
 static void afs_dir_invalidate_folio(struct folio *folio, size_t offset,
 				   size_t length);
 
@@ -75,7 +75,7 @@ const struct inode_operations afs_dir_inode_operations = {
 
 const struct address_space_operations afs_dir_aops = {
 	.dirty_folio	= afs_dir_dirty_folio,
-	.releasepage	= afs_dir_releasepage,
+	.release_folio	= afs_dir_release_folio,
 	.invalidate_folio = afs_dir_invalidate_folio,
 };
 
@@ -2002,9 +2002,8 @@ static int afs_rename(struct user_namespace *mnt_userns, struct inode *old_dir,
  * Release a directory folio and clean up its private state if it's not busy
  * - return true if the folio can now be released, false if not
  */
-static int afs_dir_releasepage(struct page *subpage, gfp_t gfp_flags)
+static bool afs_dir_release_folio(struct folio *folio, gfp_t gfp_flags)
 {
-	struct folio *folio = page_folio(subpage);
 	struct afs_vnode *dvnode = AFS_FS_I(folio_inode(folio));
 
 	_enter("{{%llx:%llu}[%lu]}", dvnode->fid.vid, dvnode->fid.vnode, folio_index(folio));
diff --git a/fs/afs/file.c b/fs/afs/file.c
index 65ef69a1f78e..a8e8832179e4 100644
--- a/fs/afs/file.c
+++ b/fs/afs/file.c
@@ -22,7 +22,7 @@ static int afs_file_mmap(struct file *file, struct vm_area_struct *vma);
 static int afs_symlink_read_folio(struct file *file, struct folio *folio);
 static void afs_invalidate_folio(struct folio *folio, size_t offset,
 			       size_t length);
-static int afs_releasepage(struct page *page, gfp_t gfp_flags);
+static bool afs_release_folio(struct folio *folio, gfp_t gfp_flags);
 
 static ssize_t afs_file_read_iter(struct kiocb *iocb, struct iov_iter *iter);
 static void afs_vm_open(struct vm_area_struct *area);
@@ -54,7 +54,7 @@ const struct address_space_operations afs_file_aops = {
 	.readahead	= netfs_readahead,
 	.dirty_folio	= afs_dirty_folio,
 	.launder_folio	= afs_launder_folio,
-	.releasepage	= afs_releasepage,
+	.release_folio	= afs_release_folio,
 	.invalidate_folio = afs_invalidate_folio,
 	.write_begin	= afs_write_begin,
 	.write_end	= afs_write_end,
@@ -64,7 +64,7 @@ const struct address_space_operations afs_file_aops = {
 
 const struct address_space_operations afs_symlink_aops = {
 	.read_folio	= afs_symlink_read_folio,
-	.releasepage	= afs_releasepage,
+	.release_folio	= afs_release_folio,
 	.invalidate_folio = afs_invalidate_folio,
 };
 
@@ -481,16 +481,15 @@ static void afs_invalidate_folio(struct folio *folio, size_t offset,
  * release a page and clean up its private state if it's not busy
  * - return true if the page can now be released, false if not
  */
-static int afs_releasepage(struct page *page, gfp_t gfp)
+static bool afs_release_folio(struct folio *folio, gfp_t gfp)
 {
-	struct folio *folio = page_folio(page);
 	struct afs_vnode *vnode = AFS_FS_I(folio_inode(folio));
 
 	_enter("{{%llx:%llu}[%lu],%lx},%x",
 	       vnode->fid.vid, vnode->fid.vnode, folio_index(folio), folio->flags,
 	       gfp);
 
-	/* deny if page is being written to the cache and the caller hasn't
+	/* deny if folio is being written to the cache and the caller hasn't
 	 * elected to wait */
 #ifdef CONFIG_AFS_FSCACHE
 	if (folio_test_fscache(folio)) {
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 7a72e9c60423..a30995901266 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -311,7 +311,7 @@ struct afs_net {
 	atomic_t		n_lookup;	/* Number of lookups done */
 	atomic_t		n_reval;	/* Number of dentries needing revalidation */
 	atomic_t		n_inval;	/* Number of invalidations by the server */
-	atomic_t		n_relpg;	/* Number of invalidations by releasepage */
+	atomic_t		n_relpg;	/* Number of invalidations by release_folio */
 	atomic_t		n_read_dir;	/* Number of directory pages read */
 	atomic_t		n_dir_cr;	/* Number of directory entry creation edits */
 	atomic_t		n_dir_rm;	/* Number of directory entry removal edits */
-- 
2.34.1



* [PATCH 05/26] btrfs: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

I've only converted the outer layers of the btrfs release_folio paths
to use folios; the use of folios should be pushed further down into
btrfs from here.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
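The pattern for these partial conversions (sketched here with a
hypothetical helper name, not actual btrfs code) is to take a folio at
the aops boundary and bridge into not-yet-converted page-based helpers
via &folio->page:

	/* A page-based helper that has not yet been converted to folios. */
	static int foo_try_release_page(struct page *page, gfp_t gfp);

	static bool foo_release_folio(struct folio *folio, gfp_t gfp)
	{
		if (folio_test_writeback(folio) || folio_test_dirty(folio))
			return false;
		/* Bridge to the legacy helper until it takes a folio. */
		return foo_try_release_page(&folio->page, gfp);
	}

Pushing folios further down means converting helpers like
foo_try_release_page() to take a folio directly, at which point the
&folio->page bridge goes away.
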
 fs/btrfs/disk-io.c   | 12 ++++++------
 fs/btrfs/extent_io.c | 14 +++++++-------
 fs/btrfs/file.c      |  2 +-
 fs/btrfs/inode.c     | 24 ++++++++++++------------
 4 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index ed8e288cc369..7b8b86c1e3a9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1005,12 +1005,12 @@ static int btree_writepages(struct address_space *mapping,
 	return btree_write_cache_pages(mapping, wbc);
 }
 
-static int btree_releasepage(struct page *page, gfp_t gfp_flags)
+static bool btree_release_folio(struct folio *folio, gfp_t gfp_flags)
 {
-	if (PageWriteback(page) || PageDirty(page))
-		return 0;
+	if (folio_test_writeback(folio) || folio_test_dirty(folio))
+		return false;
 
-	return try_release_extent_buffer(page);
+	return try_release_extent_buffer(&folio->page);
 }
 
 static void btree_invalidate_folio(struct folio *folio, size_t offset,
@@ -1019,7 +1019,7 @@ static void btree_invalidate_folio(struct folio *folio, size_t offset,
 	struct extent_io_tree *tree;
 	tree = &BTRFS_I(folio->mapping->host)->io_tree;
 	extent_invalidate_folio(tree, folio, offset);
-	btree_releasepage(&folio->page, GFP_NOFS);
+	btree_release_folio(folio, GFP_NOFS);
 	if (folio_get_private(folio)) {
 		btrfs_warn(BTRFS_I(folio->mapping->host)->root->fs_info,
 			   "folio private not zero on folio %llu",
@@ -1080,7 +1080,7 @@ static bool btree_dirty_folio(struct address_space *mapping,
 
 static const struct address_space_operations btree_aops = {
 	.writepages	= btree_writepages,
-	.releasepage	= btree_releasepage,
+	.release_folio	= btree_release_folio,
 	.invalidate_folio = btree_invalidate_folio,
 #ifdef CONFIG_MIGRATION
 	.migratepage	= btree_migratepage,
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 33c19f51d79b..e7a6e8757859 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -5271,7 +5271,7 @@ int extent_invalidate_folio(struct extent_io_tree *tree,
 }
 
 /*
- * a helper for releasepage, this tests for areas of the page that
+ * a helper for release_folio, this tests for areas of the page that
  * are locked or under IO and drops the related state bits if it is safe
  * to drop the page.
  */
@@ -5307,7 +5307,7 @@ static int try_release_extent_state(struct extent_io_tree *tree,
 }
 
 /*
- * a helper for releasepage.  As long as there are no locked extents
+ * a helper for release_folio.  As long as there are no locked extents
  * in the range corresponding to the page, both state records and extent
  * map records are removed
  */
@@ -6001,10 +6001,10 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
 	 *
 	 * It is only cleared in two cases: freeing the last non-tree
 	 * reference to the extent_buffer when its STALE bit is set or
-	 * calling releasepage when the tree reference is the only reference.
+	 * calling release_folio when the tree reference is the only reference.
 	 *
 	 * In both cases, care is taken to ensure that the extent_buffer's
-	 * pages are not under io. However, releasepage can be concurrently
+	 * pages are not under io. However, release_folio can be concurrently
 	 * called with creating new references, which is prone to race
 	 * conditions between the calls to check_buffer_tree_ref in those
 	 * codepaths and clearing TREE_REF in try_release_extent_buffer.
@@ -6257,7 +6257,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 		/*
 		 * We can't unlock the pages just yet since the extent buffer
 		 * hasn't been properly inserted in the radix tree, this
-		 * opens a race with btree_releasepage which can free a page
+		 * opens a race with btree_release_folio which can free a page
 		 * while we are still filling in all pages for the buffer and
 		 * we could crash.
 		 */
@@ -6289,7 +6289,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 
 	/*
 	 * Now it's safe to unlock the pages because any calls to
-	 * btree_releasepage will correctly detect that a page belongs to a
+	 * btree_release_folio will correctly detect that a page belongs to a
 	 * live buffer and won't free them prematurely.
 	 */
 	for (i = 0; i < num_pages; i++)
@@ -6659,7 +6659,7 @@ int read_extent_buffer_pages(struct extent_buffer *eb, int wait, int mirror_num)
 	eb->read_mirror = 0;
 	atomic_set(&eb->io_pages, num_reads);
 	/*
-	 * It is possible for releasepage to clear the TREE_REF bit before we
+	 * It is possible for release_folio to clear the TREE_REF bit before we
 	 * set io_pages. See check_buffer_tree_ref for a more detailed comment.
 	 */
 	check_buffer_tree_ref(eb);
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 57fba5abb059..c1eadb3f715c 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1323,7 +1323,7 @@ static int prepare_uptodate_page(struct inode *inode,
 
 		/*
 		 * Since btrfs_read_folio() will unlock the folio before it
-		 * returns, there is a window where btrfs_releasepage() can be
+		 * returns, there is a window where btrfs_release_folio() can be
 		 * called to release the page.  Here we check both inode
 		 * mapping and PagePrivate() to make sure the page was not
 		 * released.
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 34d452d350d6..4e1c3af82b35 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -8183,7 +8183,7 @@ static void btrfs_readahead(struct readahead_control *rac)
 }
 
 /*
- * For releasepage() and invalidate_folio() we have a race window where
+ * For release_folio() and invalidate_folio() we have a race window where
  * folio_end_writeback() is called but the subpage spinlock is not yet released.
  * If we continue to release/invalidate the page, we could cause use-after-free
  * for subpage spinlock.  So this function is to spin and wait for subpage
@@ -8215,22 +8215,22 @@ static void wait_subpage_spinlock(struct page *page)
 	spin_unlock_irq(&subpage->lock);
 }
 
-static int __btrfs_releasepage(struct page *page, gfp_t gfp_flags)
+static bool __btrfs_release_folio(struct folio *folio, gfp_t gfp_flags)
 {
-	int ret = try_release_extent_mapping(page, gfp_flags);
+	int ret = try_release_extent_mapping(&folio->page, gfp_flags);
 
 	if (ret == 1) {
-		wait_subpage_spinlock(page);
-		clear_page_extent_mapped(page);
+		wait_subpage_spinlock(&folio->page);
+		clear_page_extent_mapped(&folio->page);
 	}
 	return ret;
 }
 
-static int btrfs_releasepage(struct page *page, gfp_t gfp_flags)
+static bool btrfs_release_folio(struct folio *folio, gfp_t gfp_flags)
 {
-	if (PageWriteback(page) || PageDirty(page))
-		return 0;
-	return __btrfs_releasepage(page, gfp_flags);
+	if (folio_test_writeback(folio) || folio_test_dirty(folio))
+		return false;
+	return __btrfs_release_folio(folio, gfp_flags);
 }
 
 #ifdef CONFIG_MIGRATION
@@ -8301,7 +8301,7 @@ static void btrfs_invalidate_folio(struct folio *folio, size_t offset,
 	 * still safe to wait for ordered extent to finish.
 	 */
 	if (!(offset == 0 && length == folio_size(folio))) {
-		btrfs_releasepage(&folio->page, GFP_NOFS);
+		btrfs_release_folio(folio, GFP_NOFS);
 		return;
 	}
 
@@ -8425,7 +8425,7 @@ static void btrfs_invalidate_folio(struct folio *folio, size_t offset,
 	ASSERT(!folio_test_ordered(folio));
 	btrfs_page_clear_checked(fs_info, &folio->page, folio_pos(folio), folio_size(folio));
 	if (!inode_evicting)
-		__btrfs_releasepage(&folio->page, GFP_NOFS);
+		__btrfs_release_folio(folio, GFP_NOFS);
 	clear_page_extent_mapped(&folio->page);
 }
 
@@ -11375,7 +11375,7 @@ static const struct address_space_operations btrfs_aops = {
 	.readahead	= btrfs_readahead,
 	.direct_IO	= noop_direct_IO,
 	.invalidate_folio = btrfs_invalidate_folio,
-	.releasepage	= btrfs_releasepage,
+	.release_folio	= btrfs_release_folio,
 #ifdef CONFIG_MIGRATION
 	.migratepage	= btrfs_migratepage,
 #endif
-- 
2.34.1



* [PATCH 06/26] ceph: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use a folio throughout ceph_release_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ceph/addr.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e040b92bb17c..737d13931d52 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -162,24 +162,24 @@ static void ceph_invalidate_folio(struct folio *folio, size_t offset,
 	folio_wait_fscache(folio);
 }
 
-static int ceph_releasepage(struct page *page, gfp_t gfp)
+static bool ceph_release_folio(struct folio *folio, gfp_t gfp)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 
-	dout("%llx:%llx releasepage %p idx %lu (%sdirty)\n",
-	     ceph_vinop(inode), page,
-	     page->index, PageDirty(page) ? "" : "not ");
+	dout("%llx:%llx release_folio idx %lu (%sdirty)\n",
+	     ceph_vinop(inode),
+	     folio->index, folio_test_dirty(folio) ? "" : "not ");
 
-	if (PagePrivate(page))
-		return 0;
+	if (folio_test_private(folio))
+		return false;
 
-	if (PageFsCache(page)) {
+	if (folio_test_fscache(folio)) {
 		if (current_is_kswapd() || !(gfp & __GFP_FS))
-			return 0;
-		wait_on_page_fscache(page);
+			return false;
+		folio_wait_fscache(folio);
 	}
 	ceph_fscache_note_page_release(inode);
-	return 1;
+	return true;
 }
 
 static void ceph_netfs_expand_readahead(struct netfs_io_request *rreq)
@@ -1380,7 +1380,7 @@ const struct address_space_operations ceph_aops = {
 	.write_end = ceph_write_end,
 	.dirty_folio = ceph_dirty_folio,
 	.invalidate_folio = ceph_invalidate_folio,
-	.releasepage = ceph_releasepage,
+	.release_folio = ceph_release_folio,
 	.direct_IO = noop_direct_IO,
 };
 
-- 
2.34.1



* [PATCH 07/26] cifs: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use a folio throughout cifs_release_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/cifs/file.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index bc6d88e2e672..06003bb9cbe9 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -4758,16 +4758,16 @@ static int cifs_write_begin(struct file *file, struct address_space *mapping,
 	return rc;
 }
 
-static int cifs_release_page(struct page *page, gfp_t gfp)
+static bool cifs_release_folio(struct folio *folio, gfp_t gfp)
 {
-	if (PagePrivate(page))
+	if (folio_test_private(folio))
 		return 0;
-	if (PageFsCache(page)) {
+	if (folio_test_fscache(folio)) {
 		if (current_is_kswapd() || !(gfp & __GFP_FS))
 			return false;
-		wait_on_page_fscache(page);
+		folio_wait_fscache(folio);
 	}
-	fscache_note_page_release(cifs_inode_cookie(page->mapping->host));
+	fscache_note_page_release(cifs_inode_cookie(folio->mapping->host));
 	return true;
 }
 
@@ -4973,7 +4973,7 @@ const struct address_space_operations cifs_addr_ops = {
 	.write_begin = cifs_write_begin,
 	.write_end = cifs_write_end,
 	.dirty_folio = cifs_dirty_folio,
-	.releasepage = cifs_release_page,
+	.release_folio = cifs_release_folio,
 	.direct_IO = cifs_direct_io,
 	.invalidate_folio = cifs_invalidate_folio,
 	.launder_folio = cifs_launder_folio,
@@ -4998,7 +4998,7 @@ const struct address_space_operations cifs_addr_ops_smallbuf = {
 	.write_begin = cifs_write_begin,
 	.write_end = cifs_write_end,
 	.dirty_folio = cifs_dirty_folio,
-	.releasepage = cifs_release_page,
+	.release_folio = cifs_release_folio,
 	.invalidate_folio = cifs_invalidate_folio,
 	.launder_folio = cifs_launder_folio,
 };
-- 
2.34.1



* [PATCH 08/26] erofs: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use a folio in erofs_managed_cache_release_folio(), but the use of folios
should be pushed into erofs_try_to_free_cached_page().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/erofs/super.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index 0c4b41130c2f..0e3862a72bfe 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -518,16 +518,16 @@ static int erofs_fc_parse_param(struct fs_context *fc,
 #ifdef CONFIG_EROFS_FS_ZIP
 static const struct address_space_operations managed_cache_aops;
 
-static int erofs_managed_cache_releasepage(struct page *page, gfp_t gfp_mask)
+static bool erofs_managed_cache_release_folio(struct folio *folio, gfp_t gfp)
 {
-	int ret = 1;	/* 0 - busy */
-	struct address_space *const mapping = page->mapping;
+	bool ret = true;
+	struct address_space *const mapping = folio->mapping;
 
-	DBG_BUGON(!PageLocked(page));
+	DBG_BUGON(!folio_test_locked(folio));
 	DBG_BUGON(mapping->a_ops != &managed_cache_aops);
 
-	if (PagePrivate(page))
-		ret = erofs_try_to_free_cached_page(page);
+	if (folio_test_private(folio))
+		ret = erofs_try_to_free_cached_page(&folio->page);
 
 	return ret;
 }
@@ -548,12 +548,12 @@ static void erofs_managed_cache_invalidate_folio(struct folio *folio,
 	DBG_BUGON(stop > folio_size(folio) || stop < length);
 
 	if (offset == 0 && stop == folio_size(folio))
-		while (!erofs_managed_cache_releasepage(&folio->page, GFP_NOFS))
+		while (!erofs_managed_cache_release_folio(folio, GFP_NOFS))
 			cond_resched();
 }
 
 static const struct address_space_operations managed_cache_aops = {
-	.releasepage = erofs_managed_cache_releasepage,
+	.release_folio = erofs_managed_cache_release_folio,
 	.invalidate_folio = erofs_managed_cache_invalidate_folio,
 };
 
-- 
2.34.1



* [PATCH 09/26] ext4: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

The use of folios should be pushed deeper into ext4 from here.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inode.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index c6b8cb4949f1..52c46ac5bc8a 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3243,19 +3243,19 @@ static void ext4_journalled_invalidate_folio(struct folio *folio,
 	WARN_ON(__ext4_journalled_invalidate_folio(folio, offset, length) < 0);
 }
 
-static int ext4_releasepage(struct page *page, gfp_t wait)
+static bool ext4_release_folio(struct folio *folio, gfp_t wait)
 {
-	journal_t *journal = EXT4_JOURNAL(page->mapping->host);
+	journal_t *journal = EXT4_JOURNAL(folio->mapping->host);
 
-	trace_ext4_releasepage(page);
+	trace_ext4_releasepage(&folio->page);
 
 	/* Page has dirty journalled data -> cannot release */
-	if (PageChecked(page))
-		return 0;
+	if (folio_test_checked(folio))
+		return false;
 	if (journal)
-		return jbd2_journal_try_to_free_buffers(journal, page);
+		return jbd2_journal_try_to_free_buffers(journal, &folio->page);
 	else
-		return try_to_free_buffers(page);
+		return try_to_free_buffers(&folio->page);
 }
 
 static bool ext4_inode_datasync_dirty(struct inode *inode)
@@ -3618,7 +3618,7 @@ static const struct address_space_operations ext4_aops = {
 	.dirty_folio		= ext4_dirty_folio,
 	.bmap			= ext4_bmap,
 	.invalidate_folio	= ext4_invalidate_folio,
-	.releasepage		= ext4_releasepage,
+	.release_folio		= ext4_release_folio,
 	.direct_IO		= noop_direct_IO,
 	.migratepage		= buffer_migrate_page,
 	.is_partially_uptodate  = block_is_partially_uptodate,
@@ -3636,7 +3636,7 @@ static const struct address_space_operations ext4_journalled_aops = {
 	.dirty_folio		= ext4_journalled_dirty_folio,
 	.bmap			= ext4_bmap,
 	.invalidate_folio	= ext4_journalled_invalidate_folio,
-	.releasepage		= ext4_releasepage,
+	.release_folio		= ext4_release_folio,
 	.direct_IO		= noop_direct_IO,
 	.is_partially_uptodate  = block_is_partially_uptodate,
 	.error_remove_page	= generic_error_remove_page,
@@ -3653,7 +3653,7 @@ static const struct address_space_operations ext4_da_aops = {
 	.dirty_folio		= ext4_dirty_folio,
 	.bmap			= ext4_bmap,
 	.invalidate_folio	= ext4_invalidate_folio,
-	.releasepage		= ext4_releasepage,
+	.release_folio		= ext4_release_folio,
 	.direct_IO		= noop_direct_IO,
 	.migratepage		= buffer_migrate_page,
 	.is_partially_uptodate  = block_is_partially_uptodate,
-- 
2.34.1



* [PATCH 10/26] f2fs: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

While converting f2fs_release_page() to f2fs_release_folio(), cache the
sb_info so we don't need to retrieve it twice, and remove the redundant
call to set_page_private().  The use of folios should be pushed further
into f2fs from here.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/f2fs/checkpoint.c |  2 +-
 fs/f2fs/compress.c   |  2 +-
 fs/f2fs/data.c       | 32 +++++++++++++++++---------------
 fs/f2fs/f2fs.h       |  2 +-
 fs/f2fs/node.c       |  2 +-
 5 files changed, 21 insertions(+), 19 deletions(-)

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 909085a78f9c..456c1e89386a 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -468,7 +468,7 @@ const struct address_space_operations f2fs_meta_aops = {
 	.writepages	= f2fs_write_meta_pages,
 	.dirty_folio	= f2fs_dirty_meta_folio,
 	.invalidate_folio = f2fs_invalidate_folio,
-	.releasepage	= f2fs_release_page,
+	.release_folio	= f2fs_release_folio,
 #ifdef CONFIG_MIGRATION
 	.migratepage    = f2fs_migrate_page,
 #endif
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 12a56f9e1572..24824cd96f36 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1746,7 +1746,7 @@ unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn)
 }
 
 const struct address_space_operations f2fs_compress_aops = {
-	.releasepage = f2fs_release_page,
+	.release_folio = f2fs_release_folio,
 	.invalidate_folio = f2fs_invalidate_folio,
 };
 
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index f894267f0722..8f38c26bb16c 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -3528,28 +3528,30 @@ void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
 	folio_detach_private(folio);
 }
 
-int f2fs_release_page(struct page *page, gfp_t wait)
+bool f2fs_release_folio(struct folio *folio, gfp_t wait)
 {
-	/* If this is dirty page, keep PagePrivate */
-	if (PageDirty(page))
-		return 0;
+	struct f2fs_sb_info *sbi;
+
+	/* If this is dirty folio, keep private data */
+	if (folio_test_dirty(folio))
+		return false;
 
 	/* This is atomic written page, keep Private */
-	if (page_private_atomic(page))
-		return 0;
+	if (page_private_atomic(&folio->page))
+		return false;
 
-	if (test_opt(F2FS_P_SB(page), COMPRESS_CACHE)) {
-		struct inode *inode = page->mapping->host;
+	sbi = F2FS_M_SB(folio->mapping);
+	if (test_opt(sbi, COMPRESS_CACHE)) {
+		struct inode *inode = folio->mapping->host;
 
-		if (inode->i_ino == F2FS_COMPRESS_INO(F2FS_I_SB(inode)))
-			clear_page_private_data(page);
+		if (inode->i_ino == F2FS_COMPRESS_INO(sbi))
+			clear_page_private_data(&folio->page);
 	}
 
-	clear_page_private_gcing(page);
+	clear_page_private_gcing(&folio->page);
 
-	detach_page_private(page);
-	set_page_private(page, 0);
-	return 1;
+	folio_detach_private(folio);
+	return true;
 }
 
 static bool f2fs_dirty_data_folio(struct address_space *mapping,
@@ -3944,7 +3946,7 @@ const struct address_space_operations f2fs_dblock_aops = {
 	.write_end	= f2fs_write_end,
 	.dirty_folio	= f2fs_dirty_data_folio,
 	.invalidate_folio = f2fs_invalidate_folio,
-	.releasepage	= f2fs_release_page,
+	.release_folio	= f2fs_release_folio,
 	.direct_IO	= noop_direct_IO,
 	.bmap		= f2fs_bmap,
 	.swap_activate  = f2fs_swap_activate,
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 18df53ef3d7e..73ebac078884 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -3768,7 +3768,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
 				int compr_blocks, bool allow_balance);
 void f2fs_write_failed(struct inode *inode, loff_t to);
 void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length);
-int f2fs_release_page(struct page *page, gfp_t wait);
+bool f2fs_release_folio(struct folio *folio, gfp_t wait);
 #ifdef CONFIG_MIGRATION
 int f2fs_migrate_page(struct address_space *mapping, struct page *newpage,
 			struct page *page, enum migrate_mode mode);
diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index c45d341dcf6e..8ccff18560ff 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -2165,7 +2165,7 @@ const struct address_space_operations f2fs_node_aops = {
 	.writepages	= f2fs_write_node_pages,
 	.dirty_folio	= f2fs_dirty_node_folio,
 	.invalidate_folio = f2fs_invalidate_folio,
-	.releasepage	= f2fs_release_page,
+	.release_folio	= f2fs_release_folio,
 #ifdef CONFIG_MIGRATION
 	.migratepage	= f2fs_migrate_page,
 #endif
-- 
2.34.1



* [PATCH 11/26] gfs2: Convert to release_folio
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:55 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use a folio throughout gfs2_release_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/gfs2/aops.c    | 42 ++++++++++++++++++++++--------------------
 fs/gfs2/inode.h   |  2 +-
 fs/gfs2/meta_io.c |  4 ++--
 3 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 3d6c5c5eb4f1..95a674d70c04 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -691,38 +691,40 @@ static void gfs2_invalidate_folio(struct folio *folio, size_t offset,
 }
 
 /**
- * gfs2_releasepage - free the metadata associated with a page
- * @page: the page that's being released
+ * gfs2_release_folio - free the metadata associated with a folio
+ * @folio: the folio that's being released
  * @gfp_mask: passed from Linux VFS, ignored by us
  *
- * Calls try_to_free_buffers() to free the buffers and put the page if the
+ * Calls try_to_free_buffers() to free the buffers and put the folio if the
  * buffers can be released.
  *
- * Returns: 1 if the page was put or else 0
+ * Returns: true if the folio was put or else false
  */
 
-int gfs2_releasepage(struct page *page, gfp_t gfp_mask)
+bool gfs2_release_folio(struct folio *folio, gfp_t gfp_mask)
 {
-	struct address_space *mapping = page->mapping;
+	struct address_space *mapping = folio->mapping;
 	struct gfs2_sbd *sdp = gfs2_mapping2sbd(mapping);
 	struct buffer_head *bh, *head;
 	struct gfs2_bufdata *bd;
 
-	if (!page_has_buffers(page))
-		return 0;
+	head = folio_buffers(folio);
+	if (!head)
+		return false;
 
 	/*
-	 * From xfs_vm_releasepage: mm accommodates an old ext3 case where
-	 * clean pages might not have had the dirty bit cleared.  Thus, it can
-	 * send actual dirty pages to ->releasepage() via shrink_active_list().
+	 * mm accommodates an old ext3 case where clean folios might
+	 * not have had the dirty bit cleared.	Thus, it can send actual
+	 * dirty folios to ->release_folio() via shrink_active_list().
 	 *
-	 * As a workaround, we skip pages that contain dirty buffers below.
-	 * Once ->releasepage isn't called on dirty pages anymore, we can warn
-	 * on dirty buffers like we used to here again.
+	 * As a workaround, we skip folios that contain dirty buffers
+	 * below.  Once ->release_folio isn't called on dirty folios
+	 * anymore, we can warn on dirty buffers like we used to here
+	 * again.
 	 */
 
 	gfs2_log_lock(sdp);
-	head = bh = page_buffers(page);
+	bh = head;
 	do {
 		if (atomic_read(&bh->b_count))
 			goto cannot_release;
@@ -732,9 +734,9 @@ int gfs2_releasepage(struct page *page, gfp_t gfp_mask)
 		if (buffer_dirty(bh) || WARN_ON(buffer_pinned(bh)))
 			goto cannot_release;
 		bh = bh->b_this_page;
-	} while(bh != head);
+	} while (bh != head);
 
-	head = bh = page_buffers(page);
+	bh = head;
 	do {
 		bd = bh->b_private;
 		if (bd) {
@@ -755,11 +757,11 @@ int gfs2_releasepage(struct page *page, gfp_t gfp_mask)
 	} while (bh != head);
 	gfs2_log_unlock(sdp);
 
-	return try_to_free_buffers(page);
+	return try_to_free_buffers(&folio->page);
 
 cannot_release:
 	gfs2_log_unlock(sdp);
-	return 0;
+	return false;
 }
 
 static const struct address_space_operations gfs2_aops = {
@@ -785,7 +787,7 @@ static const struct address_space_operations gfs2_jdata_aops = {
 	.dirty_folio = jdata_dirty_folio,
 	.bmap = gfs2_bmap,
 	.invalidate_folio = gfs2_invalidate_folio,
-	.releasepage = gfs2_releasepage,
+	.release_folio = gfs2_release_folio,
 	.is_partially_uptodate = block_is_partially_uptodate,
 	.error_remove_page = generic_error_remove_page,
 };
diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h
index 7b2c1f390db7..0264d514dda7 100644
--- a/fs/gfs2/inode.h
+++ b/fs/gfs2/inode.h
@@ -12,7 +12,7 @@
 #include <linux/mm.h>
 #include "util.h"
 
-extern int gfs2_releasepage(struct page *page, gfp_t gfp_mask);
+bool gfs2_release_folio(struct folio *folio, gfp_t gfp_mask);
 extern int gfs2_internal_read(struct gfs2_inode *ip,
 			      char *buf, loff_t *pos, unsigned size);
 extern void gfs2_set_aops(struct inode *inode);
diff --git a/fs/gfs2/meta_io.c b/fs/gfs2/meta_io.c
index d8bd1d48bd78..868dcc71b581 100644
--- a/fs/gfs2/meta_io.c
+++ b/fs/gfs2/meta_io.c
@@ -92,14 +92,14 @@ const struct address_space_operations gfs2_meta_aops = {
 	.dirty_folio	= block_dirty_folio,
 	.invalidate_folio = block_invalidate_folio,
 	.writepage = gfs2_aspace_writepage,
-	.releasepage = gfs2_releasepage,
+	.release_folio = gfs2_release_folio,
 };
 
 const struct address_space_operations gfs2_rgrp_aops = {
 	.dirty_folio	= block_dirty_folio,
 	.invalidate_folio = block_invalidate_folio,
 	.writepage = gfs2_aspace_writepage,
-	.releasepage = gfs2_releasepage,
+	.release_folio = gfs2_release_folio,
 };
 
 /**
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 12/26] hfs: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (10 preceding siblings ...)
  2022-05-02  5:55 ` [PATCH 11/26] gfs2: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 13/26] hfsplus: " Matthew Wilcox (Oracle)
                   ` (13 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use a folio throughout hfs_release_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/hfs/inode.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index ba3ff9cd7cfc..86fd50e5fccb 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -69,14 +69,15 @@ static sector_t hfs_bmap(struct address_space *mapping, sector_t block)
 	return generic_block_bmap(mapping, block, hfs_get_block);
 }
 
-static int hfs_releasepage(struct page *page, gfp_t mask)
+static bool hfs_release_folio(struct folio *folio, gfp_t mask)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct super_block *sb = inode->i_sb;
 	struct hfs_btree *tree;
 	struct hfs_bnode *node;
 	u32 nidx;
-	int i, res = 1;
+	int i;
+	bool res = true;
 
 	switch (inode->i_ino) {
 	case HFS_EXT_CNID:
@@ -87,27 +88,27 @@ static int hfs_releasepage(struct page *page, gfp_t mask)
 		break;
 	default:
 		BUG();
-		return 0;
+		return false;
 	}
 
 	if (!tree)
-		return 0;
+		return false;
 
 	if (tree->node_size >= PAGE_SIZE) {
-		nidx = page->index >> (tree->node_size_shift - PAGE_SHIFT);
+		nidx = folio->index >> (tree->node_size_shift - PAGE_SHIFT);
 		spin_lock(&tree->hash_lock);
 		node = hfs_bnode_findhash(tree, nidx);
 		if (!node)
 			;
 		else if (atomic_read(&node->refcnt))
-			res = 0;
+			res = false;
 		if (res && node) {
 			hfs_bnode_unhash(node);
 			hfs_bnode_free(node);
 		}
 		spin_unlock(&tree->hash_lock);
 	} else {
-		nidx = page->index << (PAGE_SHIFT - tree->node_size_shift);
+		nidx = folio->index << (PAGE_SHIFT - tree->node_size_shift);
 		i = 1 << (PAGE_SHIFT - tree->node_size_shift);
 		spin_lock(&tree->hash_lock);
 		do {
@@ -115,7 +116,7 @@ static int hfs_releasepage(struct page *page, gfp_t mask)
 			if (!node)
 				continue;
 			if (atomic_read(&node->refcnt)) {
-				res = 0;
+				res = false;
 				break;
 			}
 			hfs_bnode_unhash(node);
@@ -123,7 +124,7 @@ static int hfs_releasepage(struct page *page, gfp_t mask)
 		} while (--i && nidx < tree->node_count);
 		spin_unlock(&tree->hash_lock);
 	}
-	return res ? try_to_free_buffers(page) : 0;
+	return res ? try_to_free_buffers(&folio->page) : false;
 }
 
 static ssize_t hfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
@@ -165,7 +166,7 @@ const struct address_space_operations hfs_btree_aops = {
 	.write_begin	= hfs_write_begin,
 	.write_end	= generic_write_end,
 	.bmap		= hfs_bmap,
-	.releasepage	= hfs_releasepage,
+	.release_folio	= hfs_release_folio,
 };
 
 const struct address_space_operations hfs_aops = {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 13/26] hfsplus: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (11 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 12/26] hfs: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 14/26] jfs: " Matthew Wilcox (Oracle)
                   ` (12 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use a folio throughout hfsplus_release_folio().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/hfsplus/inode.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
index 982b34eefec7..f723e0e91d51 100644
--- a/fs/hfsplus/inode.c
+++ b/fs/hfsplus/inode.c
@@ -63,14 +63,15 @@ static sector_t hfsplus_bmap(struct address_space *mapping, sector_t block)
 	return generic_block_bmap(mapping, block, hfsplus_get_block);
 }
 
-static int hfsplus_releasepage(struct page *page, gfp_t mask)
+static bool hfsplus_release_folio(struct folio *folio, gfp_t mask)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct super_block *sb = inode->i_sb;
 	struct hfs_btree *tree;
 	struct hfs_bnode *node;
 	u32 nidx;
-	int i, res = 1;
+	int i;
+	bool res = true;
 
 	switch (inode->i_ino) {
 	case HFSPLUS_EXT_CNID:
@@ -84,26 +85,26 @@ static int hfsplus_releasepage(struct page *page, gfp_t mask)
 		break;
 	default:
 		BUG();
-		return 0;
+		return false;
 	}
 	if (!tree)
-		return 0;
+		return false;
 	if (tree->node_size >= PAGE_SIZE) {
-		nidx = page->index >>
+		nidx = folio->index >>
 			(tree->node_size_shift - PAGE_SHIFT);
 		spin_lock(&tree->hash_lock);
 		node = hfs_bnode_findhash(tree, nidx);
 		if (!node)
 			;
 		else if (atomic_read(&node->refcnt))
-			res = 0;
+			res = false;
 		if (res && node) {
 			hfs_bnode_unhash(node);
 			hfs_bnode_free(node);
 		}
 		spin_unlock(&tree->hash_lock);
 	} else {
-		nidx = page->index <<
+		nidx = folio->index <<
 			(PAGE_SHIFT - tree->node_size_shift);
 		i = 1 << (PAGE_SHIFT - tree->node_size_shift);
 		spin_lock(&tree->hash_lock);
@@ -112,7 +113,7 @@ static int hfsplus_releasepage(struct page *page, gfp_t mask)
 			if (!node)
 				continue;
 			if (atomic_read(&node->refcnt)) {
-				res = 0;
+				res = false;
 				break;
 			}
 			hfs_bnode_unhash(node);
@@ -120,7 +121,7 @@ static int hfsplus_releasepage(struct page *page, gfp_t mask)
 		} while (--i && nidx < tree->node_count);
 		spin_unlock(&tree->hash_lock);
 	}
-	return res ? try_to_free_buffers(page) : 0;
+	return res ? try_to_free_buffers(&folio->page) : false;
 }
 
 static ssize_t hfsplus_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
@@ -162,7 +163,7 @@ const struct address_space_operations hfsplus_btree_aops = {
 	.write_begin	= hfsplus_write_begin,
 	.write_end	= generic_write_end,
 	.bmap		= hfsplus_bmap,
-	.releasepage	= hfsplus_releasepage,
+	.release_folio	= hfsplus_release_folio,
 };
 
 const struct address_space_operations hfsplus_aops = {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 14/26] jfs: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (12 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 13/26] hfsplus: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 15/26] nfs: " Matthew Wilcox (Oracle)
                   ` (11 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

The use of folios should be pushed further down into jfs from here.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/jfs/jfs_metapage.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index 2fc78405b3f2..387652ae14c2 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -524,29 +524,29 @@ static int metapage_read_folio(struct file *fp, struct folio *folio)
 	return -EIO;
 }
 
-static int metapage_releasepage(struct page *page, gfp_t gfp_mask)
+static bool metapage_release_folio(struct folio *folio, gfp_t gfp_mask)
 {
 	struct metapage *mp;
-	int ret = 1;
+	bool ret = true;
 	int offset;
 
 	for (offset = 0; offset < PAGE_SIZE; offset += PSIZE) {
-		mp = page_to_mp(page, offset);
+		mp = page_to_mp(&folio->page, offset);
 
 		if (!mp)
 			continue;
 
-		jfs_info("metapage_releasepage: mp = 0x%p", mp);
+		jfs_info("metapage_release_folio: mp = 0x%p", mp);
 		if (mp->count || mp->nohomeok ||
 		    test_bit(META_dirty, &mp->flag)) {
 			jfs_info("count = %ld, nohomeok = %d", mp->count,
 				 mp->nohomeok);
-			ret = 0;
+			ret = false;
 			continue;
 		}
 		if (mp->lsn)
 			remove_from_logsync(mp);
-		remove_metapage(page, mp);
+		remove_metapage(&folio->page, mp);
 		INCREMENT(mpStat.pagefree);
 		free_metapage(mp);
 	}
@@ -560,13 +560,13 @@ static void metapage_invalidate_folio(struct folio *folio, size_t offset,
 
 	BUG_ON(folio_test_writeback(folio));
 
-	metapage_releasepage(&folio->page, 0);
+	metapage_release_folio(folio, 0);
 }
 
 const struct address_space_operations jfs_metapage_aops = {
 	.read_folio	= metapage_read_folio,
 	.writepage	= metapage_writepage,
-	.releasepage	= metapage_releasepage,
+	.release_folio	= metapage_release_folio,
 	.invalidate_folio = metapage_invalidate_folio,
 	.dirty_folio	= filemap_dirty_folio,
 };
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 15/26] nfs: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (13 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 14/26] jfs: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 16/26] nilfs2: Remove comment about releasepage Matthew Wilcox (Oracle)
                   ` (10 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use folios throughout the release_folio paths.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/nfs/file.c    | 22 +++++++++++-----------
 fs/nfs/fscache.h | 14 +++++++-------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 4f6d1f90b87f..d764b3ce7905 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -415,19 +415,19 @@ static void nfs_invalidate_folio(struct folio *folio, size_t offset,
 }
 
 /*
- * Attempt to release the private state associated with a page
- * - Called if either PG_private or PG_fscache is set on the page
- * - Caller holds page lock
- * - Return true (may release page) or false (may not)
+ * Attempt to release the private state associated with a folio
+ * - Called if either private or fscache flags are set on the folio
+ * - Caller holds folio lock
+ * - Return true (may release folio) or false (may not)
  */
-static int nfs_release_page(struct page *page, gfp_t gfp)
+static bool nfs_release_folio(struct folio *folio, gfp_t gfp)
 {
-	dfprintk(PAGECACHE, "NFS: release_page(%p)\n", page);
+	dfprintk(PAGECACHE, "NFS: release_folio(%p)\n", folio);
 
-	/* If PagePrivate() is set, then the page is not freeable */
-	if (PagePrivate(page))
-		return 0;
-	return nfs_fscache_release_page(page, gfp);
+	/* If the private flag is set, then the folio is not freeable */
+	if (folio_test_private(folio))
+		return false;
+	return nfs_fscache_release_folio(folio, gfp);
 }
 
 static void nfs_check_dirty_writeback(struct folio *folio,
@@ -522,7 +522,7 @@ const struct address_space_operations nfs_file_aops = {
 	.write_begin = nfs_write_begin,
 	.write_end = nfs_write_end,
 	.invalidate_folio = nfs_invalidate_folio,
-	.releasepage = nfs_release_page,
+	.release_folio = nfs_release_folio,
 	.direct_IO = nfs_direct_IO,
 #ifdef CONFIG_MIGRATION
 	.migratepage = nfs_migrate_page,
diff --git a/fs/nfs/fscache.h b/fs/nfs/fscache.h
index 4e980cc04779..2a37af880978 100644
--- a/fs/nfs/fscache.h
+++ b/fs/nfs/fscache.h
@@ -48,14 +48,14 @@ extern void nfs_fscache_release_file(struct inode *, struct file *);
 extern int __nfs_fscache_read_page(struct inode *, struct page *);
 extern void __nfs_fscache_write_page(struct inode *, struct page *);
 
-static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp)
+static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp)
 {
-	if (PageFsCache(page)) {
+	if (folio_test_fscache(folio)) {
 		if (current_is_kswapd() || !(gfp & __GFP_FS))
 			return false;
-		wait_on_page_fscache(page);
-		fscache_note_page_release(nfs_i_fscache(page->mapping->host));
-		nfs_inc_fscache_stats(page->mapping->host,
+		folio_wait_fscache(folio);
+		fscache_note_page_release(nfs_i_fscache(folio->mapping->host));
+		nfs_inc_fscache_stats(folio->mapping->host,
 				      NFSIOS_FSCACHE_PAGES_UNCACHED);
 	}
 	return true;
@@ -129,9 +129,9 @@ static inline void nfs_fscache_open_file(struct inode *inode,
 					 struct file *filp) {}
 static inline void nfs_fscache_release_file(struct inode *inode, struct file *file) {}
 
-static inline int nfs_fscache_release_page(struct page *page, gfp_t gfp)
+static inline bool nfs_fscache_release_folio(struct folio *folio, gfp_t gfp)
 {
-	return 1; /* True: may release page */
+	return true; /* may release folio */
 }
 static inline int nfs_fscache_read_page(struct inode *inode, struct page *page)
 {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 16/26] nilfs2: Remove comment about releasepage
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (14 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 15/26] nfs: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 17/26] ocfs2: Convert to release_folio Matthew Wilcox (Oracle)
                   ` (9 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

If we need a release_folio, we can add it back.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/nilfs2/inode.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
index 26b8065401b0..538ca5473b0d 100644
--- a/fs/nilfs2/inode.c
+++ b/fs/nilfs2/inode.c
@@ -304,7 +304,6 @@ const struct address_space_operations nilfs_aops = {
 	.readahead		= nilfs_readahead,
 	.write_begin		= nilfs_write_begin,
 	.write_end		= nilfs_write_end,
-	/* .releasepage		= nilfs_releasepage, */
 	.invalidate_folio	= block_invalidate_folio,
 	.direct_IO		= nilfs_direct_IO,
 	.is_partially_uptodate  = block_is_partially_uptodate,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 17/26] ocfs2: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (15 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 16/26] nilfs2: Remove comment about releasepage Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 18/26] orangefs: " Matthew Wilcox (Oracle)
                   ` (8 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use folios throughout the release_folio path.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ocfs2/aops.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 6b1679db9636..7d7b86ca078f 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -498,11 +498,11 @@ static sector_t ocfs2_bmap(struct address_space *mapping, sector_t block)
 	return status;
 }
 
-static int ocfs2_releasepage(struct page *page, gfp_t wait)
+static bool ocfs2_release_folio(struct folio *folio, gfp_t wait)
 {
-	if (!page_has_buffers(page))
-		return 0;
-	return try_to_free_buffers(page);
+	if (!folio_buffers(folio))
+		return false;
+	return try_to_free_buffers(&folio->page);
 }
 
 static void ocfs2_figure_cluster_boundaries(struct ocfs2_super *osb,
@@ -2463,7 +2463,7 @@ const struct address_space_operations ocfs2_aops = {
 	.bmap			= ocfs2_bmap,
 	.direct_IO		= ocfs2_direct_IO,
 	.invalidate_folio	= block_invalidate_folio,
-	.releasepage		= ocfs2_releasepage,
+	.release_folio		= ocfs2_release_folio,
 	.migratepage		= buffer_migrate_page,
 	.is_partially_uptodate	= block_is_partially_uptodate,
 	.error_remove_page	= generic_error_remove_page,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 18/26] orangefs: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (16 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 17/26] ocfs2: Convert to release_folio Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 19/26] reiserfs: " Matthew Wilcox (Oracle)
                   ` (7 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use folios throughout the release_folio path.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/orangefs/inode.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/orangefs/inode.c b/fs/orangefs/inode.c
index 241ac21f527b..3509618e7b37 100644
--- a/fs/orangefs/inode.c
+++ b/fs/orangefs/inode.c
@@ -485,9 +485,9 @@ static void orangefs_invalidate_folio(struct folio *folio,
 	orangefs_launder_folio(folio);
 }
 
-static int orangefs_releasepage(struct page *page, gfp_t foo)
+static bool orangefs_release_folio(struct folio *folio, gfp_t foo)
 {
-	return !PagePrivate(page);
+	return !folio_test_private(folio);
 }
 
 static void orangefs_freepage(struct page *page)
@@ -636,7 +636,7 @@ static const struct address_space_operations orangefs_address_operations = {
 	.write_begin = orangefs_write_begin,
 	.write_end = orangefs_write_end,
 	.invalidate_folio = orangefs_invalidate_folio,
-	.releasepage = orangefs_releasepage,
+	.release_folio = orangefs_release_folio,
 	.freepage = orangefs_freepage,
 	.launder_folio = orangefs_launder_folio,
 	.direct_IO = orangefs_direct_IO,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 19/26] reiserfs: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (17 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 18/26] orangefs: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 20/26] ubifs: " Matthew Wilcox (Oracle)
                   ` (6 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use folios throughout the release_folio path.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/reiserfs/inode.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index 33a9555f77b9..9cf2e1420a74 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -3202,39 +3202,39 @@ static bool reiserfs_dirty_folio(struct address_space *mapping,
 }
 
 /*
- * Returns 1 if the page's buffers were dropped.  The page is locked.
+ * Returns true if the folio's buffers were dropped.  The folio is locked.
  *
  * Takes j_dirty_buffers_lock to protect the b_assoc_buffers list_heads
- * in the buffers at page_buffers(page).
+ * in the buffers at folio_buffers(folio).
  *
  * even in -o notail mode, we can't be sure an old mount without -o notail
  * didn't create files with tails.
  */
-static int reiserfs_releasepage(struct page *page, gfp_t unused_gfp_flags)
+static bool reiserfs_release_folio(struct folio *folio, gfp_t unused_gfp_flags)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct reiserfs_journal *j = SB_JOURNAL(inode->i_sb);
 	struct buffer_head *head;
 	struct buffer_head *bh;
-	int ret = 1;
+	bool ret = true;
 
-	WARN_ON(PageChecked(page));
+	WARN_ON(folio_test_checked(folio));
 	spin_lock(&j->j_dirty_buffers_lock);
-	head = page_buffers(page);
+	head = folio_buffers(folio);
 	bh = head;
 	do {
 		if (bh->b_private) {
 			if (!buffer_dirty(bh) && !buffer_locked(bh)) {
 				reiserfs_free_jh(bh);
 			} else {
-				ret = 0;
+				ret = false;
 				break;
 			}
 		}
 		bh = bh->b_this_page;
 	} while (bh != head);
 	if (ret)
-		ret = try_to_free_buffers(page);
+		ret = try_to_free_buffers(&folio->page);
 	spin_unlock(&j->j_dirty_buffers_lock);
 	return ret;
 }
@@ -3423,7 +3423,7 @@ const struct address_space_operations reiserfs_address_space_operations = {
 	.writepage = reiserfs_writepage,
 	.read_folio = reiserfs_read_folio,
 	.readahead = reiserfs_readahead,
-	.releasepage = reiserfs_releasepage,
+	.release_folio = reiserfs_release_folio,
 	.invalidate_folio = reiserfs_invalidate_folio,
 	.write_begin = reiserfs_write_begin,
 	.write_end = reiserfs_write_end,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 20/26] ubifs: Convert to release_folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (18 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 19/26] reiserfs: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 21/26] fs: Remove last vestiges of releasepage Matthew Wilcox (Oracle)
                   ` (5 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Use folios throughout the release_folio path.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ubifs/file.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index 7cbf2edf8907..04ced154960f 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -1484,22 +1484,22 @@ static int ubifs_migrate_page(struct address_space *mapping,
 }
 #endif
 
-static int ubifs_releasepage(struct page *page, gfp_t unused_gfp_flags)
+static bool ubifs_release_folio(struct folio *folio, gfp_t unused_gfp_flags)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	struct ubifs_info *c = inode->i_sb->s_fs_info;
 
 	/*
 	 * An attempt to release a dirty page without budgeting for it - should
 	 * not happen.
 	 */
-	if (PageWriteback(page))
-		return 0;
-	ubifs_assert(c, PagePrivate(page));
+	if (folio_test_writeback(folio))
+		return false;
+	ubifs_assert(c, folio_test_private(folio));
 	ubifs_assert(c, 0);
-	detach_page_private(page);
-	ClearPageChecked(page);
-	return 1;
+	folio_detach_private(folio);
+	folio_clear_checked(folio);
+	return true;
 }
 
 /*
@@ -1652,7 +1652,7 @@ const struct address_space_operations ubifs_file_address_operations = {
 #ifdef CONFIG_MIGRATION
 	.migratepage	= ubifs_migrate_page,
 #endif
-	.releasepage    = ubifs_releasepage,
+	.release_folio    = ubifs_release_folio,
 };
 
 const struct inode_operations ubifs_file_inode_operations = {
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 21/26] fs: Remove last vestiges of releasepage
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (19 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 20/26] ubifs: " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 22/26] reiserfs: Convert release_buffer_page() to use a folio Matthew Wilcox (Oracle)
                   ` (4 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

All users are now converted to release_folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/fs.h         | 1 -
 include/linux/page-flags.h | 2 +-
 mm/filemap.c               | 2 --
 3 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/include/linux/fs.h b/include/linux/fs.h
index ad768f13f485..1cee64d9724b 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -356,7 +356,6 @@ struct address_space_operations {
 	sector_t (*bmap)(struct address_space *, sector_t);
 	void (*invalidate_folio) (struct folio *, size_t offset, size_t len);
 	bool (*release_folio)(struct folio *, gfp_t);
-	int (*releasepage) (struct page *, gfp_t);
 	void (*freepage)(struct page *);
 	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
 	/*
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 9d8eeaa67d05..af10149a6c31 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -516,7 +516,7 @@ PAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
 /*
  * Private page markings that may be used by the filesystem that owns the page
  * for its own purposes.
- * - PG_private and PG_private_2 cause releasepage() and co to be invoked
+ * - PG_private and PG_private_2 cause release_folio() and co to be invoked
  */
 PAGEFLAG(Private, private, PF_ANY)
 PAGEFLAG(Private2, private_2, PF_ANY) TESTSCFLAG(Private2, private_2, PF_ANY)
diff --git a/mm/filemap.c b/mm/filemap.c
index 40df5704ec39..7d55bb53bff7 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3958,8 +3958,6 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
 
 	if (mapping && mapping->a_ops->release_folio)
 		return mapping->a_ops->release_folio(folio, gfp);
-	if (mapping && mapping->a_ops->releasepage)
-		return mapping->a_ops->releasepage(&folio->page, gfp);
 	return try_to_free_buffers(&folio->page);
 }
 EXPORT_SYMBOL(filemap_release_folio);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 22/26] reiserfs: Convert release_buffer_page() to use a folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (20 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 21/26] fs: Remove last vestiges of releasepage Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 23/26] jbd2: Convert jbd2_journal_try_to_free_buffers to take " Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Saves 671 bytes from an allmodconfig build (!)

Function                                     old     new   delta
release_buffer_page                         1617     946    -671
Total: Before=67656, After=66985, chg -0.99%

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/reiserfs/journal.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index b5b6f6201bed..99ba495b0f28 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -601,14 +601,14 @@ static int journal_list_still_alive(struct super_block *s,
  */
 static void release_buffer_page(struct buffer_head *bh)
 {
-	struct page *page = bh->b_page;
-	if (!page->mapping && trylock_page(page)) {
-		get_page(page);
+	struct folio *folio = page_folio(bh->b_page);
+	if (!folio->mapping && folio_trylock(folio)) {
+		folio_get(folio);
 		put_bh(bh);
-		if (!page->mapping)
-			try_to_free_buffers(page);
-		unlock_page(page);
-		put_page(page);
+		if (!folio->mapping)
+			try_to_free_buffers(&folio->page);
+		folio_unlock(folio);
+		folio_put(folio);
 	} else {
 		put_bh(bh);
 	}
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 23/26] jbd2: Convert jbd2_journal_try_to_free_buffers to take a folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (21 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 22/26] reiserfs: Convert release_buffer_page() to use a folio Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 24/26] jbd2: Convert release_buffer_page() to use " Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Also convert it to return a bool since it's called from release_folio().
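
(An illustrative sketch, not part of the patch: how a journalling
filesystem's ->release_folio uses the new prototype, modelled on the
ext4 hunk below.  The "example_" and EXAMPLE_JOURNAL() names are
placeholders.)

static bool example_release_folio(struct folio *folio, gfp_t gfp)
{
	journal_t *journal = EXAMPLE_JOURNAL(folio->mapping->host);

	/* Dirty journalled data -> cannot release */
	if (folio_test_checked(folio))
		return false;
	if (journal)
		return jbd2_journal_try_to_free_buffers(journal, folio);
	/* try_to_free_buffers() itself only takes a folio from patch 25 on */
	return try_to_free_buffers(&folio->page);
}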

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inode.c       |  2 +-
 fs/jbd2/transaction.c | 12 ++++++------
 include/linux/jbd2.h  |  2 +-
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 52c46ac5bc8a..943937cb5302 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3253,7 +3253,7 @@ static bool ext4_release_folio(struct folio *folio, gfp_t wait)
 	if (folio_test_checked(folio))
 		return false;
 	if (journal)
-		return jbd2_journal_try_to_free_buffers(journal, &folio->page);
+		return jbd2_journal_try_to_free_buffers(journal, folio);
 	else
 		return try_to_free_buffers(&folio->page);
 }
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index fcb9175016a5..ee33d277d51e 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -2143,17 +2143,17 @@ __journal_try_to_free_buffer(journal_t *journal, struct buffer_head *bh)
  * cannot happen because we never reallocate freed data as metadata
  * while the data is part of a transaction.  Yes?
  *
- * Return 0 on failure, 1 on success
+ * Return false on failure, true on success
  */
-int jbd2_journal_try_to_free_buffers(journal_t *journal, struct page *page)
+bool jbd2_journal_try_to_free_buffers(journal_t *journal, struct folio *folio)
 {
 	struct buffer_head *head;
 	struct buffer_head *bh;
-	int ret = 0;
+	bool ret = false;
 
-	J_ASSERT(PageLocked(page));
+	J_ASSERT(folio_test_locked(folio));
 
-	head = page_buffers(page);
+	head = folio_buffers(folio);
 	bh = head;
 	do {
 		struct journal_head *jh;
@@ -2175,7 +2175,7 @@ int jbd2_journal_try_to_free_buffers(journal_t *journal, struct page *page)
 			goto busy;
 	} while ((bh = bh->b_this_page) != head);
 
-	ret = try_to_free_buffers(page);
+	ret = try_to_free_buffers(&folio->page);
 busy:
 	return ret;
 }
diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
index de9536680b2b..e79d6e0b14e8 100644
--- a/include/linux/jbd2.h
+++ b/include/linux/jbd2.h
@@ -1529,7 +1529,7 @@ extern int	 jbd2_journal_dirty_metadata (handle_t *, struct buffer_head *);
 extern int	 jbd2_journal_forget (handle_t *, struct buffer_head *);
 int jbd2_journal_invalidate_folio(journal_t *, struct folio *,
 					size_t offset, size_t length);
-extern int	 jbd2_journal_try_to_free_buffers(journal_t *journal, struct page *page);
+bool jbd2_journal_try_to_free_buffers(journal_t *journal, struct folio *folio);
 extern int	 jbd2_journal_stop(handle_t *);
 extern int	 jbd2_journal_flush(journal_t *journal, unsigned int flags);
 extern void	 jbd2_journal_lock_updates (journal_t *);
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 24/26] jbd2: Convert release_buffer_page() to use a folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (22 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 23/26] jbd2: Convert jbd2_journal_try_to_free_buffers to take " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 25/26] fs: Change try_to_free_buffers() to take " Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 26/26] fs: Convert drop_buffers() to use " Matthew Wilcox (Oracle)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Saves a few calls to compound_head().
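
(A rough sketch, not part of the patch, of where the savings come from:
the page wrappers each hide a head-page lookup, roughly as below, so
converting to a folio once up front avoids repeating it.)

static inline bool trylock_page(struct page *page)
{
	return folio_trylock(page_folio(page));
}

static inline void get_page(struct page *page)
{
	folio_get(page_folio(page));
}

/*
 * After the conversion, release_buffer_page() resolves the folio once
 * and calls folio_trylock()/folio_get()/folio_put() directly, dropping
 * the repeated page_folio()/compound_head() lookups.
 */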

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/jbd2/commit.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
index ac7f067b7bdd..2f37108da0ec 100644
--- a/fs/jbd2/commit.c
+++ b/fs/jbd2/commit.c
@@ -62,6 +62,7 @@ static void journal_end_buffer_io_sync(struct buffer_head *bh, int uptodate)
  */
 static void release_buffer_page(struct buffer_head *bh)
 {
+	struct folio *folio;
 	struct page *page;
 
 	if (buffer_dirty(bh))
@@ -71,18 +72,19 @@ static void release_buffer_page(struct buffer_head *bh)
 	page = bh->b_page;
 	if (!page)
 		goto nope;
-	if (page->mapping)
+	folio = page_folio(page);
+	if (folio->mapping)
 		goto nope;
 
 	/* OK, it's a truncated page */
-	if (!trylock_page(page))
+	if (!folio_trylock(folio))
 		goto nope;
 
-	get_page(page);
+	folio_get(folio);
 	__brelse(bh);
-	try_to_free_buffers(page);
-	unlock_page(page);
-	put_page(page);
+	try_to_free_buffers(&folio->page);
+	folio_unlock(folio);
+	folio_put(folio);
 	return;
 
 nope:
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 25/26] fs: Change try_to_free_buffers() to take a folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (23 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 24/26] jbd2: Convert release_buffer_page() to use " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  2022-05-02  5:56 ` [PATCH 26/26] fs: Convert drop_buffers() to use " Matthew Wilcox (Oracle)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

All but two of the callers already have a folio; pass a folio into
try_to_free_buffers().  This removes the last user of cancel_dirty_page()
so remove that wrapper function too.
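
(An illustrative sketch, not part of the patch: a buffer-head based
->release_folio now passes its folio straight through, while a caller
that still only has a struct page converts at the boundary, as the
mpage and grow_dev_page hunks below do.  The "example_" names are
placeholders.)

static bool example_release_folio(struct folio *folio, gfp_t gfp)
{
	if (!folio_buffers(folio))
		return false;
	return try_to_free_buffers(folio);
}

static void example_drop_page_buffers(struct page *page)
{
	/* page-only callers convert with page_folio() */
	if (page_has_buffers(page))
		try_to_free_buffers(page_folio(page));
}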

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/buffer.c                 | 42 ++++++++++++++++++-------------------
 fs/ext4/inode.c             |  2 +-
 fs/gfs2/aops.c              |  2 +-
 fs/hfs/inode.c              |  2 +-
 fs/hfsplus/inode.c          |  2 +-
 fs/jbd2/commit.c            |  2 +-
 fs/jbd2/transaction.c       |  4 ++--
 fs/mpage.c                  |  2 +-
 fs/ocfs2/aops.c             |  2 +-
 fs/reiserfs/inode.c         |  2 +-
 fs/reiserfs/journal.c       |  2 +-
 include/linux/buffer_head.h |  4 ++--
 include/linux/pagemap.h     |  4 ----
 mm/filemap.c                |  2 +-
 mm/migrate.c                |  2 +-
 mm/vmscan.c                 |  2 +-
 16 files changed, 37 insertions(+), 41 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 786ef5b98c80..701af0035802 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -955,7 +955,7 @@ grow_dev_page(struct block_device *bdev, sector_t block,
 						size);
 			goto done;
 		}
-		if (!try_to_free_buffers(page))
+		if (!try_to_free_buffers(page_folio(page)))
 			goto failed;
 	}
 
@@ -3155,20 +3155,20 @@ int sync_dirty_buffer(struct buffer_head *bh)
 EXPORT_SYMBOL(sync_dirty_buffer);
 
 /*
- * try_to_free_buffers() checks if all the buffers on this particular page
+ * try_to_free_buffers() checks if all the buffers on this particular folio
  * are unused, and releases them if so.
  *
  * Exclusion against try_to_free_buffers may be obtained by either
- * locking the page or by holding its mapping's private_lock.
+ * locking the folio or by holding its mapping's private_lock.
  *
- * If the page is dirty but all the buffers are clean then we need to
- * be sure to mark the page clean as well.  This is because the page
+ * If the folio is dirty but all the buffers are clean then we need to
+ * be sure to mark the folio clean as well.  This is because the folio
  * may be against a block device, and a later reattachment of buffers
- * to a dirty page will set *all* buffers dirty.  Which would corrupt
+ * to a dirty folio will set *all* buffers dirty.  Which would corrupt
  * filesystem data on the same device.
  *
- * The same applies to regular filesystem pages: if all the buffers are
- * clean then we set the page clean and proceed.  To do that, we require
+ * The same applies to regular filesystem folios: if all the buffers are
+ * clean then we set the folio clean and proceed.  To do that, we require
  * total exclusion from block_dirty_folio().  That is obtained with
  * private_lock.
  *
@@ -3207,40 +3207,40 @@ drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
 	return 0;
 }
 
-int try_to_free_buffers(struct page *page)
+bool try_to_free_buffers(struct folio *folio)
 {
-	struct address_space * const mapping = page->mapping;
+	struct address_space * const mapping = folio->mapping;
 	struct buffer_head *buffers_to_free = NULL;
-	int ret = 0;
+	bool ret = 0;
 
-	BUG_ON(!PageLocked(page));
-	if (PageWriteback(page))
-		return 0;
+	BUG_ON(!folio_test_locked(folio));
+	if (folio_test_writeback(folio))
+		return false;
 
 	if (mapping == NULL) {		/* can this still happen? */
-		ret = drop_buffers(page, &buffers_to_free);
+		ret = drop_buffers(&folio->page, &buffers_to_free);
 		goto out;
 	}
 
 	spin_lock(&mapping->private_lock);
-	ret = drop_buffers(page, &buffers_to_free);
+	ret = drop_buffers(&folio->page, &buffers_to_free);
 
 	/*
 	 * If the filesystem writes its buffers by hand (eg ext3)
-	 * then we can have clean buffers against a dirty page.  We
-	 * clean the page here; otherwise the VM will never notice
+	 * then we can have clean buffers against a dirty folio.  We
+	 * clean the folio here; otherwise the VM will never notice
 	 * that the filesystem did any IO at all.
 	 *
 	 * Also, during truncate, discard_buffer will have marked all
-	 * the page's buffers clean.  We discover that here and clean
-	 * the page also.
+	 * the folio's buffers clean.  We discover that here and clean
+	 * the folio also.
 	 *
 	 * private_lock must be held over this entire operation in order
 	 * to synchronise against block_dirty_folio and prevent the
 	 * dirty bit from being lost.
 	 */
 	if (ret)
-		cancel_dirty_page(page);
+		folio_cancel_dirty(folio);
 	spin_unlock(&mapping->private_lock);
 out:
 	if (buffers_to_free) {
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 943937cb5302..987ea77e672d 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3255,7 +3255,7 @@ static bool ext4_release_folio(struct folio *folio, gfp_t wait)
 	if (journal)
 		return jbd2_journal_try_to_free_buffers(journal, folio);
 	else
-		return try_to_free_buffers(&folio->page);
+		return try_to_free_buffers(folio);
 }
 
 static bool ext4_inode_datasync_dirty(struct inode *inode)
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 95a674d70c04..106e90a36583 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -757,7 +757,7 @@ bool gfs2_release_folio(struct folio *folio, gfp_t gfp_mask)
 	} while (bh != head);
 	gfs2_log_unlock(sdp);
 
-	return try_to_free_buffers(&folio->page);
+	return try_to_free_buffers(folio);
 
 cannot_release:
 	gfs2_log_unlock(sdp);
diff --git a/fs/hfs/inode.c b/fs/hfs/inode.c
index 86fd50e5fccb..c4526f16355d 100644
--- a/fs/hfs/inode.c
+++ b/fs/hfs/inode.c
@@ -124,7 +124,7 @@ static bool hfs_release_folio(struct folio *folio, gfp_t mask)
 		} while (--i && nidx < tree->node_count);
 		spin_unlock(&tree->hash_lock);
 	}
-	return res ? try_to_free_buffers(&folio->page) : false;
+	return res ? try_to_free_buffers(folio) : false;
 }
 
 static ssize_t hfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
index f723e0e91d51..aeab83ed1c9c 100644
--- a/fs/hfsplus/inode.c
+++ b/fs/hfsplus/inode.c
@@ -121,7 +121,7 @@ static bool hfsplus_release_folio(struct folio *folio, gfp_t mask)
 		} while (--i && nidx < tree->node_count);
 		spin_unlock(&tree->hash_lock);
 	}
-	return res ? try_to_free_buffers(&folio->page) : false;
+	return res ? try_to_free_buffers(folio) : false;
 }
 
 static ssize_t hfsplus_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
index 2f37108da0ec..eb315e81f1a6 100644
--- a/fs/jbd2/commit.c
+++ b/fs/jbd2/commit.c
@@ -82,7 +82,7 @@ static void release_buffer_page(struct buffer_head *bh)
 
 	folio_get(folio);
 	__brelse(bh);
-	try_to_free_buffers(&folio->page);
+	try_to_free_buffers(folio);
 	folio_unlock(folio);
 	folio_put(folio);
 	return;
diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
index ee33d277d51e..e49bb0938376 100644
--- a/fs/jbd2/transaction.c
+++ b/fs/jbd2/transaction.c
@@ -2175,7 +2175,7 @@ bool jbd2_journal_try_to_free_buffers(journal_t *journal, struct folio *folio)
 			goto busy;
 	} while ((bh = bh->b_this_page) != head);
 
-	ret = try_to_free_buffers(&folio->page);
+	ret = try_to_free_buffers(folio);
 busy:
 	return ret;
 }
@@ -2482,7 +2482,7 @@ int jbd2_journal_invalidate_folio(journal_t *journal, struct folio *folio,
 	} while (bh != head);
 
 	if (!partial_page) {
-		if (may_free && try_to_free_buffers(&folio->page))
+		if (may_free && try_to_free_buffers(folio))
 			J_ASSERT(!folio_buffers(folio));
 	}
 	return 0;
diff --git a/fs/mpage.c b/fs/mpage.c
index 6df9c3aa5728..0d25f44f5707 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -431,7 +431,7 @@ static void clean_buffers(struct page *page, unsigned first_unmapped)
 	 * disk before we reach the platter.
 	 */
 	if (buffer_heads_over_limit && PageUptodate(page))
-		try_to_free_buffers(page);
+		try_to_free_buffers(page_folio(page));
 }
 
 /*
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 7d7b86ca078f..35d40a67204c 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -502,7 +502,7 @@ static bool ocfs2_release_folio(struct folio *folio, gfp_t wait)
 {
 	if (!folio_buffers(folio))
 		return false;
-	return try_to_free_buffers(&folio->page);
+	return try_to_free_buffers(folio);
 }
 
 static void ocfs2_figure_cluster_boundaries(struct ocfs2_super *osb,
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index 9cf2e1420a74..0cffe054b78e 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -3234,7 +3234,7 @@ static bool reiserfs_release_folio(struct folio *folio, gfp_t unused_gfp_flags)
 		bh = bh->b_this_page;
 	} while (bh != head);
 	if (ret)
-		ret = try_to_free_buffers(&folio->page);
+		ret = try_to_free_buffers(folio);
 	spin_unlock(&j->j_dirty_buffers_lock);
 	return ret;
 }
diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index 99ba495b0f28..d8cc9a366124 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -606,7 +606,7 @@ static void release_buffer_page(struct buffer_head *bh)
 		folio_get(folio);
 		put_bh(bh);
 		if (!folio->mapping)
-			try_to_free_buffers(&folio->page);
+			try_to_free_buffers(folio);
 		folio_unlock(folio);
 		folio_put(folio);
 	} else {
diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 31d82fd9abe8..c9d1463bb20f 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -158,7 +158,7 @@ void mark_buffer_write_io_error(struct buffer_head *bh);
 void touch_buffer(struct buffer_head *bh);
 void set_bh_page(struct buffer_head *bh,
 		struct page *page, unsigned long offset);
-int try_to_free_buffers(struct page *);
+bool try_to_free_buffers(struct folio *);
 struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 		bool retry);
 void create_empty_buffers(struct page *, unsigned long,
@@ -402,7 +402,7 @@ bool block_dirty_folio(struct address_space *mapping, struct folio *folio);
 #else /* CONFIG_BLOCK */
 
 static inline void buffer_init(void) {}
-static inline int try_to_free_buffers(struct page *page) { return 1; }
+static inline bool try_to_free_buffers(struct folio *folio) { return true; }
 static inline int inode_has_buffers(struct inode *inode) { return 0; }
 static inline void invalidate_inode_buffers(struct inode *inode) {}
 static inline int remove_inode_buffers(struct inode *inode) { return 1; }
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 831b28dab01a..82dfb279e0c4 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -1067,10 +1067,6 @@ static inline void folio_cancel_dirty(struct folio *folio)
 	if (folio_test_dirty(folio))
 		__folio_cancel_dirty(folio);
 }
-static inline void cancel_dirty_page(struct page *page)
-{
-	folio_cancel_dirty(page_folio(page));
-}
 bool folio_clear_dirty_for_io(struct folio *folio);
 bool clear_page_dirty_for_io(struct page *page);
 void folio_invalidate(struct folio *folio, size_t offset, size_t length);
diff --git a/mm/filemap.c b/mm/filemap.c
index 7d55bb53bff7..638802f60cec 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3958,6 +3958,6 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
 
 	if (mapping && mapping->a_ops->release_folio)
 		return mapping->a_ops->release_folio(folio, gfp);
-	return try_to_free_buffers(&folio->page);
+	return try_to_free_buffers(folio);
 }
 EXPORT_SYMBOL(filemap_release_folio);
diff --git a/mm/migrate.c b/mm/migrate.c
index 6c31ee1e1c9b..21d82636c291 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1013,7 +1013,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	if (!page->mapping) {
 		VM_BUG_ON_PAGE(PageAnon(page), page);
 		if (page_has_private(page)) {
-			try_to_free_buffers(page);
+			try_to_free_buffers(folio);
 			goto out_unlock_both;
 		}
 	} else if (page_mapped(page)) {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 27851232e00c..f3f7ce2c4068 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1181,7 +1181,7 @@ static pageout_t pageout(struct folio *folio, struct address_space *mapping)
 		 * folio->mapping == NULL while being dirty with clean buffers.
 		 */
 		if (folio_test_private(folio)) {
-			if (try_to_free_buffers(&folio->page)) {
+			if (try_to_free_buffers(folio)) {
 				folio_clear_dirty(folio);
 				pr_info("%s: orphaned folio\n", __func__);
 				return PAGE_CLEAN;
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 26/26] fs: Convert drop_buffers() to use a folio
  2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
                   ` (24 preceding siblings ...)
  2022-05-02  5:56 ` [PATCH 25/26] fs: Change try_to_free_buffers() to take " Matthew Wilcox (Oracle)
@ 2022-05-02  5:56 ` Matthew Wilcox (Oracle)
  25 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-02  5:56 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

All callers now have a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/buffer.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 701af0035802..898c7f301b1b 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3180,10 +3180,10 @@ static inline int buffer_busy(struct buffer_head *bh)
 		(bh->b_state & ((1 << BH_Dirty) | (1 << BH_Lock)));
 }
 
-static int
-drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
+static bool
+drop_buffers(struct folio *folio, struct buffer_head **buffers_to_free)
 {
-	struct buffer_head *head = page_buffers(page);
+	struct buffer_head *head = folio_buffers(folio);
 	struct buffer_head *bh;
 
 	bh = head;
@@ -3201,10 +3201,10 @@ drop_buffers(struct page *page, struct buffer_head **buffers_to_free)
 		bh = next;
 	} while (bh != head);
 	*buffers_to_free = head;
-	detach_page_private(page);
-	return 1;
+	folio_detach_private(folio);
+	return true;
 failed:
-	return 0;
+	return false;
 }
 
 bool try_to_free_buffers(struct folio *folio)
@@ -3218,12 +3218,12 @@ bool try_to_free_buffers(struct folio *folio)
 		return false;
 
 	if (mapping == NULL) {		/* can this still happen? */
-		ret = drop_buffers(&folio->page, &buffers_to_free);
+		ret = drop_buffers(folio, &buffers_to_free);
 		goto out;
 	}
 
 	spin_lock(&mapping->private_lock);
-	ret = drop_buffers(&folio->page, &buffers_to_free);
+	ret = drop_buffers(folio, &buffers_to_free);
 
 	/*
 	 * If the filesystem writes its buffers by hand (eg ext3)
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH 01/26] fs: Add aops->release_folio
  2022-05-02  5:55 ` [PATCH 01/26] fs: Add aops->release_folio Matthew Wilcox (Oracle)
@ 2022-05-02 15:19   ` Jeff Layton
  2022-05-02 18:06     ` Matthew Wilcox
  0 siblings, 1 reply; 30+ messages in thread
From: Jeff Layton @ 2022-05-02 15:19 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), linux-fsdevel

On Mon, 2022-05-02 at 06:55 +0100, Matthew Wilcox (Oracle) wrote:
> This replaces aops->releasepage.  Update the documentation, and call it
> if it exists.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  .../filesystems/caching/netfs-api.rst         |  4 +-
>  Documentation/filesystems/locking.rst         | 14 +++---
>  Documentation/filesystems/vfs.rst             | 45 +++++++++----------
>  include/linux/fs.h                            |  1 +
>  mm/filemap.c                                  |  2 +
>  5 files changed, 34 insertions(+), 32 deletions(-)
> 
> diff --git a/Documentation/filesystems/caching/netfs-api.rst b/Documentation/filesystems/caching/netfs-api.rst
> index 7308d76a29dc..1d18e9def183 100644
> --- a/Documentation/filesystems/caching/netfs-api.rst
> +++ b/Documentation/filesystems/caching/netfs-api.rst
> @@ -433,11 +433,11 @@ has done a write and then the page it wrote from has been released by the VM,
>  after which it *has* to look in the cache.
>  
>  To inform fscache that a page might now be in the cache, the following function
> -should be called from the ``releasepage`` address space op::
> +should be called from the ``release_folio`` address space op::
>  
>  	void fscache_note_page_release(struct fscache_cookie *cookie);
>  
> -if the page has been released (ie. releasepage returned true).
> +if the page has been released (ie. release_folio returned true).
>  
>  Page release and page invalidation should also wait for any mark left on the
>  page to say that a DIO write is underway from that page::
> diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
> index aeba2475a53c..2a295bb72dbc 100644
> --- a/Documentation/filesystems/locking.rst
> +++ b/Documentation/filesystems/locking.rst
> @@ -249,7 +249,7 @@ prototypes::
>  				struct page *page, void *fsdata);
>  	sector_t (*bmap)(struct address_space *, sector_t);
>  	void (*invalidate_folio) (struct folio *, size_t start, size_t len);
> -	int (*releasepage) (struct page *, int);
> +	int (*release_folio)(struct folio *, gfp_t);
>  	void (*freepage)(struct page *);
>  	int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
>  	bool (*isolate_page) (struct page *, isolate_mode_t);
> @@ -270,13 +270,13 @@ ops			PageLocked(page)	 i_rwsem	invalidate_lock
>  writepage:		yes, unlocks (see below)
>  read_folio:		yes, unlocks				shared
>  writepages:
> -dirty_folio		maybe
> +dirty_folio:		maybe
>  readahead:		yes, unlocks				shared
>  write_begin:		locks the page		 exclusive
>  write_end:		yes, unlocks		 exclusive
>  bmap:
>  invalidate_folio:	yes					exclusive
> -releasepage:		yes
> +release_folio:		yes
>  freepage:		yes
>  direct_IO:
>  isolate_page:		yes
> @@ -372,10 +372,10 @@ invalidate_lock before invalidating page cache in truncate / hole punch
>  path (and thus calling into ->invalidate_folio) to block races between page
>  cache invalidation and page cache filling functions (fault, read, ...).
>  
> -->releasepage() is called when the kernel is about to try to drop the
> -buffers from the page in preparation for freeing it.  It returns zero to
> -indicate that the buffers are (or may be) freeable.  If ->releasepage is zero,
> -the kernel assumes that the fs has no private interest in the buffers.
> +->release_folio() is called when the kernel is about to try to drop the
> +buffers from the folio in preparation for freeing it.  It returns false to
> +indicate that the buffers are (or may be) freeable.  If ->release_folio is
> +NULL, the kernel assumes that the fs has no private interest in the buffers.
>  
>  ->freepage() is called when the kernel is done dropping the page
>  from the page cache.
> diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
> index 0919a4ad973a..679887b5c8fc 100644
> --- a/Documentation/filesystems/vfs.rst
> +++ b/Documentation/filesystems/vfs.rst
> @@ -620,9 +620,9 @@ Writeback.
>  The first can be used independently to the others.  The VM can try to
>  either write dirty pages in order to clean them, or release clean pages
>  in order to reuse them.  To do this it can call the ->writepage method
> -on dirty pages, and ->releasepage on clean pages with PagePrivate set.
> -Clean pages without PagePrivate and with no external references will be
> -released without notice being given to the address_space.
> +on dirty pages, and ->release_folio on clean folios with the private
> +flag set.  Clean pages without PagePrivate and with no external references
> +will be released without notice being given to the address_space.
>  
>  To achieve this functionality, pages need to be placed on an LRU with
>  lru_cache_add and mark_page_active needs to be called whenever the page
> @@ -734,7 +734,7 @@ cache in your filesystem.  The following members are defined:
>  				 struct page *page, void *fsdata);
>  		sector_t (*bmap)(struct address_space *, sector_t);
>  		void (*invalidate_folio) (struct folio *, size_t start, size_t len);
> -		int (*releasepage) (struct page *, int);
> +		bool (*release_folio)(struct folio *, gfp_t);
>  		void (*freepage)(struct page *);
>  		ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
>  		/* isolate a page for migration */
> @@ -864,33 +864,32 @@ cache in your filesystem.  The following members are defined:
>  	address space.  This generally corresponds to either a
>  	truncation, punch hole or a complete invalidation of the address
>  	space (in the latter case 'offset' will always be 0 and 'length'
> -	will be folio_size()).  Any private data associated with the page
> +	will be folio_size()).  Any private data associated with the folio
>  	should be updated to reflect this truncation.  If offset is 0
>  	and length is folio_size(), then the private data should be
> -	released, because the page must be able to be completely
> -	discarded.  This may be done by calling the ->releasepage
> +	released, because the folio must be able to be completely
> +	discarded.  This may be done by calling the ->release_folio
>  	function, but in this case the release MUST succeed.
>  
> -``releasepage``
> -	releasepage is called on PagePrivate pages to indicate that the
> -	page should be freed if possible.  ->releasepage should remove
> -	any private data from the page and clear the PagePrivate flag.
> -	If releasepage() fails for some reason, it must indicate failure
> -	with a 0 return value.  releasepage() is used in two distinct
> -	though related cases.  The first is when the VM finds a clean
> -	page with no active users and wants to make it a free page.  If
> -	->releasepage succeeds, the page will be removed from the
> -	address_space and become free.
> +``release_folio``
> +	release_folio is called on folios with private data to tell the
> +	filesystem that the folio is about to be freed.  ->release_folio
> +	should remove any private data from the folio and clear the
> +	private flag.  If release_folio() fails, it should return false.
> +	release_folio() is used in two distinct though related cases.
> +	The first is when the VM wants to free a clean folio with no
> +	active users.  If ->release_folio succeeds, the folio will be
> +	removed from the address_space and be freed.
>  
>  	The second case is when a request has been made to invalidate
> -	some or all pages in an address_space.  This can happen through
> -	the fadvise(POSIX_FADV_DONTNEED) system call or by the
> -	filesystem explicitly requesting it as nfs and 9fs do (when they
> +	some or all folios in an address_space.  This can happen
> +	through the fadvise(POSIX_FADV_DONTNEED) system call or by the
> +	filesystem explicitly requesting it as nfs and 9p do (when they
>  	believe the cache may be out of date with storage) by calling
>  	invalidate_inode_pages2().  If the filesystem makes such a call,
> -	and needs to be certain that all pages are invalidated, then its
> -	releasepage will need to ensure this.  Possibly it can clear the
> -	PageUptodate bit if it cannot free private data yet.
> +	and needs to be certain that all folios are invalidated, then
> +	its release_folio will need to ensure this.  Possibly it can
> +	clear the uptodate flag if it cannot free private data yet.
>  
>  ``freepage``
>  	freepage is called once the page is no longer visible in the
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index f812f5aa07dd..ad768f13f485 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -355,6 +355,7 @@ struct address_space_operations {
>  	/* Unfortunately this kludge is needed for FIBMAP. Don't use it */
>  	sector_t (*bmap)(struct address_space *, sector_t);
>  	void (*invalidate_folio) (struct folio *, size_t offset, size_t len);
> +	bool (*release_folio)(struct folio *, gfp_t);
>  	int (*releasepage) (struct page *, gfp_t);
>  	void (*freepage)(struct page *);
>  	ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 81a0ed08a82c..40df5704ec39 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3956,6 +3956,8 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
>  	if (folio_test_writeback(folio))
>  		return false;
>  
> +	if (mapping && mapping->a_ops->release_folio)
> +		return mapping->a_ops->release_folio(folio, gfp);

Might it be worthwhile to add something like this to the above condition
for now?

      BUG_ON(mapping->a_ops->releasepage);

It might help catch bad conversions...
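
To be concrete, the idea (just a sketch on top of the hunk above, with
braces added around the new branch) is:

	if (mapping && mapping->a_ops->release_folio) {
		/* A converted fs must no longer set ->releasepage. */
		BUG_ON(mapping->a_ops->releasepage);
		return mapping->a_ops->release_folio(folio, gfp);
	}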


>  	if (mapping && mapping->a_ops->releasepage)
>  		return mapping->a_ops->releasepage(&folio->page, gfp);
>  	return try_to_free_buffers(&folio->page);


Looks like a pretty straightforward change overall.
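
As a side note for anyone implementing the new hook: going by the
updated vfs.rst text above, a minimal ->release_folio for a filesystem
whose only per-folio private data is a small kmalloc()ed structure
might look roughly like the sketch below ("myfs" and its private
payload are made up for illustration, not part of this series):

	static bool myfs_release_folio(struct folio *folio, gfp_t gfp)
	{
		/* Dirty or writeback folios cannot be released yet. */
		if (folio_test_dirty(folio) || folio_test_writeback(folio))
			return false;
		/* Drop the private payload; this also clears the private flag. */
		kfree(folio_detach_private(folio));
		return true;
	}

It would then be wired up as .release_folio = myfs_release_folio in the
address_space_operations, the same way the conversions later in this
series do it.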

Reviewed-by: Jeff Layton <jlayton@kernel.org>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 01/26] fs: Add aops->release_folio
  2022-05-02 15:19   ` Jeff Layton
@ 2022-05-02 18:06     ` Matthew Wilcox
  0 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox @ 2022-05-02 18:06 UTC (permalink / raw)
  To: Jeff Layton; +Cc: linux-fsdevel

On Mon, May 02, 2022 at 11:19:19AM -0400, Jeff Layton wrote:
> On Mon, 2022-05-02 at 06:55 +0100, Matthew Wilcox (Oracle) wrote:
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 81a0ed08a82c..40df5704ec39 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -3956,6 +3956,8 @@ bool filemap_release_folio(struct folio *folio, gfp_t gfp)
> >  	if (folio_test_writeback(folio))
> >  		return false;
> >  
> > +	if (mapping && mapping->a_ops->release_folio)
> > +		return mapping->a_ops->release_folio(folio, gfp);
> 
> Might it be worthwhile to add something like this to the above condition
> for now?
> 
>       BUG_ON(mapping->a_ops->releasepage);
> 
> It might help catch bad conversions...

Patch 21 gets rid of ->releasepage ... I don't intend for it to stick
around for a kernel cycle and let people introduce new users ;-)

> >  	if (mapping && mapping->a_ops->releasepage)
> >  		return mapping->a_ops->releasepage(&folio->page, gfp);
> >  	return try_to_free_buffers(&folio->page);
> 
> 
> Looks like a pretty straightforward change overall.
> 
> Reviewed-by: Jeff Layton <jlayton@kernel.org>

Thanks!

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH 02/26] iomap: Convert to release_folio
  2022-05-08 20:32 ` [PATCH 00/26] Convert aops->releasepage to aops->release_folio Matthew Wilcox (Oracle)
@ 2022-05-08 20:32   ` Matthew Wilcox (Oracle)
  0 siblings, 0 replies; 30+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-05-08 20:32 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle)

Change all the filesystems which used iomap_releasepage to use the
new function.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/gfs2/aops.c         |  2 +-
 fs/iomap/buffered-io.c | 22 ++++++++++------------
 fs/iomap/trace.h       |  2 +-
 fs/xfs/xfs_aops.c      |  2 +-
 fs/zonefs/super.c      |  2 +-
 include/linux/iomap.h  |  2 +-
 6 files changed, 15 insertions(+), 17 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 1016631bcbdc..3d6c5c5eb4f1 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -768,7 +768,7 @@ static const struct address_space_operations gfs2_aops = {
 	.read_folio = gfs2_read_folio,
 	.readahead = gfs2_readahead,
 	.dirty_folio = filemap_dirty_folio,
-	.releasepage = iomap_releasepage,
+	.release_folio = iomap_release_folio,
 	.invalidate_folio = iomap_invalidate_folio,
 	.bmap = gfs2_bmap,
 	.direct_IO = noop_direct_IO,
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 2de087ac87b6..8532f0e2e2d6 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -452,25 +452,23 @@ bool iomap_is_partially_uptodate(struct folio *folio, size_t from, size_t count)
 }
 EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
 
-int
-iomap_releasepage(struct page *page, gfp_t gfp_mask)
+bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags)
 {
-	struct folio *folio = page_folio(page);
-
-	trace_iomap_releasepage(folio->mapping->host, folio_pos(folio),
+	trace_iomap_release_folio(folio->mapping->host, folio_pos(folio),
 			folio_size(folio));
 
 	/*
-	 * mm accommodates an old ext3 case where clean pages might not have had
-	 * the dirty bit cleared. Thus, it can send actual dirty pages to
-	 * ->releasepage() via shrink_active_list(); skip those here.
+	 * mm accommodates an old ext3 case where clean folios might
+	 * not have had the dirty bit cleared.  Thus, it can send actual
+	 * dirty folios to ->release_folio() via shrink_active_list();
+	 * skip those here.
 	 */
 	if (folio_test_dirty(folio) || folio_test_writeback(folio))
-		return 0;
+		return false;
 	iomap_page_release(folio);
-	return 1;
+	return true;
 }
-EXPORT_SYMBOL_GPL(iomap_releasepage);
+EXPORT_SYMBOL_GPL(iomap_release_folio);
 
 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len)
 {
@@ -1483,7 +1481,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * Skip the page if it's fully outside i_size, e.g. due to a
 		 * truncate operation that's in progress. We must redirty the
 		 * page so that reclaim stops reclaiming it. Otherwise
-		 * iomap_vm_releasepage() is called on it and gets confused.
+		 * iomap_release_folio() is called on it and gets confused.
 		 *
 		 * Note that the end_index is unsigned long.  If the given
 		 * offset is greater than 16TB on a 32-bit system then if we
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index a6689a563c6e..d48868fc40d7 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -80,7 +80,7 @@ DEFINE_EVENT(iomap_range_class, name,	\
 	TP_PROTO(struct inode *inode, loff_t off, u64 len),\
 	TP_ARGS(inode, off, len))
 DEFINE_RANGE_EVENT(iomap_writepage);
-DEFINE_RANGE_EVENT(iomap_releasepage);
+DEFINE_RANGE_EVENT(iomap_release_folio);
 DEFINE_RANGE_EVENT(iomap_invalidate_folio);
 DEFINE_RANGE_EVENT(iomap_dio_invalidate_fail);
 
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index a9c4bb500d53..2acbfc6925dd 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -568,7 +568,7 @@ const struct address_space_operations xfs_address_space_operations = {
 	.readahead		= xfs_vm_readahead,
 	.writepages		= xfs_vm_writepages,
 	.dirty_folio		= filemap_dirty_folio,
-	.releasepage		= iomap_releasepage,
+	.release_folio		= iomap_release_folio,
 	.invalidate_folio	= iomap_invalidate_folio,
 	.bmap			= xfs_vm_bmap,
 	.direct_IO		= noop_direct_IO,
diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index c3a38f711b24..b1a428f860b3 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -197,7 +197,7 @@ static const struct address_space_operations zonefs_file_aops = {
 	.writepage		= zonefs_writepage,
 	.writepages		= zonefs_writepages,
 	.dirty_folio		= filemap_dirty_folio,
-	.releasepage		= iomap_releasepage,
+	.release_folio		= iomap_release_folio,
 	.invalidate_folio	= iomap_invalidate_folio,
 	.migratepage		= iomap_migrate_page,
 	.is_partially_uptodate	= iomap_is_partially_uptodate,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 5b2aa45ddda3..0d674695b6d3 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -228,7 +228,7 @@ ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 int iomap_read_folio(struct folio *folio, const struct iomap_ops *ops);
 void iomap_readahead(struct readahead_control *, const struct iomap_ops *ops);
 bool iomap_is_partially_uptodate(struct folio *, size_t from, size_t count);
-int iomap_releasepage(struct page *page, gfp_t gfp_mask);
+bool iomap_release_folio(struct folio *folio, gfp_t gfp_flags);
 void iomap_invalidate_folio(struct folio *folio, size_t offset, size_t len);
 #ifdef CONFIG_MIGRATION
 int iomap_migrate_page(struct address_space *mapping, struct page *newpage,
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2022-05-08 20:33 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-02  5:55 [PATCH 00/26] Converting release_page to release_folio Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 01/26] fs: Add aops->release_folio Matthew Wilcox (Oracle)
2022-05-02 15:19   ` Jeff Layton
2022-05-02 18:06     ` Matthew Wilcox
2022-05-02  5:55 ` [PATCH 02/26] iomap: Convert to release_folio Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 03/26] 9p: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 04/26] afs: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 05/26] btrfs: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 06/26] ceph: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 07/26] cifs: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 08/26] erofs: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 09/26] ext4: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 10/26] f2fs: " Matthew Wilcox (Oracle)
2022-05-02  5:55 ` [PATCH 11/26] gfs2: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 12/26] hfs: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 13/26] hfsplus: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 14/26] jfs: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 15/26] nfs: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 16/26] nilfs2: Remove comment about releasepage Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 17/26] ocfs2: Convert to release_folio Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 18/26] orangefs: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 19/26] reiserfs: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 20/26] ubifs: " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 21/26] fs: Remove last vestiges of releasepage Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 22/26] reiserfs: Convert release_buffer_page() to use a folio Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 23/26] jbd2: Convert jbd2_journal_try_to_free_buffers to take " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 24/26] jbd2: Convert release_buffer_page() to use " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 25/26] fs: Change try_to_free_buffers() to take " Matthew Wilcox (Oracle)
2022-05-02  5:56 ` [PATCH 26/26] fs: Convert drop_buffers() to use " Matthew Wilcox (Oracle)
2022-05-08 19:33 [GIT UPDATE] pagecache tree Matthew Wilcox
2022-05-08 20:32 ` [PATCH 00/26] Convert aops->releasepage to aops->release_folio Matthew Wilcox (Oracle)
2022-05-08 20:32   ` [PATCH 02/26] iomap: Convert to release_folio Matthew Wilcox (Oracle)
