* [PATCH v2 00/29] Convert most of ext4 to folios
@ 2023-03-24 18:01 Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 01/29] fs: Add FGP_WRITEBEGIN Matthew Wilcox (Oracle)
                   ` (29 more replies)
  0 siblings, 30 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

On top of next-20230321, this converts most of ext4 to use folios instead
of pages.  It does not enable large folios, although it fixes some places
that will need fixing before they can be enabled for ext4.  It does
not convert mballoc to use folios.  write_begin() and write_end() still
take a page parameter instead of a folio.

It does convert a lot of code away from the page APIs that we're trying
to remove.  It does remove a lot of calls to compound_head().  I'd like
to see it land in 6.4.
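
The conversions all follow the same basic shape; a minimal before/after
sketch (illustrative only, not a hunk from any particular patch):

static void example_before(struct page *page)
{
	if (!PageUptodate(page)) {
		/* every Page* call hides a compound_head() */
		zero_user_segment(page, 0, PAGE_SIZE);
		SetPageUptodate(page);
	}
}

static void example_after(struct folio *folio)
{
	if (!folio_test_uptodate(folio)) {
		/* operate on the folio and its own size */
		folio_zero_segment(folio, 0, folio_size(folio));
		folio_mark_uptodate(folio);
	}
}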

v2:
 Addressed all the feedback I received on v1.  At least, I hope I did.

Matthew Wilcox (Oracle) (29):
  fs: Add FGP_WRITEBEGIN
  fscrypt: Add some folio helper functions
  ext4: Convert ext4_bio_write_page() to use a folio
  ext4: Convert ext4_finish_bio() to use folios
  ext4: Turn mpage_process_page() into mpage_process_folio()
  ext4: Convert mpage_submit_page() to mpage_submit_folio()
  ext4: Convert mpage_page_done() to mpage_folio_done()
  ext4: Convert ext4_bio_write_page() to ext4_bio_write_folio()
  ext4: Convert ext4_readpage_inline() to take a folio
  ext4: Convert ext4_convert_inline_data_to_extent() to use a folio
  ext4: Convert ext4_try_to_write_inline_data() to use a folio
  ext4: Convert ext4_da_convert_inline_data_to_extent() to use a folio
  ext4: Convert ext4_da_write_inline_data_begin() to use a folio
  ext4: Convert ext4_read_inline_page() to ext4_read_inline_folio()
  ext4: Convert ext4_write_inline_data_end() to use a folio
  ext4: Convert ext4_write_begin() to use a folio
  ext4: Convert ext4_write_end() to use a folio
  ext4: Use a folio in ext4_journalled_write_end()
  ext4: Convert ext4_journalled_zero_new_buffers() to use a folio
  ext4: Convert __ext4_block_zero_page_range() to use a folio
  ext4: Convert ext4_page_nomap_can_writeout to
    ext4_folio_nomap_can_writeout
  ext4: Use a folio in ext4_da_write_begin()
  ext4: Convert ext4_mpage_readpages() to work on folios
  ext4: Convert ext4_block_write_begin() to take a folio
  ext4: Use a folio in ext4_page_mkwrite()
  ext4: Use a folio iterator in __read_end_io()
  ext4: Convert mext_page_mkuptodate() to take a folio
  ext4: Convert pagecache_read() to use a folio
  ext4: Use a folio in ext4_read_merkle_tree_page

 block/bio.c                |   1 +
 fs/ext4/ext4.h             |   9 +-
 fs/ext4/inline.c           | 171 ++++++++++----------
 fs/ext4/inode.c            | 312 +++++++++++++++++++------------------
 fs/ext4/move_extent.c      |  33 ++--
 fs/ext4/page-io.c          |  98 ++++++------
 fs/ext4/readpage.c         |  72 ++++-----
 fs/ext4/verity.c           |  30 ++--
 fs/iomap/buffered-io.c     |   2 +-
 fs/netfs/buffered_read.c   |   3 +-
 fs/nfs/file.c              |  12 +-
 include/linux/fscrypt.h    |  21 +++
 include/linux/page-flags.h |   5 -
 include/linux/pagemap.h    |   2 +
 mm/folio-compat.c          |   4 +-
 15 files changed, 387 insertions(+), 388 deletions(-)

-- 
2.39.2



* [PATCH v2 01/29] fs: Add FGP_WRITEBEGIN
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-04-06 14:56   ` [PATCH v2 1/29] " Theodore Ts'o
  2023-03-24 18:01 ` [PATCH v2 02/29] fscrypt: Add some folio helper functions Matthew Wilcox (Oracle)
                   ` (28 subsequent siblings)
  29 siblings, 1 reply; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

This particular combination of flags is used by most filesystems
in their ->write_begin method, although it does find use in a
few other places.  Before folios, it warranted its own function
(grab_cache_page_write_begin()), but I think that just having specialised
flags is enough.  It certainly helps the few places that have been
converted from grab_cache_page_write_begin() to __filemap_get_folio().
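
For illustration, a minimal ->write_begin() sketch using the new flag;
the function name is hypothetical, and the error handling matches the
__filemap_get_folio() callers converted below:

static int example_write_begin(struct file *file,
		struct address_space *mapping, loff_t pos, unsigned len,
		struct page **pagep, void **fsdata)
{
	struct folio *folio;

	folio = __filemap_get_folio(mapping, pos >> PAGE_SHIFT,
			FGP_WRITEBEGIN, mapping_gfp_mask(mapping));
	if (IS_ERR(folio))
		return PTR_ERR(folio);

	/* ->write_begin still hands a page back to the caller */
	*pagep = &folio->page;
	return 0;
}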

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/move_extent.c    |  5 ++---
 fs/iomap/buffered-io.c   |  2 +-
 fs/netfs/buffered_read.c |  3 +--
 fs/nfs/file.c            | 12 ++----------
 include/linux/pagemap.h  |  2 ++
 mm/folio-compat.c        |  4 +---
 6 files changed, 9 insertions(+), 19 deletions(-)

diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index 7bf6d069199c..a84a794fed56 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -126,7 +126,6 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,
 {
 	struct address_space *mapping[2];
 	unsigned int flags;
-	unsigned fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
 
 	BUG_ON(!inode1 || !inode2);
 	if (inode1 < inode2) {
@@ -139,14 +138,14 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,
 	}
 
 	flags = memalloc_nofs_save();
-	folio[0] = __filemap_get_folio(mapping[0], index1, fgp_flags,
+	folio[0] = __filemap_get_folio(mapping[0], index1, FGP_WRITEBEGIN,
 			mapping_gfp_mask(mapping[0]));
 	if (IS_ERR(folio[0])) {
 		memalloc_nofs_restore(flags);
 		return PTR_ERR(folio[0]);
 	}
 
-	folio[1] = __filemap_get_folio(mapping[1], index2, fgp_flags,
+	folio[1] = __filemap_get_folio(mapping[1], index2, FGP_WRITEBEGIN,
 			mapping_gfp_mask(mapping[1]));
 	memalloc_nofs_restore(flags);
 	if (IS_ERR(folio[1])) {
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 96bb56c203f4..063133ec77f4 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -467,7 +467,7 @@ EXPORT_SYMBOL_GPL(iomap_is_partially_uptodate);
  */
 struct folio *iomap_get_folio(struct iomap_iter *iter, loff_t pos)
 {
-	unsigned fgp = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE | FGP_NOFS;
+	unsigned fgp = FGP_WRITEBEGIN | FGP_NOFS;
 
 	if (iter->flags & IOMAP_NOWAIT)
 		fgp |= FGP_NOWAIT;
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 209726a9cfdb..3404707ddbe7 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -341,14 +341,13 @@ int netfs_write_begin(struct netfs_inode *ctx,
 {
 	struct netfs_io_request *rreq;
 	struct folio *folio;
-	unsigned int fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
 	pgoff_t index = pos >> PAGE_SHIFT;
 	int ret;
 
 	DEFINE_READAHEAD(ractl, file, NULL, mapping, index);
 
 retry:
-	folio = __filemap_get_folio(mapping, index, fgp_flags,
+	folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
 				    mapping_gfp_mask(mapping));
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 1d03406e6c03..dd9ef0655716 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -306,15 +306,6 @@ static bool nfs_want_read_modify_write(struct file *file, struct folio *folio,
 	return false;
 }
 
-static struct folio *
-nfs_folio_grab_cache_write_begin(struct address_space *mapping, pgoff_t index)
-{
-	unsigned fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
-
-	return __filemap_get_folio(mapping, index, fgp_flags,
-				   mapping_gfp_mask(mapping));
-}
-
 /*
  * This does the "real" work of the write. We must allocate and lock the
  * page to be sent back to the generic routine, which then copies the
@@ -335,7 +326,8 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping,
 		file, mapping->host->i_ino, len, (long long) pos);
 
 start:
-	folio = nfs_folio_grab_cache_write_begin(mapping, pos >> PAGE_SHIFT);
+	folio = __filemap_get_folio(mapping, pos >> PAGE_SHIFT, FGP_WRITEBEGIN,
+				   mapping_gfp_mask(mapping));
 	if (IS_ERR(folio))
 		return PTR_ERR(folio);
 	*pagep = &folio->page;
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index fdcd595d2294..a56308a9d1a4 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -506,6 +506,8 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 #define FGP_FOR_MMAP		0x00000040
 #define FGP_STABLE		0x00000080
 
+#define FGP_WRITEBEGIN		(FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE)
+
 void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
 struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		int fgp_flags, gfp_t gfp);
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 2511c055a35f..c6f056c20503 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -106,9 +106,7 @@ EXPORT_SYMBOL(pagecache_get_page);
 struct page *grab_cache_page_write_begin(struct address_space *mapping,
 					pgoff_t index)
 {
-	unsigned fgp_flags = FGP_LOCK | FGP_WRITE | FGP_CREAT | FGP_STABLE;
-
-	return pagecache_get_page(mapping, index, fgp_flags,
+	return pagecache_get_page(mapping, index, FGP_WRITEBEGIN,
 			mapping_gfp_mask(mapping));
 }
 EXPORT_SYMBOL(grab_cache_page_write_begin);
-- 
2.39.2



* [PATCH v2 02/29] fscrypt: Add some folio helper functions
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 01/29] fs: Add FGP_WRITEBEGIN Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 03/29] ext4: Convert ext4_bio_write_page() to use a folio Matthew Wilcox (Oracle)
                   ` (27 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel
  Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel, Ritesh Harjani

fscrypt_is_bounce_folio() is the equivalent of fscrypt_is_bounce_page()
and fscrypt_pagecache_folio() is the equivalent of fscrypt_pagecache_page().
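
A minimal sketch of how an I/O completion path might use them, mirroring
the ext4_finish_bio() conversion later in this series (the function name
is illustrative):

static void example_end_io(struct folio *folio)
{
	struct folio *io_folio = folio;

	/* a bounce folio has no mapping; find the pagecache folio it shadows */
	if (fscrypt_is_bounce_folio(io_folio))
		folio = fscrypt_pagecache_folio(io_folio);

	folio_end_writeback(folio);
}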

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 include/linux/fscrypt.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/linux/fscrypt.h b/include/linux/fscrypt.h
index a69f1302051d..c895b12737a1 100644
--- a/include/linux/fscrypt.h
+++ b/include/linux/fscrypt.h
@@ -273,6 +273,16 @@ static inline struct page *fscrypt_pagecache_page(struct page *bounce_page)
 	return (struct page *)page_private(bounce_page);
 }
 
+static inline bool fscrypt_is_bounce_folio(struct folio *folio)
+{
+	return folio->mapping == NULL;
+}
+
+static inline struct folio *fscrypt_pagecache_folio(struct folio *bounce_folio)
+{
+	return bounce_folio->private;
+}
+
 void fscrypt_free_bounce_page(struct page *bounce_page);
 
 /* policy.c */
@@ -446,6 +456,17 @@ static inline struct page *fscrypt_pagecache_page(struct page *bounce_page)
 	return ERR_PTR(-EINVAL);
 }
 
+static inline bool fscrypt_is_bounce_folio(struct folio *folio)
+{
+	return false;
+}
+
+static inline struct folio *fscrypt_pagecache_folio(struct folio *bounce_folio)
+{
+	WARN_ON_ONCE(1);
+	return ERR_PTR(-EINVAL);
+}
+
 static inline void fscrypt_free_bounce_page(struct page *bounce_page)
 {
 }
-- 
2.39.2



* [PATCH v2 03/29] ext4: Convert ext4_bio_write_page() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 01/29] fs: Add FGP_WRITEBEGIN Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 02/29] fscrypt: Add some folio helper functions Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 04/29] ext4: Convert ext4_finish_bio() to use folios Matthew Wilcox (Oracle)
                   ` (26 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel
  Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel, Ritesh Harjani

Remove several calls to compound_head() and the last call to
set_page_writeback_keepwrite(), so remove that wrapper too.

Also export bio_add_folio(), as this is the first call to it from a
module.
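
A minimal sketch of the bio_add_folio() idiom this switches to (the
helper name is illustrative):

static bool example_add_bh(struct bio *bio, struct folio *folio,
			   struct buffer_head *bh)
{
	/* returns false if the bio is full; the caller submits and retries */
	return bio_add_folio(bio, folio, bh->b_size, bh_offset(bh));
}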

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 block/bio.c                |  1 +
 fs/ext4/page-io.c          | 58 ++++++++++++++++++--------------------
 include/linux/page-flags.h |  5 ----
 3 files changed, 28 insertions(+), 36 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index fc98c1c723ca..798cc4cf3bd2 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1159,6 +1159,7 @@ bool bio_add_folio(struct bio *bio, struct folio *folio, size_t len,
 		return false;
 	return bio_add_page(bio, &folio->page, len, off) > 0;
 }
+EXPORT_SYMBOL(bio_add_folio);
 
 void __bio_release_pages(struct bio *bio, bool mark_dirty)
 {
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 8703fd732abb..7850d2cb2e08 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -409,12 +409,10 @@ static void io_submit_init_bio(struct ext4_io_submit *io,
 
 static void io_submit_add_bh(struct ext4_io_submit *io,
 			     struct inode *inode,
-			     struct page *pagecache_page,
-			     struct page *bounce_page,
+			     struct folio *folio,
+			     struct folio *io_folio,
 			     struct buffer_head *bh)
 {
-	int ret;
-
 	if (io->io_bio && (bh->b_blocknr != io->io_next_block ||
 			   !fscrypt_mergeable_bio_bh(io->io_bio, bh))) {
 submit_and_retry:
@@ -422,11 +420,9 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
 	}
 	if (io->io_bio == NULL)
 		io_submit_init_bio(io, bh);
-	ret = bio_add_page(io->io_bio, bounce_page ?: pagecache_page,
-			   bh->b_size, bh_offset(bh));
-	if (ret != bh->b_size)
+	if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, bh_offset(bh)))
 		goto submit_and_retry;
-	wbc_account_cgroup_owner(io->io_wbc, pagecache_page, bh->b_size);
+	wbc_account_cgroup_owner(io->io_wbc, &folio->page, bh->b_size);
 	io->io_next_block++;
 }
 
@@ -434,8 +430,9 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 			struct page *page,
 			int len)
 {
-	struct page *bounce_page = NULL;
-	struct inode *inode = page->mapping->host;
+	struct folio *folio = page_folio(page);
+	struct folio *io_folio = folio;
+	struct inode *inode = folio->mapping->host;
 	unsigned block_start;
 	struct buffer_head *bh, *head;
 	int ret = 0;
@@ -443,30 +440,30 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 	struct writeback_control *wbc = io->io_wbc;
 	bool keep_towrite = false;
 
-	BUG_ON(!PageLocked(page));
-	BUG_ON(PageWriteback(page));
+	BUG_ON(!folio_test_locked(folio));
+	BUG_ON(folio_test_writeback(folio));
 
-	ClearPageError(page);
+	folio_clear_error(folio);
 
 	/*
 	 * Comments copied from block_write_full_page:
 	 *
-	 * The page straddles i_size.  It must be zeroed out on each and every
+	 * The folio straddles i_size.  It must be zeroed out on each and every
 	 * writepage invocation because it may be mmapped.  "A file is mapped
 	 * in multiples of the page size.  For a file that is not a multiple of
 	 * the page size, the remaining memory is zeroed when mapped, and
 	 * writes to that region are not written out to the file."
 	 */
-	if (len < PAGE_SIZE)
-		zero_user_segment(page, len, PAGE_SIZE);
+	if (len < folio_size(folio))
+		folio_zero_segment(folio, len, folio_size(folio));
 	/*
 	 * In the first loop we prepare and mark buffers to submit. We have to
-	 * mark all buffers in the page before submitting so that
-	 * end_page_writeback() cannot be called from ext4_end_bio() when IO
+	 * mark all buffers in the folio before submitting so that
+	 * folio_end_writeback() cannot be called from ext4_end_bio() when IO
 	 * on the first buffer finishes and we are still working on submitting
 	 * the second buffer.
 	 */
-	bh = head = page_buffers(page);
+	bh = head = folio_buffers(folio);
 	do {
 		block_start = bh_offset(bh);
 		if (block_start >= len) {
@@ -481,14 +478,14 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 				clear_buffer_dirty(bh);
 			/*
 			 * Keeping dirty some buffer we cannot write? Make sure
-			 * to redirty the page and keep TOWRITE tag so that
-			 * racing WB_SYNC_ALL writeback does not skip the page.
+			 * to redirty the folio and keep TOWRITE tag so that
+			 * racing WB_SYNC_ALL writeback does not skip the folio.
 			 * This happens e.g. when doing writeout for
 			 * transaction commit.
 			 */
 			if (buffer_dirty(bh)) {
-				if (!PageDirty(page))
-					redirty_page_for_writepage(wbc, page);
+				if (!folio_test_dirty(folio))
+					folio_redirty_for_writepage(wbc, folio);
 				keep_towrite = true;
 			}
 			continue;
@@ -500,11 +497,11 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 		nr_to_submit++;
 	} while ((bh = bh->b_this_page) != head);
 
-	/* Nothing to submit? Just unlock the page... */
+	/* Nothing to submit? Just unlock the folio... */
 	if (!nr_to_submit)
 		return 0;
 
-	bh = head = page_buffers(page);
+	bh = head = folio_buffers(folio);
 
 	/*
 	 * If any blocks are being written to an encrypted file, encrypt them
@@ -516,6 +513,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 	if (fscrypt_inode_uses_fs_layer_crypto(inode) && nr_to_submit) {
 		gfp_t gfp_flags = GFP_NOFS;
 		unsigned int enc_bytes = round_up(len, i_blocksize(inode));
+		struct page *bounce_page;
 
 		/*
 		 * Since bounce page allocation uses a mempool, we can only use
@@ -542,7 +540,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 			}
 
 			printk_ratelimited(KERN_ERR "%s: ret = %d\n", __func__, ret);
-			redirty_page_for_writepage(wbc, page);
+			folio_redirty_for_writepage(wbc, folio);
 			do {
 				if (buffer_async_write(bh)) {
 					clear_buffer_async_write(bh);
@@ -553,18 +551,16 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 
 			return ret;
 		}
+		io_folio = page_folio(bounce_page);
 	}
 
-	if (keep_towrite)
-		set_page_writeback_keepwrite(page);
-	else
-		set_page_writeback(page);
+	__folio_start_writeback(folio, keep_towrite);
 
 	/* Now submit buffers to write */
 	do {
 		if (!buffer_async_write(bh))
 			continue;
-		io_submit_add_bh(io, inode, page, bounce_page, bh);
+		io_submit_add_bh(io, inode, folio, io_folio, bh);
 	} while ((bh = bh->b_this_page) != head);
 
 	return 0;
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 88600a94fa91..1c68d67b832f 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -753,11 +753,6 @@ bool set_page_writeback(struct page *page);
 #define folio_start_writeback_keepwrite(folio)	\
 	__folio_start_writeback(folio, true)
 
-static inline void set_page_writeback_keepwrite(struct page *page)
-{
-	folio_start_writeback_keepwrite(page_folio(page));
-}
-
 static inline bool test_set_page_writeback(struct page *page)
 {
 	return set_page_writeback(page);
-- 
2.39.2



* [PATCH v2 04/29] ext4: Convert ext4_finish_bio() to use folios
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 03/29] ext4: Convert ext4_bio_write_page() to use a folio Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 05/29] ext4: Turn mpage_process_page() into mpage_process_folio() Matthew Wilcox (Oracle)
                   ` (25 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel
  Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel, Ritesh Harjani

Prepare ext4 to support large folios in the page writeback path.
Also set the actual error in the mapping, not just -EIO.
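
For reference, a minimal sketch of the bio_for_each_folio_all() iteration
adopted here (the function name is illustrative):

static void example_finish(struct bio *bio)
{
	struct folio_iter fi;

	bio_for_each_folio_all(fi, bio) {
		struct folio *folio = fi.folio;

		/* fi.offset and fi.length give the byte range within folio */
		if (bio->bi_status)
			mapping_set_error(folio->mapping,
					blk_status_to_errno(bio->bi_status));
		folio_end_writeback(folio);
	}
}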

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/page-io.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index 7850d2cb2e08..f0144ef39bb1 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -99,30 +99,30 @@ static void buffer_io_error(struct buffer_head *bh)
 
 static void ext4_finish_bio(struct bio *bio)
 {
-	struct bio_vec *bvec;
-	struct bvec_iter_all iter_all;
+	struct folio_iter fi;
 
-	bio_for_each_segment_all(bvec, bio, iter_all) {
-		struct page *page = bvec->bv_page;
-		struct page *bounce_page = NULL;
+	bio_for_each_folio_all(fi, bio) {
+		struct folio *folio = fi.folio;
+		struct folio *io_folio = NULL;
 		struct buffer_head *bh, *head;
-		unsigned bio_start = bvec->bv_offset;
-		unsigned bio_end = bio_start + bvec->bv_len;
+		size_t bio_start = fi.offset;
+		size_t bio_end = bio_start + fi.length;
 		unsigned under_io = 0;
 		unsigned long flags;
 
-		if (fscrypt_is_bounce_page(page)) {
-			bounce_page = page;
-			page = fscrypt_pagecache_page(bounce_page);
+		if (fscrypt_is_bounce_folio(folio)) {
+			io_folio = folio;
+			folio = fscrypt_pagecache_folio(folio);
 		}
 
 		if (bio->bi_status) {
-			SetPageError(page);
-			mapping_set_error(page->mapping, -EIO);
+			int err = blk_status_to_errno(bio->bi_status);
+			folio_set_error(folio);
+			mapping_set_error(folio->mapping, err);
 		}
-		bh = head = page_buffers(page);
+		bh = head = folio_buffers(folio);
 		/*
-		 * We check all buffers in the page under b_uptodate_lock
+		 * We check all buffers in the folio under b_uptodate_lock
 		 * to avoid races with other end io clearing async_write flags
 		 */
 		spin_lock_irqsave(&head->b_uptodate_lock, flags);
@@ -141,8 +141,8 @@ static void ext4_finish_bio(struct bio *bio)
 		} while ((bh = bh->b_this_page) != head);
 		spin_unlock_irqrestore(&head->b_uptodate_lock, flags);
 		if (!under_io) {
-			fscrypt_free_bounce_page(bounce_page);
-			end_page_writeback(page);
+			fscrypt_free_bounce_page(&io_folio->page);
+			folio_end_writeback(folio);
 		}
 	}
 }
-- 
2.39.2



* [PATCH v2 05/29] ext4: Turn mpage_process_page() into mpage_process_folio()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 04/29] ext4: Convert ext4_finish_bio() to use folios Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 06/29] ext4: Convert mpage_submit_page() to mpage_submit_folio() Matthew Wilcox (Oracle)
                   ` (24 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

The page/folio is only used to extract the buffers, so this is a
simple change.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inode.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index eaeec84ec1b0..f8c02e55fbe3 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2022,21 +2022,22 @@ static int mpage_process_page_bufs(struct mpage_da_data *mpd,
 }
 
 /*
- * mpage_process_page - update page buffers corresponding to changed extent and
- *		       may submit fully mapped page for IO
- *
- * @mpd		- description of extent to map, on return next extent to map
- * @m_lblk	- logical block mapping.
- * @m_pblk	- corresponding physical mapping.
- * @map_bh	- determines on return whether this page requires any further
+ * mpage_process_folio - update folio buffers corresponding to changed extent
+ *			 and may submit fully mapped page for IO
+ * @mpd: description of extent to map, on return next extent to map
+ * @folio: Contains these buffers.
+ * @m_lblk: logical block mapping.
+ * @m_pblk: corresponding physical mapping.
+ * @map_bh: determines on return whether this page requires any further
  *		  mapping or not.
- * Scan given page buffers corresponding to changed extent and update buffer
+ *
+ * Scan given folio buffers corresponding to changed extent and update buffer
  * state according to new extent state.
  * We map delalloc buffers to their physical location, clear unwritten bits.
- * If the given page is not fully mapped, we update @map to the next extent in
- * the given page that needs mapping & return @map_bh as true.
+ * If the given folio is not fully mapped, we update @mpd to the next extent in
+ * the given folio that needs mapping & return @map_bh as true.
  */
-static int mpage_process_page(struct mpage_da_data *mpd, struct page *page,
+static int mpage_process_folio(struct mpage_da_data *mpd, struct folio *folio,
 			      ext4_lblk_t *m_lblk, ext4_fsblk_t *m_pblk,
 			      bool *map_bh)
 {
@@ -2049,14 +2050,14 @@ static int mpage_process_page(struct mpage_da_data *mpd, struct page *page,
 	ssize_t io_end_size = 0;
 	struct ext4_io_end_vec *io_end_vec = ext4_last_io_end_vec(io_end);
 
-	bh = head = page_buffers(page);
+	bh = head = folio_buffers(folio);
 	do {
 		if (lblk < mpd->map.m_lblk)
 			continue;
 		if (lblk >= mpd->map.m_lblk + mpd->map.m_len) {
 			/*
 			 * Buffer after end of mapped extent.
-			 * Find next buffer in the page to map.
+			 * Find next buffer in the folio to map.
 			 */
 			mpd->map.m_len = 0;
 			mpd->map.m_flags = 0;
@@ -2129,9 +2130,9 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 		if (nr == 0)
 			break;
 		for (i = 0; i < nr; i++) {
-			struct page *page = &fbatch.folios[i]->page;
+			struct folio *folio = fbatch.folios[i];
 
-			err = mpage_process_page(mpd, page, &lblk, &pblock,
+			err = mpage_process_folio(mpd, folio, &lblk, &pblock,
 						 &map_bh);
 			/*
 			 * If map_bh is true, means page may require further bh
@@ -2141,10 +2142,10 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 			if (err < 0 || map_bh)
 				goto out;
 			/* Page fully mapped - let IO run! */
-			err = mpage_submit_page(mpd, page);
+			err = mpage_submit_page(mpd, &folio->page);
 			if (err < 0)
 				goto out;
-			mpage_page_done(mpd, page);
+			mpage_page_done(mpd, &folio->page);
 		}
 		folio_batch_release(&fbatch);
 	}
-- 
2.39.2



* [PATCH v2 06/29] ext4: Convert mpage_submit_page() to mpage_submit_folio()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (4 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 05/29] ext4: Turn mpage_process_page() into mpage_process_folio() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 07/29] ext4: Convert mpage_page_done() to mpage_folio_done() Matthew Wilcox (Oracle)
                   ` (23 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

All callers now have a folio, so we can pass one in and use the folio
APIs to support large folios, as well as save instructions by eliminating
calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inode.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index f8c02e55fbe3..8f482032d501 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1869,34 +1869,33 @@ static void mpage_page_done(struct mpage_da_data *mpd, struct page *page)
 	unlock_page(page);
 }
 
-static int mpage_submit_page(struct mpage_da_data *mpd, struct page *page)
+static int mpage_submit_folio(struct mpage_da_data *mpd, struct folio *folio)
 {
-	int len;
+	size_t len;
 	loff_t size;
 	int err;
 
-	BUG_ON(page->index != mpd->first_page);
-	clear_page_dirty_for_io(page);
+	BUG_ON(folio->index != mpd->first_page);
+	folio_clear_dirty_for_io(folio);
 	/*
 	 * We have to be very careful here!  Nothing protects writeback path
 	 * against i_size changes and the page can be writeably mapped into
 	 * page tables. So an application can be growing i_size and writing
-	 * data through mmap while writeback runs. clear_page_dirty_for_io()
+	 * data through mmap while writeback runs. folio_clear_dirty_for_io()
 	 * write-protects our page in page tables and the page cannot get
-	 * written to again until we release page lock. So only after
-	 * clear_page_dirty_for_io() we are safe to sample i_size for
+	 * written to again until we release folio lock. So only after
+	 * folio_clear_dirty_for_io() we are safe to sample i_size for
 	 * ext4_bio_write_page() to zero-out tail of the written page. We rely
 	 * on the barrier provided by TestClearPageDirty in
-	 * clear_page_dirty_for_io() to make sure i_size is really sampled only
+	 * folio_clear_dirty_for_io() to make sure i_size is really sampled only
 	 * after page tables are updated.
 	 */
 	size = i_size_read(mpd->inode);
-	if (page->index == size >> PAGE_SHIFT &&
+	len = folio_size(folio);
+	if (folio_pos(folio) + len > size &&
 	    !ext4_verity_in_progress(mpd->inode))
 		len = size & ~PAGE_MASK;
-	else
-		len = PAGE_SIZE;
-	err = ext4_bio_write_page(&mpd->io_submit, page, len);
+	err = ext4_bio_write_page(&mpd->io_submit, &folio->page, len);
 	if (!err)
 		mpd->wbc->nr_to_write--;
 
@@ -2009,7 +2008,7 @@ static int mpage_process_page_bufs(struct mpage_da_data *mpd,
 	} while (lblk++, (bh = bh->b_this_page) != head);
 	/* So far everything mapped? Submit the page for IO. */
 	if (mpd->map.m_len == 0) {
-		err = mpage_submit_page(mpd, head->b_page);
+		err = mpage_submit_folio(mpd, head->b_folio);
 		if (err < 0)
 			return err;
 		mpage_page_done(mpd, head->b_page);
@@ -2142,7 +2141,7 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 			if (err < 0 || map_bh)
 				goto out;
 			/* Page fully mapped - let IO run! */
-			err = mpage_submit_page(mpd, &folio->page);
+			err = mpage_submit_folio(mpd, folio);
 			if (err < 0)
 				goto out;
 			mpage_page_done(mpd, &folio->page);
@@ -2532,12 +2531,12 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 				if (ext4_page_nomap_can_writeout(&folio->page)) {
 					WARN_ON_ONCE(sb->s_writers.frozen ==
 						     SB_FREEZE_COMPLETE);
-					err = mpage_submit_page(mpd, &folio->page);
+					err = mpage_submit_folio(mpd, folio);
 					if (err < 0)
 						goto out;
 				}
 				/* Pending dirtying of journalled data? */
-				if (PageChecked(&folio->page)) {
+				if (folio_test_checked(folio)) {
 					WARN_ON_ONCE(sb->s_writers.frozen >=
 						     SB_FREEZE_FS);
 					err = mpage_journal_page_buffers(handle,
-- 
2.39.2



* [PATCH v2 07/29] ext4: Convert mpage_page_done() to mpage_folio_done()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (5 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 06/29] ext4: Convert mpage_submit_page() to mpage_submit_folio() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 08/29] ext4: Convert ext4_bio_write_page() to ext4_bio_write_folio() Matthew Wilcox (Oracle)
                   ` (22 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

All callers now have a folio, so we can pass one in and use the folio
APIs to support large folios, as well as save instructions by eliminating
a call to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inode.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 8f482032d501..801fdeffe2f9 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1863,10 +1863,10 @@ int ext4_da_get_block_prep(struct inode *inode, sector_t iblock,
 	return 0;
 }
 
-static void mpage_page_done(struct mpage_da_data *mpd, struct page *page)
+static void mpage_folio_done(struct mpage_da_data *mpd, struct folio *folio)
 {
-	mpd->first_page++;
-	unlock_page(page);
+	mpd->first_page += folio_nr_pages(folio);
+	folio_unlock(folio);
 }
 
 static int mpage_submit_folio(struct mpage_da_data *mpd, struct folio *folio)
@@ -2011,7 +2011,7 @@ static int mpage_process_page_bufs(struct mpage_da_data *mpd,
 		err = mpage_submit_folio(mpd, head->b_folio);
 		if (err < 0)
 			return err;
-		mpage_page_done(mpd, head->b_page);
+		mpage_folio_done(mpd, head->b_folio);
 	}
 	if (lblk >= blocks) {
 		mpd->scanned_until_end = 1;
@@ -2144,7 +2144,7 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
 			err = mpage_submit_folio(mpd, folio);
 			if (err < 0)
 				goto out;
-			mpage_page_done(mpd, &folio->page);
+			mpage_folio_done(mpd, folio);
 		}
 		folio_batch_release(&fbatch);
 	}
@@ -2544,7 +2544,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 					if (err < 0)
 						goto out;
 				}
-				mpage_page_done(mpd, &folio->page);
+				mpage_folio_done(mpd, folio);
 			} else {
 				/* Add all dirty buffers to mpd */
 				lblk = ((ext4_lblk_t)folio->index) <<
-- 
2.39.2



* [PATCH v2 08/29] ext4: Convert ext4_bio_write_page() to ext4_bio_write_folio()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (6 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 07/29] ext4: Convert mpage_page_done() to mpage_folio_done() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 09/29] ext4: Convert ext4_readpage_inline() to take a folio Matthew Wilcox (Oracle)
                   ` (21 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

The only caller now has a folio, so pass it in directly and avoid the
call to page_folio() at the beginning.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/ext4.h    |  5 ++---
 fs/ext4/inode.c   |  6 +++---
 fs/ext4/page-io.c | 10 ++++------
 3 files changed, 9 insertions(+), 12 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 9b2cfc32cf78..bee344ebd385 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3757,9 +3757,8 @@ extern void ext4_io_submit_init(struct ext4_io_submit *io,
 				struct writeback_control *wbc);
 extern void ext4_end_io_rsv_work(struct work_struct *work);
 extern void ext4_io_submit(struct ext4_io_submit *io);
-extern int ext4_bio_write_page(struct ext4_io_submit *io,
-			       struct page *page,
-			       int len);
+int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *page,
+		size_t len);
 extern struct ext4_io_end_vec *ext4_alloc_io_end_vec(ext4_io_end_t *io_end);
 extern struct ext4_io_end_vec *ext4_last_io_end_vec(ext4_io_end_t *io_end);
 
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 801fdeffe2f9..4119c63c1215 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1885,8 +1885,8 @@ static int mpage_submit_folio(struct mpage_da_data *mpd, struct folio *folio)
 	 * write-protects our page in page tables and the page cannot get
 	 * written to again until we release folio lock. So only after
 	 * folio_clear_dirty_for_io() we are safe to sample i_size for
-	 * ext4_bio_write_page() to zero-out tail of the written page. We rely
-	 * on the barrier provided by TestClearPageDirty in
+	 * ext4_bio_write_folio() to zero-out tail of the written page. We rely
+	 * on the barrier provided by folio_test_clear_dirty() in
 	 * folio_clear_dirty_for_io() to make sure i_size is really sampled only
 	 * after page tables are updated.
 	 */
@@ -1895,7 +1895,7 @@ static int mpage_submit_folio(struct mpage_da_data *mpd, struct folio *folio)
 	if (folio_pos(folio) + len > size &&
 	    !ext4_verity_in_progress(mpd->inode))
 		len = size & ~PAGE_MASK;
-	err = ext4_bio_write_page(&mpd->io_submit, &folio->page, len);
+	err = ext4_bio_write_folio(&mpd->io_submit, folio, len);
 	if (!err)
 		mpd->wbc->nr_to_write--;
 
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index f0144ef39bb1..8fe1875b0a42 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -426,11 +426,9 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
 	io->io_next_block++;
 }
 
-int ext4_bio_write_page(struct ext4_io_submit *io,
-			struct page *page,
-			int len)
+int ext4_bio_write_folio(struct ext4_io_submit *io, struct folio *folio,
+		size_t len)
 {
-	struct folio *folio = page_folio(page);
 	struct folio *io_folio = folio;
 	struct inode *inode = folio->mapping->host;
 	unsigned block_start;
@@ -523,8 +521,8 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
 		if (io->io_bio)
 			gfp_flags = GFP_NOWAIT | __GFP_NOWARN;
 	retry_encrypt:
-		bounce_page = fscrypt_encrypt_pagecache_blocks(page, enc_bytes,
-							       0, gfp_flags);
+		bounce_page = fscrypt_encrypt_pagecache_blocks(&folio->page,
+					enc_bytes, 0, gfp_flags);
 		if (IS_ERR(bounce_page)) {
 			ret = PTR_ERR(bounce_page);
 			if (ret == -ENOMEM &&
-- 
2.39.2



* [PATCH v2 09/29] ext4: Convert ext4_readpage_inline() to take a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (7 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 08/29] ext4: Convert ext4_bio_write_page() to ext4_bio_write_folio() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 10/29] ext4: Convert ext4_convert_inline_data_to_extent() to use " Matthew Wilcox (Oracle)
                   ` (20 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Use the folio API in this function; this saves a few calls to
compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/ext4.h   |  2 +-
 fs/ext4/inline.c | 14 +++++++-------
 fs/ext4/inode.c  |  2 +-
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index bee344ebd385..1de5d838996a 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3550,7 +3550,7 @@ extern int ext4_init_inline_data(handle_t *handle, struct inode *inode,
 				 unsigned int len);
 extern int ext4_destroy_inline_data(handle_t *handle, struct inode *inode);
 
-extern int ext4_readpage_inline(struct inode *inode, struct page *page);
+int ext4_readpage_inline(struct inode *inode, struct folio *folio);
 extern int ext4_try_to_write_inline_data(struct address_space *mapping,
 					 struct inode *inode,
 					 loff_t pos, unsigned len,
diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 1602d74b5eeb..e9bae3002319 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -501,7 +501,7 @@ static int ext4_read_inline_page(struct inode *inode, struct page *page)
 	return ret;
 }
 
-int ext4_readpage_inline(struct inode *inode, struct page *page)
+int ext4_readpage_inline(struct inode *inode, struct folio *folio)
 {
 	int ret = 0;
 
@@ -515,16 +515,16 @@ int ext4_readpage_inline(struct inode *inode, struct page *page)
 	 * Current inline data can only exist in the 1st page,
 	 * So for all the other pages, just set them uptodate.
 	 */
-	if (!page->index)
-		ret = ext4_read_inline_page(inode, page);
-	else if (!PageUptodate(page)) {
-		zero_user_segment(page, 0, PAGE_SIZE);
-		SetPageUptodate(page);
+	if (!folio->index)
+		ret = ext4_read_inline_page(inode, &folio->page);
+	else if (!folio_test_uptodate(folio)) {
+		folio_zero_segment(folio, 0, folio_size(folio));
+		folio_mark_uptodate(folio);
 	}
 
 	up_read(&EXT4_I(inode)->xattr_sem);
 
-	unlock_page(page);
+	folio_unlock(folio);
 	return ret >= 0 ? 0 : ret;
 }
 
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 4119c63c1215..6287cd1aa97e 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3155,7 +3155,7 @@ static int ext4_read_folio(struct file *file, struct folio *folio)
 	trace_ext4_readpage(page);
 
 	if (ext4_has_inline_data(inode))
-		ret = ext4_readpage_inline(inode, page);
+		ret = ext4_readpage_inline(inode, folio);
 
 	if (ret == -EAGAIN)
 		return ext4_mpage_readpages(inode, NULL, page);
-- 
2.39.2



* [PATCH v2 10/29] ext4: Convert ext4_convert_inline_data_to_extent() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (8 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 09/29] ext4: Convert ext4_readpage_inline() to take a folio Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 11/29] ext4: Convert ext4_try_to_write_inline_data() " Matthew Wilcox (Oracle)
                   ` (19 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Saves a number of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inline.c | 40 +++++++++++++++++++---------------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index e9bae3002319..f339340ba66c 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -534,8 +534,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
 	int ret, needed_blocks, no_expand;
 	handle_t *handle = NULL;
 	int retries = 0, sem_held = 0;
-	struct page *page = NULL;
-	unsigned int flags;
+	struct folio *folio = NULL;
 	unsigned from, to;
 	struct ext4_iloc iloc;
 
@@ -564,10 +563,9 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
 
 	/* We cannot recurse into the filesystem as the transaction is already
 	 * started */
-	flags = memalloc_nofs_save();
-	page = grab_cache_page_write_begin(mapping, 0);
-	memalloc_nofs_restore(flags);
-	if (!page) {
+	folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
+			mapping_gfp_mask(mapping));
+	if (!folio) {
 		ret = -ENOMEM;
 		goto out;
 	}
@@ -582,8 +580,8 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
 
 	from = 0;
 	to = ext4_get_inline_size(inode);
-	if (!PageUptodate(page)) {
-		ret = ext4_read_inline_page(inode, page);
+	if (!folio_test_uptodate(folio)) {
+		ret = ext4_read_inline_page(inode, &folio->page);
 		if (ret < 0)
 			goto out;
 	}
@@ -593,21 +591,21 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
 		goto out;
 
 	if (ext4_should_dioread_nolock(inode)) {
-		ret = __block_write_begin(page, from, to,
+		ret = __block_write_begin(&folio->page, from, to,
 					  ext4_get_block_unwritten);
 	} else
-		ret = __block_write_begin(page, from, to, ext4_get_block);
+		ret = __block_write_begin(&folio->page, from, to, ext4_get_block);
 
 	if (!ret && ext4_should_journal_data(inode)) {
-		ret = ext4_walk_page_buffers(handle, inode, page_buffers(page),
-					     from, to, NULL,
-					     do_journal_get_write_access);
+		ret = ext4_walk_page_buffers(handle, inode,
+					     folio_buffers(folio), from, to,
+					     NULL, do_journal_get_write_access);
 	}
 
 	if (ret) {
-		unlock_page(page);
-		put_page(page);
-		page = NULL;
+		folio_unlock(folio);
+		folio_put(folio);
+		folio = NULL;
 		ext4_orphan_add(handle, inode);
 		ext4_write_unlock_xattr(inode, &no_expand);
 		sem_held = 0;
@@ -627,12 +625,12 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
 	if (ret == -ENOSPC && ext4_should_retry_alloc(inode->i_sb, &retries))
 		goto retry;
 
-	if (page)
-		block_commit_write(page, from, to);
+	if (folio)
+		block_commit_write(&folio->page, from, to);
 out:
-	if (page) {
-		unlock_page(page);
-		put_page(page);
+	if (folio) {
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 	if (sem_held)
 		ext4_write_unlock_xattr(inode, &no_expand);
-- 
2.39.2



* [PATCH v2 11/29] ext4: Convert ext4_try_to_write_inline_data() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (9 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 10/29] ext4: Convert ext4_convert_inline_data_to_extent() to use " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 12/29] ext4: Convert ext4_da_convert_inline_data_to_extent() " Matthew Wilcox (Oracle)
                   ` (18 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Saves a number of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inline.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index f339340ba66c..881d559c503f 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -653,8 +653,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
 {
 	int ret;
 	handle_t *handle;
-	unsigned int flags;
-	struct page *page;
+	struct folio *folio;
 	struct ext4_iloc iloc;
 
 	if (pos + len > ext4_get_max_inline_size(inode))
@@ -691,28 +690,27 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
 	if (ret)
 		goto out;
 
-	flags = memalloc_nofs_save();
-	page = grab_cache_page_write_begin(mapping, 0);
-	memalloc_nofs_restore(flags);
-	if (!page) {
+	folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
+					mapping_gfp_mask(mapping));
+	if (!folio) {
 		ret = -ENOMEM;
 		goto out;
 	}
 
-	*pagep = page;
+	*pagep = &folio->page;
 	down_read(&EXT4_I(inode)->xattr_sem);
 	if (!ext4_has_inline_data(inode)) {
 		ret = 0;
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 		goto out_up_read;
 	}
 
-	if (!PageUptodate(page)) {
-		ret = ext4_read_inline_page(inode, page);
+	if (!folio_test_uptodate(folio)) {
+		ret = ext4_read_inline_page(inode, &folio->page);
 		if (ret < 0) {
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			goto out_up_read;
 		}
 	}
-- 
2.39.2



* [PATCH v2 12/29] ext4: Convert ext4_da_convert_inline_data_to_extent() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (10 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 11/29] ext4: Convert ext4_try_to_write_inline_data() " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 13/29] ext4: Convert ext4_da_write_inline_data_begin() " Matthew Wilcox (Oracle)
                   ` (17 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Saves a number of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inline.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 881d559c503f..45d74274d822 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -848,10 +848,11 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
 						 void **fsdata)
 {
 	int ret = 0, inline_size;
-	struct page *page;
+	struct folio *folio;
 
-	page = grab_cache_page_write_begin(mapping, 0);
-	if (!page)
+	folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN,
+					mapping_gfp_mask(mapping));
+	if (!folio)
 		return -ENOMEM;
 
 	down_read(&EXT4_I(inode)->xattr_sem);
@@ -862,32 +863,32 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
 
 	inline_size = ext4_get_inline_size(inode);
 
-	if (!PageUptodate(page)) {
-		ret = ext4_read_inline_page(inode, page);
+	if (!folio_test_uptodate(folio)) {
+		ret = ext4_read_inline_page(inode, &folio->page);
 		if (ret < 0)
 			goto out;
 	}
 
-	ret = __block_write_begin(page, 0, inline_size,
+	ret = __block_write_begin(&folio->page, 0, inline_size,
 				  ext4_da_get_block_prep);
 	if (ret) {
 		up_read(&EXT4_I(inode)->xattr_sem);
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 		ext4_truncate_failed_write(inode);
 		return ret;
 	}
 
-	SetPageDirty(page);
-	SetPageUptodate(page);
+	folio_mark_dirty(folio);
+	folio_mark_uptodate(folio);
 	ext4_clear_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA);
 	*fsdata = (void *)CONVERT_INLINE_DATA;
 
 out:
 	up_read(&EXT4_I(inode)->xattr_sem);
-	if (page) {
-		unlock_page(page);
-		put_page(page);
+	if (folio) {
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 	return ret;
 }
-- 
2.39.2



* [PATCH v2 13/29] ext4: Convert ext4_da_write_inline_data_begin() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (11 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 12/29] ext4: Convert ext4_da_convert_inline_data_to_extent() " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 14/29] ext4: Convert ext4_read_inline_page() to ext4_read_inline_folio() Matthew Wilcox (Oracle)
                   ` (16 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Saves a number of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inline.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 45d74274d822..2fa6c51baef9 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -909,10 +909,9 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
 {
 	int ret;
 	handle_t *handle;
-	struct page *page;
+	struct folio *folio;
 	struct ext4_iloc iloc;
 	int retries = 0;
-	unsigned int flags;
 
 	ret = ext4_get_inode_loc(inode, &iloc);
 	if (ret)
@@ -944,10 +943,9 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
 	 * We cannot recurse into the filesystem as the transaction
 	 * is already started.
 	 */
-	flags = memalloc_nofs_save();
-	page = grab_cache_page_write_begin(mapping, 0);
-	memalloc_nofs_restore(flags);
-	if (!page) {
+	folio = __filemap_get_folio(mapping, 0, FGP_WRITEBEGIN | FGP_NOFS,
+					mapping_gfp_mask(mapping));
+	if (!folio) {
 		ret = -ENOMEM;
 		goto out_journal;
 	}
@@ -958,8 +956,8 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
 		goto out_release_page;
 	}
 
-	if (!PageUptodate(page)) {
-		ret = ext4_read_inline_page(inode, page);
+	if (!folio_test_uptodate(folio)) {
+		ret = ext4_read_inline_page(inode, &folio->page);
 		if (ret < 0)
 			goto out_release_page;
 	}
@@ -969,13 +967,13 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
 		goto out_release_page;
 
 	up_read(&EXT4_I(inode)->xattr_sem);
-	*pagep = page;
+	*pagep = &folio->page;
 	brelse(iloc.bh);
 	return 1;
 out_release_page:
 	up_read(&EXT4_I(inode)->xattr_sem);
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 out_journal:
 	ext4_journal_stop(handle);
 out:
-- 
2.39.2



* [PATCH v2 14/29] ext4: Convert ext4_read_inline_page() to ext4_read_inline_folio()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (12 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 13/29] ext4: Convert ext4_da_write_inline_data_begin() " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 15/29] ext4: Convert ext4_write_inline_data_end() to use a folio Matthew Wilcox (Oracle)
                   ` (15 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

All callers now have a folio, so pass it and use it.  The folio may
be large, although I doubt we'll want to use a large folio for an
inline file.
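
For reference, a minimal sketch of the kmap_local_folio() pattern used
here; inline data never spans a page, hence mapping at offset 0 (the
function name is illustrative):

static void example_fill_inline(struct folio *folio, const void *data,
				size_t len)
{
	void *kaddr;

	/* map the first page of the folio; inline data fits within it */
	kaddr = kmap_local_folio(folio, 0);
	memcpy(kaddr, data, len);
	flush_dcache_folio(folio);
	kunmap_local(kaddr);
	folio_zero_segment(folio, len, folio_size(folio));
	folio_mark_uptodate(folio);
}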

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inline.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)

diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 2fa6c51baef9..4c819b6c70c1 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -467,16 +467,16 @@ static int ext4_destroy_inline_data_nolock(handle_t *handle,
 	return error;
 }
 
-static int ext4_read_inline_page(struct inode *inode, struct page *page)
+static int ext4_read_inline_folio(struct inode *inode, struct folio *folio)
 {
 	void *kaddr;
 	int ret = 0;
 	size_t len;
 	struct ext4_iloc iloc;
 
-	BUG_ON(!PageLocked(page));
+	BUG_ON(!folio_test_locked(folio));
 	BUG_ON(!ext4_has_inline_data(inode));
-	BUG_ON(page->index);
+	BUG_ON(folio->index);
 
 	if (!EXT4_I(inode)->i_inline_off) {
 		ext4_warning(inode->i_sb, "inode %lu doesn't have inline data.",
@@ -489,12 +489,13 @@ static int ext4_read_inline_page(struct inode *inode, struct page *page)
 		goto out;
 
 	len = min_t(size_t, ext4_get_inline_size(inode), i_size_read(inode));
-	kaddr = kmap_atomic(page);
+	BUG_ON(len > PAGE_SIZE);
+	kaddr = kmap_local_folio(folio, 0);
 	ret = ext4_read_inline_data(inode, kaddr, len, &iloc);
-	flush_dcache_page(page);
-	kunmap_atomic(kaddr);
-	zero_user_segment(page, len, PAGE_SIZE);
-	SetPageUptodate(page);
+	flush_dcache_folio(folio);
+	kunmap_local(kaddr);
+	folio_zero_segment(folio, len, folio_size(folio));
+	folio_mark_uptodate(folio);
 	brelse(iloc.bh);
 
 out:
@@ -516,7 +517,7 @@ int ext4_readpage_inline(struct inode *inode, struct folio *folio)
 	 * So for all the other pages, just set them uptodate.
 	 */
 	if (!folio->index)
-		ret = ext4_read_inline_page(inode, &folio->page);
+		ret = ext4_read_inline_folio(inode, folio);
 	else if (!folio_test_uptodate(folio)) {
 		folio_zero_segment(folio, 0, folio_size(folio));
 		folio_mark_uptodate(folio);
@@ -581,7 +582,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
 	from = 0;
 	to = ext4_get_inline_size(inode);
 	if (!folio_test_uptodate(folio)) {
-		ret = ext4_read_inline_page(inode, &folio->page);
+		ret = ext4_read_inline_folio(inode, folio);
 		if (ret < 0)
 			goto out;
 	}
@@ -707,7 +708,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
 	}
 
 	if (!folio_test_uptodate(folio)) {
-		ret = ext4_read_inline_page(inode, &folio->page);
+		ret = ext4_read_inline_folio(inode, folio);
 		if (ret < 0) {
 			folio_unlock(folio);
 			folio_put(folio);
@@ -864,7 +865,7 @@ static int ext4_da_convert_inline_data_to_extent(struct address_space *mapping,
 	inline_size = ext4_get_inline_size(inode);
 
 	if (!folio_test_uptodate(folio)) {
-		ret = ext4_read_inline_page(inode, &folio->page);
+		ret = ext4_read_inline_folio(inode, folio);
 		if (ret < 0)
 			goto out;
 	}
@@ -957,7 +958,7 @@ int ext4_da_write_inline_data_begin(struct address_space *mapping,
 	}
 
 	if (!folio_test_uptodate(folio)) {
-		ret = ext4_read_inline_page(inode, &folio->page);
+		ret = ext4_read_inline_folio(inode, folio);
 		if (ret < 0)
 			goto out_release_page;
 	}
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
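
A minimal sketch of the pattern this patch applies, with a hypothetical
helper name (fill_folio_from_buffer() is not a real kernel function):
copy data into a locked folio, zero the tail, and publish it.  Note
that kmap_local_folio() maps only a single page of a (possibly large)
folio, which is why the patch bounds the inline data length to
PAGE_SIZE.

static int fill_folio_from_buffer(struct folio *folio, const void *src,
				  size_t len)
{
	void *kaddr;

	/* Caller holds the folio lock.  kmap_local_folio() maps one
	 * page at a time, so the data must fit in the first page. */
	if (WARN_ON_ONCE(len > PAGE_SIZE))
		return -EINVAL;

	kaddr = kmap_local_folio(folio, 0);
	memcpy(kaddr, src, len);
	flush_dcache_folio(folio);
	kunmap_local(kaddr);

	/* Zero everything past the copied data, then publish it. */
	folio_zero_segment(folio, len, folio_size(folio));
	folio_mark_uptodate(folio);
	return 0;
}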

* [PATCH v2 15/29] ext4: Convert ext4_write_inline_data_end() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (13 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 14/29] ext4: Convert ext4_read_inline_page() to ext4_read_inline_folio() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 16/29] ext4: Convert ext4_write_begin() " Matthew Wilcox (Oracle)
                   ` (14 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Convert the incoming page to a folio so that we call compound_head()
only once instead of seven times.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inline.c | 29 +++++++++++++++--------------
 1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
index 4c819b6c70c1..b9fb1177fff6 100644
--- a/fs/ext4/inline.c
+++ b/fs/ext4/inline.c
@@ -732,20 +732,21 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
 int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
 			       unsigned copied, struct page *page)
 {
+	struct folio *folio = page_folio(page);
 	handle_t *handle = ext4_journal_current_handle();
 	int no_expand;
 	void *kaddr;
 	struct ext4_iloc iloc;
 	int ret = 0, ret2;
 
-	if (unlikely(copied < len) && !PageUptodate(page))
+	if (unlikely(copied < len) && !folio_test_uptodate(folio))
 		copied = 0;
 
 	if (likely(copied)) {
 		ret = ext4_get_inode_loc(inode, &iloc);
 		if (ret) {
-			unlock_page(page);
-			put_page(page);
+			folio_unlock(folio);
+			folio_put(folio);
 			ext4_std_error(inode->i_sb, ret);
 			goto out;
 		}
@@ -759,30 +760,30 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
 		 */
 		(void) ext4_find_inline_data_nolock(inode);
 
-		kaddr = kmap_atomic(page);
+		kaddr = kmap_local_folio(folio, 0);
 		ext4_write_inline_data(inode, &iloc, kaddr, pos, copied);
-		kunmap_atomic(kaddr);
-		SetPageUptodate(page);
-		/* clear page dirty so that writepages wouldn't work for us. */
-		ClearPageDirty(page);
+		kunmap_local(kaddr);
+		folio_mark_uptodate(folio);
+		/* clear dirty flag so that writepages wouldn't work for us. */
+		folio_clear_dirty(folio);
 
 		ext4_write_unlock_xattr(inode, &no_expand);
 		brelse(iloc.bh);
 
 		/*
-		 * It's important to update i_size while still holding page
+		 * It's important to update i_size while still holding folio
 		 * lock: page writeout could otherwise come in and zero
 		 * beyond i_size.
 		 */
 		ext4_update_inode_size(inode, pos + copied);
 	}
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 
 	/*
-	 * Don't mark the inode dirty under page lock. First, it unnecessarily
-	 * makes the holding time of page lock longer. Second, it forces lock
-	 * ordering of page lock and transaction start for journaling
+	 * Don't mark the inode dirty under folio lock. First, it unnecessarily
+	 * makes the holding time of folio lock longer. Second, it forces lock
+	 * ordering of folio lock and transaction start for journaling
 	 * filesystems.
 	 */
 	if (likely(copied))
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
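
Most patches in this series open with the same idiom seen here: convert
the incoming struct page to a folio once with page_folio(), then use
folio_* calls for the rest of the function.  Each legacy page call
(PageUptodate(), ClearPageDirty(), unlock_page(), put_page(), ...)
resolves the head page internally, which is where the "seven times"
saving above comes from.  A rough before/after sketch, with
hypothetical function names:

/* Before: each page_* call looks up the head page again. */
static void finish_write_page(struct page *page)
{
	SetPageUptodate(page);
	ClearPageDirty(page);
	unlock_page(page);
	put_page(page);
}

/* After: resolve the head page once, then operate on the folio. */
static void finish_write_folio(struct page *page)
{
	struct folio *folio = page_folio(page);

	folio_mark_uptodate(folio);
	folio_clear_dirty(folio);
	folio_unlock(folio);
	folio_put(folio);
}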

* [PATCH v2 16/29] ext4: Convert ext4_write_begin() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (14 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 15/29] ext4: Convert ext4_write_inline_data_end() to use a folio Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 17/29] ext4: Convert ext4_write_end() " Matthew Wilcox (Oracle)
                   ` (13 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Remove a lot of calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inode.c | 53 +++++++++++++++++++++++++------------------------
 1 file changed, 27 insertions(+), 26 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 6287cd1aa97e..769f6d5e0ec3 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1139,7 +1139,7 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
 	int ret, needed_blocks;
 	handle_t *handle;
 	int retries = 0;
-	struct page *page;
+	struct folio *folio;
 	pgoff_t index;
 	unsigned from, to;
 
@@ -1166,68 +1166,69 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
 	}
 
 	/*
-	 * grab_cache_page_write_begin() can take a long time if the
-	 * system is thrashing due to memory pressure, or if the page
+	 * __filemap_get_folio() can take a long time if the
+	 * system is thrashing due to memory pressure, or if the folio
 	 * is being written back.  So grab it first before we start
 	 * the transaction handle.  This also allows us to allocate
-	 * the page (if needed) without using GFP_NOFS.
+	 * the folio (if needed) without using GFP_NOFS.
 	 */
 retry_grab:
-	page = grab_cache_page_write_begin(mapping, index);
-	if (!page)
+	folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+					mapping_gfp_mask(mapping));
+	if (!folio)
 		return -ENOMEM;
 	/*
 	 * The same as page allocation, we prealloc buffer heads before
 	 * starting the handle.
 	 */
-	if (!page_has_buffers(page))
-		create_empty_buffers(page, inode->i_sb->s_blocksize, 0);
+	if (!folio_buffers(folio))
+		create_empty_buffers(&folio->page, inode->i_sb->s_blocksize, 0);
 
-	unlock_page(page);
+	folio_unlock(folio);
 
 retry_journal:
 	handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, needed_blocks);
 	if (IS_ERR(handle)) {
-		put_page(page);
+		folio_put(folio);
 		return PTR_ERR(handle);
 	}
 
-	lock_page(page);
-	if (page->mapping != mapping) {
-		/* The page got truncated from under us */
-		unlock_page(page);
-		put_page(page);
+	folio_lock(folio);
+	if (folio->mapping != mapping) {
+		/* The folio got truncated from under us */
+		folio_unlock(folio);
+		folio_put(folio);
 		ext4_journal_stop(handle);
 		goto retry_grab;
 	}
-	/* In case writeback began while the page was unlocked */
-	wait_for_stable_page(page);
+	/* In case writeback began while the folio was unlocked */
+	folio_wait_stable(folio);
 
 #ifdef CONFIG_FS_ENCRYPTION
 	if (ext4_should_dioread_nolock(inode))
-		ret = ext4_block_write_begin(page, pos, len,
+		ret = ext4_block_write_begin(&folio->page, pos, len,
 					     ext4_get_block_unwritten);
 	else
-		ret = ext4_block_write_begin(page, pos, len,
+		ret = ext4_block_write_begin(&folio->page, pos, len,
 					     ext4_get_block);
 #else
 	if (ext4_should_dioread_nolock(inode))
-		ret = __block_write_begin(page, pos, len,
+		ret = __block_write_begin(&folio->page, pos, len,
 					  ext4_get_block_unwritten);
 	else
-		ret = __block_write_begin(page, pos, len, ext4_get_block);
+		ret = __block_write_begin(&folio->page, pos, len, ext4_get_block);
 #endif
 	if (!ret && ext4_should_journal_data(inode)) {
 		ret = ext4_walk_page_buffers(handle, inode,
-					     page_buffers(page), from, to, NULL,
-					     do_journal_get_write_access);
+					     folio_buffers(folio), from, to,
+					     NULL, do_journal_get_write_access);
 	}
 
 	if (ret) {
 		bool extended = (pos + len > inode->i_size) &&
 				!ext4_verity_in_progress(inode);
 
-		unlock_page(page);
+		folio_unlock(folio);
 		/*
 		 * __block_write_begin may have instantiated a few blocks
 		 * outside i_size.  Trim these off again. Don't need
@@ -1255,10 +1256,10 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
 		if (ret == -ENOSPC &&
 		    ext4_should_retry_alloc(inode->i_sb, &retries))
 			goto retry_journal;
-		put_page(page);
+		folio_put(folio);
 		return ret;
 	}
-	*pagep = page;
+	*pagep = &folio->page;
 	return ret;
 }
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
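
FGP_WRITEBEGIN, added in patch 1 of this series, bundles the flags that
grab_cache_page_write_begin() used to pass for ->write_begin() lookups.
A minimal sketch of the lookup this patch (and the later
ext4_da_write_begin() patch) switches to, assuming the base of this
series where __filemap_get_folio() still returns NULL on failure (later
kernels return an ERR_PTR instead, as discussed further down the
thread); the helper name is hypothetical:

static struct folio *write_begin_get_folio(struct address_space *mapping,
					   pgoff_t index)
{
	struct folio *folio;

	folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
				    mapping_gfp_mask(mapping));
	if (!folio)
		return NULL;	/* caller turns this into -ENOMEM */

	/* Like grab_cache_page_write_begin(): the folio comes back
	 * locked and with a reference held. */
	return folio;
}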

* [PATCH v2 17/29] ext4: Convert ext4_write_end() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (15 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 16/29] ext4: Convert ext4_write_begin() " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 18/29] ext4: Use a folio in ext4_journalled_write_end() Matthew Wilcox (Oracle)
                   ` (12 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Convert the incoming struct page to a folio.  Replaces two implicit
calls to compound_head() with one explicit call.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inode.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 769f6d5e0ec3..af2bfabfbd27 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1289,6 +1289,7 @@ static int ext4_write_end(struct file *file,
 			  loff_t pos, unsigned len, unsigned copied,
 			  struct page *page, void *fsdata)
 {
+	struct folio *folio = page_folio(page);
 	handle_t *handle = ext4_journal_current_handle();
 	struct inode *inode = mapping->host;
 	loff_t old_size = inode->i_size;
@@ -1304,7 +1305,7 @@ static int ext4_write_end(struct file *file,
 
 	copied = block_write_end(file, mapping, pos, len, copied, page, fsdata);
 	/*
-	 * it's important to update i_size while still holding page lock:
+	 * it's important to update i_size while still holding folio lock:
 	 * page writeout could otherwise come in and zero beyond i_size.
 	 *
 	 * If FS_IOC_ENABLE_VERITY is running on this inode, then Merkle tree
@@ -1312,15 +1313,15 @@ static int ext4_write_end(struct file *file,
 	 */
 	if (!verity)
 		i_size_changed = ext4_update_inode_size(inode, pos + copied);
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 
 	if (old_size < pos && !verity)
 		pagecache_isize_extended(inode, old_size, pos);
 	/*
-	 * Don't mark the inode dirty under page lock. First, it unnecessarily
-	 * makes the holding time of page lock longer. Second, it forces lock
-	 * ordering of page lock and transaction start for journaling
+	 * Don't mark the inode dirty under folio lock. First, it unnecessarily
+	 * makes the holding time of folio lock longer. Second, it forces lock
+	 * ordering of folio lock and transaction start for journaling
 	 * filesystems.
 	 */
 	if (i_size_changed)
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 18/29] ext4: Use a folio in ext4_journalled_write_end()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (16 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 17/29] ext4: Convert ext4_write_end() " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 19/29] ext4: Convert ext4_journalled_zero_new_buffers() to use a folio Matthew Wilcox (Oracle)
                   ` (11 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Convert the incoming page to a folio to remove a few calls to
compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inode.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index af2bfabfbd27..172b4ca43981 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1392,6 +1392,7 @@ static int ext4_journalled_write_end(struct file *file,
 				     loff_t pos, unsigned len, unsigned copied,
 				     struct page *page, void *fsdata)
 {
+	struct folio *folio = page_folio(page);
 	handle_t *handle = ext4_journal_current_handle();
 	struct inode *inode = mapping->host;
 	loff_t old_size = inode->i_size;
@@ -1410,25 +1411,26 @@ static int ext4_journalled_write_end(struct file *file,
 	if (ext4_has_inline_data(inode))
 		return ext4_write_inline_data_end(inode, pos, len, copied, page);
 
-	if (unlikely(copied < len) && !PageUptodate(page)) {
+	if (unlikely(copied < len) && !folio_test_uptodate(folio)) {
 		copied = 0;
 		ext4_journalled_zero_new_buffers(handle, inode, page, from, to);
 	} else {
 		if (unlikely(copied < len))
 			ext4_journalled_zero_new_buffers(handle, inode, page,
 							 from + copied, to);
-		ret = ext4_walk_page_buffers(handle, inode, page_buffers(page),
+		ret = ext4_walk_page_buffers(handle, inode,
+					     folio_buffers(folio),
 					     from, from + copied, &partial,
 					     write_end_fn);
 		if (!partial)
-			SetPageUptodate(page);
+			folio_mark_uptodate(folio);
 	}
 	if (!verity)
 		size_changed = ext4_update_inode_size(inode, pos + copied);
 	ext4_set_inode_state(inode, EXT4_STATE_JDATA);
 	EXT4_I(inode)->i_datasync_tid = handle->h_transaction->t_tid;
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 
 	if (old_size < pos && !verity)
 		pagecache_isize_extended(inode, old_size, pos);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 19/29] ext4: Convert ext4_journalled_zero_new_buffers() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (17 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 18/29] ext4: Use a folio in ext4_journalled_write_end() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 20/29] ext4: Convert __ext4_block_zero_page_range() " Matthew Wilcox (Oracle)
                   ` (10 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Remove a call to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inode.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 172b4ca43981..92418efe1afe 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1359,24 +1359,24 @@ static int ext4_write_end(struct file *file,
  */
 static void ext4_journalled_zero_new_buffers(handle_t *handle,
 					    struct inode *inode,
-					    struct page *page,
+					    struct folio *folio,
 					    unsigned from, unsigned to)
 {
 	unsigned int block_start = 0, block_end;
 	struct buffer_head *head, *bh;
 
-	bh = head = page_buffers(page);
+	bh = head = folio_buffers(folio);
 	do {
 		block_end = block_start + bh->b_size;
 		if (buffer_new(bh)) {
 			if (block_end > from && block_start < to) {
-				if (!PageUptodate(page)) {
+				if (!folio_test_uptodate(folio)) {
 					unsigned start, size;
 
 					start = max(from, block_start);
 					size = min(to, block_end) - start;
 
-					zero_user(page, start, size);
+					folio_zero_range(folio, start, size);
 					write_end_fn(handle, inode, bh);
 				}
 				clear_buffer_new(bh);
@@ -1413,10 +1413,11 @@ static int ext4_journalled_write_end(struct file *file,
 
 	if (unlikely(copied < len) && !folio_test_uptodate(folio)) {
 		copied = 0;
-		ext4_journalled_zero_new_buffers(handle, inode, page, from, to);
+		ext4_journalled_zero_new_buffers(handle, inode, folio,
+						 from, to);
 	} else {
 		if (unlikely(copied < len))
-			ext4_journalled_zero_new_buffers(handle, inode, page,
+			ext4_journalled_zero_new_buffers(handle, inode, folio,
 							 from + copied, to);
 		ret = ext4_walk_page_buffers(handle, inode,
 					     folio_buffers(folio),
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 20/29] ext4: Convert __ext4_block_zero_page_range() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (18 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 19/29] ext4: Convert ext4_journalled_zero_new_buffers() to use a folio Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 21/29] ext4: Convert ext4_page_nomap_can_writeout to ext4_folio_nomap_can_writeout Matthew Wilcox (Oracle)
                   ` (9 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel
  Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel, Ritesh Harjani

Use folio APIs throughout.  Saves many calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 fs/ext4/inode.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 92418efe1afe..a81540a6e8c6 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3669,23 +3669,26 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 	ext4_lblk_t iblock;
 	struct inode *inode = mapping->host;
 	struct buffer_head *bh;
-	struct page *page;
+	struct folio *folio;
 	int err = 0;
 
-	page = find_or_create_page(mapping, from >> PAGE_SHIFT,
-				   mapping_gfp_constraint(mapping, ~__GFP_FS));
-	if (!page)
+	folio = __filemap_get_folio(mapping, from >> PAGE_SHIFT,
+				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+				    mapping_gfp_constraint(mapping, ~__GFP_FS));
+	if (!folio)
 		return -ENOMEM;
 
 	blocksize = inode->i_sb->s_blocksize;
 
 	iblock = index << (PAGE_SHIFT - inode->i_sb->s_blocksize_bits);
 
-	if (!page_has_buffers(page))
-		create_empty_buffers(page, blocksize, 0);
+	bh = folio_buffers(folio);
+	if (!bh) {
+		create_empty_buffers(&folio->page, blocksize, 0);
+		bh = folio_buffers(folio);
+	}
 
 	/* Find the buffer that contains "offset" */
-	bh = page_buffers(page);
 	pos = blocksize;
 	while (offset >= pos) {
 		bh = bh->b_this_page;
@@ -3707,7 +3710,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 	}
 
 	/* Ok, it's mapped. Make sure it's up-to-date */
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		set_buffer_uptodate(bh);
 
 	if (!buffer_uptodate(bh)) {
@@ -3717,7 +3720,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 		if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
 			/* We expect the key to be set. */
 			BUG_ON(!fscrypt_has_encryption_key(inode));
-			err = fscrypt_decrypt_pagecache_blocks(page_folio(page),
+			err = fscrypt_decrypt_pagecache_blocks(folio,
 							       blocksize,
 							       bh_offset(bh));
 			if (err) {
@@ -3733,7 +3736,7 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 		if (err)
 			goto unlock;
 	}
-	zero_user(page, offset, length);
+	folio_zero_range(folio, offset, length);
 	BUFFER_TRACE(bh, "zeroed end of block");
 
 	if (ext4_should_journal_data(inode)) {
@@ -3747,8 +3750,8 @@ static int __ext4_block_zero_page_range(handle_t *handle,
 	}
 
 unlock:
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 	return err;
 }
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
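
The folio_buffers()/create_empty_buffers() dance above recurs in
several patches of this series: create_empty_buffers() has not been
converted yet and still takes a struct page, so the folio is passed as
&folio->page for that one call, and folio_buffers() is re-read to pick
up the newly attached buffer_heads.  As a standalone sketch (the helper
name is hypothetical):

static struct buffer_head *folio_get_or_create_buffers(struct folio *folio,
						unsigned long blocksize)
{
	struct buffer_head *bh = folio_buffers(folio);

	if (!bh) {
		/* create_empty_buffers() still takes a page for now. */
		create_empty_buffers(&folio->page, blocksize, 0);
		bh = folio_buffers(folio);
	}
	return bh;
}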

* [PATCH v2 21/29] ext4: Convert ext4_page_nomap_can_writeout to ext4_folio_nomap_can_writeout
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (19 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 20/29] ext4: Convert __ext4_block_zero_page_range() " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 22/29] ext4: Use a folio in ext4_da_write_begin() Matthew Wilcox (Oracle)
                   ` (8 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Its one caller already uses a folio.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
---
 fs/ext4/inode.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index a81540a6e8c6..acb2345fb379 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2335,12 +2335,12 @@ static int ext4_da_writepages_trans_blocks(struct inode *inode)
 				MAX_WRITEPAGES_EXTENT_LEN + bpp - 1, bpp);
 }
 
-/* Return true if the page needs to be written as part of transaction commit */
-static bool ext4_page_nomap_can_writeout(struct page *page)
+/* Return true if the folio needs to be written as part of transaction commit */
+static bool ext4_folio_nomap_can_writeout(struct folio *folio)
 {
 	struct buffer_head *bh, *head;
 
-	bh = head = page_buffers(page);
+	bh = head = folio_buffers(folio);
 	do {
 		if (buffer_dirty(bh) && buffer_mapped(bh) && !buffer_delay(bh))
 			return true;
@@ -2533,7 +2533,7 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 			 * range operations before discarding the page cache.
 			 */
 			if (!mpd->can_map) {
-				if (ext4_page_nomap_can_writeout(&folio->page)) {
+				if (ext4_folio_nomap_can_writeout(folio)) {
 					WARN_ON_ONCE(sb->s_writers.frozen ==
 						     SB_FREEZE_COMPLETE);
 					err = mpage_submit_folio(mpd, folio);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 22/29] ext4: Use a folio in ext4_da_write_begin()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (20 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 21/29] ext4: Convert ext4_page_nomap_can_writeout to ext4_folio_nomap_can_writeout Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 23/29] ext4: Convert ext4_mpage_readpages() to work on folios Matthew Wilcox (Oracle)
                   ` (7 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Remove a few calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inode.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index acb2345fb379..c88ce6f43c01 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2902,7 +2902,7 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
 			       struct page **pagep, void **fsdata)
 {
 	int ret, retries = 0;
-	struct page *page;
+	struct folio *folio;
 	pgoff_t index;
 	struct inode *inode = mapping->host;
 
@@ -2929,22 +2929,23 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
 	}
 
 retry:
-	page = grab_cache_page_write_begin(mapping, index);
-	if (!page)
+	folio = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+			mapping_gfp_mask(mapping));
+	if (!folio)
 		return -ENOMEM;
 
-	/* In case writeback began while the page was unlocked */
-	wait_for_stable_page(page);
+	/* In case writeback began while the folio was unlocked */
+	folio_wait_stable(folio);
 
 #ifdef CONFIG_FS_ENCRYPTION
-	ret = ext4_block_write_begin(page, pos, len,
+	ret = ext4_block_write_begin(&folio->page, pos, len,
 				     ext4_da_get_block_prep);
 #else
-	ret = __block_write_begin(page, pos, len, ext4_da_get_block_prep);
+	ret = __block_write_begin(&folio->page, pos, len, ext4_da_get_block_prep);
 #endif
 	if (ret < 0) {
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 		/*
 		 * block_write_begin may have instantiated a few blocks
 		 * outside i_size.  Trim these off again. Don't need
@@ -2959,7 +2960,7 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
 		return ret;
 	}
 
-	*pagep = page;
+	*pagep = &folio->page;
 	return ret;
 }
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 23/29] ext4: Convert ext4_mpage_readpages() to work on folios
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (21 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 22/29] ext4: Use a folio in ext4_da_write_begin() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 22:29   ` Eric Biggers
  2023-03-24 18:01 ` [PATCH v2 24/29] ext4: Convert ext4_block_write_begin() to take a folio Matthew Wilcox (Oracle)
                   ` (6 subsequent siblings)
  29 siblings, 1 reply; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

This definitely doesn't include support for large folios; there
are all kinds of assumptions about the number of buffers attached
to a folio.  But it does remove several calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/ext4.h     |  2 +-
 fs/ext4/inode.c    |  7 +++---
 fs/ext4/readpage.c | 58 ++++++++++++++++++++++------------------------
 3 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index 1de5d838996a..57357ef1659b 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3647,7 +3647,7 @@ static inline void ext4_set_de_type(struct super_block *sb,
 
 /* readpages.c */
 extern int ext4_mpage_readpages(struct inode *inode,
-		struct readahead_control *rac, struct page *page);
+		struct readahead_control *rac, struct folio *folio);
 extern int __init ext4_init_post_read_processing(void);
 extern void ext4_exit_post_read_processing(void);
 
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index c88ce6f43c01..116acc5fe00c 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3154,17 +3154,16 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block)
 
 static int ext4_read_folio(struct file *file, struct folio *folio)
 {
-	struct page *page = &folio->page;
 	int ret = -EAGAIN;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 
-	trace_ext4_readpage(page);
+	trace_ext4_readpage(&folio->page);
 
 	if (ext4_has_inline_data(inode))
 		ret = ext4_readpage_inline(inode, folio);
 
 	if (ret == -EAGAIN)
-		return ext4_mpage_readpages(inode, NULL, page);
+		return ext4_mpage_readpages(inode, NULL, folio);
 
 	return ret;
 }
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index c61dc8a7c014..fed4ddb652df 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -218,7 +218,7 @@ static inline loff_t ext4_readpage_limit(struct inode *inode)
 }
 
 int ext4_mpage_readpages(struct inode *inode,
-		struct readahead_control *rac, struct page *page)
+		struct readahead_control *rac, struct folio *folio)
 {
 	struct bio *bio = NULL;
 	sector_t last_block_in_bio = 0;
@@ -247,16 +247,15 @@ int ext4_mpage_readpages(struct inode *inode,
 		int fully_mapped = 1;
 		unsigned first_hole = blocks_per_page;
 
-		if (rac) {
-			page = readahead_page(rac);
-			prefetchw(&page->flags);
-		}
+		if (rac)
+			folio = readahead_folio(rac);
+		prefetchw(&folio->flags);
 
-		if (page_has_buffers(page))
+		if (folio_buffers(folio))
 			goto confused;
 
 		block_in_file = next_block =
-			(sector_t)page->index << (PAGE_SHIFT - blkbits);
+			(sector_t)folio->index << (PAGE_SHIFT - blkbits);
 		last_block = block_in_file + nr_pages * blocks_per_page;
 		last_block_in_file = (ext4_readpage_limit(inode) +
 				      blocksize - 1) >> blkbits;
@@ -290,7 +289,7 @@ int ext4_mpage_readpages(struct inode *inode,
 
 		/*
 		 * Then do more ext4_map_blocks() calls until we are
-		 * done with this page.
+		 * done with this folio.
 		 */
 		while (page_block < blocks_per_page) {
 			if (block_in_file < last_block) {
@@ -299,10 +298,10 @@ int ext4_mpage_readpages(struct inode *inode,
 
 				if (ext4_map_blocks(NULL, inode, &map, 0) < 0) {
 				set_error_page:
-					SetPageError(page);
-					zero_user_segment(page, 0,
-							  PAGE_SIZE);
-					unlock_page(page);
+					folio_set_error(folio);
+					folio_zero_segment(folio, 0,
+							  folio_size(folio));
+					folio_unlock(folio);
 					goto next_page;
 				}
 			}
@@ -333,22 +332,22 @@ int ext4_mpage_readpages(struct inode *inode,
 			}
 		}
 		if (first_hole != blocks_per_page) {
-			zero_user_segment(page, first_hole << blkbits,
-					  PAGE_SIZE);
+			folio_zero_segment(folio, first_hole << blkbits,
+					  folio_size(folio));
 			if (first_hole == 0) {
-				if (ext4_need_verity(inode, page->index) &&
-				    !fsverity_verify_page(page))
+				if (ext4_need_verity(inode, folio->index) &&
+				    !fsverity_verify_page(&folio->page))
 					goto set_error_page;
-				SetPageUptodate(page);
-				unlock_page(page);
-				goto next_page;
+				folio_mark_uptodate(folio);
+				folio_unlock(folio);
+				continue;
 			}
 		} else if (fully_mapped) {
-			SetPageMappedToDisk(page);
+			folio_set_mappedtodisk(folio);
 		}
 
 		/*
-		 * This page will go to BIO.  Do we need to send this
+		 * This folio will go to BIO.  Do we need to send this
 		 * BIO off first?
 		 */
 		if (bio && (last_block_in_bio != blocks[0] - 1 ||
@@ -366,7 +365,7 @@ int ext4_mpage_readpages(struct inode *inode,
 					REQ_OP_READ, GFP_KERNEL);
 			fscrypt_set_bio_crypt_ctx(bio, inode, next_block,
 						  GFP_KERNEL);
-			ext4_set_bio_post_read_ctx(bio, inode, page->index);
+			ext4_set_bio_post_read_ctx(bio, inode, folio->index);
 			bio->bi_iter.bi_sector = blocks[0] << (blkbits - 9);
 			bio->bi_end_io = mpage_end_io;
 			if (rac)
@@ -374,7 +373,7 @@ int ext4_mpage_readpages(struct inode *inode,
 		}
 
 		length = first_hole << blkbits;
-		if (bio_add_page(bio, page, length, 0) < length)
+		if (!bio_add_folio(bio, folio, length, 0))
 			goto submit_and_realloc;
 
 		if (((map.m_flags & EXT4_MAP_BOUNDARY) &&
@@ -384,19 +383,18 @@ int ext4_mpage_readpages(struct inode *inode,
 			bio = NULL;
 		} else
 			last_block_in_bio = blocks[blocks_per_page - 1];
-		goto next_page;
+		continue;
 	confused:
 		if (bio) {
 			submit_bio(bio);
 			bio = NULL;
 		}
-		if (!PageUptodate(page))
-			block_read_full_folio(page_folio(page), ext4_get_block);
+		if (!folio_test_uptodate(folio))
+			block_read_full_folio(folio, ext4_get_block);
 		else
-			unlock_page(page);
-	next_page:
-		if (rac)
-			put_page(page);
+			folio_unlock(folio);
+next_page:
+		; /* A label shall be followed by a statement until C23 */
 	}
 	if (bio)
 		submit_bio(bio);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
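
One detail worth calling out in the conversion above: readahead_page()
handed back a page with an extra reference that the loop had to drop at
next_page, while readahead_folio() drops that reference itself.  That
is why the old "if (rac) put_page(page);" disappears and next_page
becomes an empty statement.  A hedged sketch of the iteration pattern,
not tied to ext4 (the function name is hypothetical):

static void read_batch_sketch(struct readahead_control *rac)
{
	struct folio *folio;

	while ((folio = readahead_folio(rac)) != NULL) {
		/* The folio comes back locked; unlike readahead_page(),
		 * no extra reference is left for us to drop. */
		prefetchw(&folio->flags);

		/* ... map blocks, add the folio to a bio with
		 * bio_add_folio(), unlock on I/O completion ... */
	}
}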

* [PATCH v2 24/29] ext4: Convert ext4_block_write_begin() to take a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (22 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 23/29] ext4: Convert ext4_mpage_readpages() to work on folios Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 25/29] ext4: Use a folio in ext4_page_mkwrite() Matthew Wilcox (Oracle)
                   ` (5 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel
  Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel, Ritesh Harjani

All the callers now have a folio, so pass that in and operate on folios.
Removes four calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
 fs/ext4/inode.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 116acc5fe00c..cf2b89a819cb 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1030,12 +1030,12 @@ int do_journal_get_write_access(handle_t *handle, struct inode *inode,
 }
 
 #ifdef CONFIG_FS_ENCRYPTION
-static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
+static int ext4_block_write_begin(struct folio *folio, loff_t pos, unsigned len,
 				  get_block_t *get_block)
 {
 	unsigned from = pos & (PAGE_SIZE - 1);
 	unsigned to = from + len;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	unsigned block_start, block_end;
 	sector_t block;
 	int err = 0;
@@ -1045,22 +1045,24 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 	int nr_wait = 0;
 	int i;
 
-	BUG_ON(!PageLocked(page));
+	BUG_ON(!folio_test_locked(folio));
 	BUG_ON(from > PAGE_SIZE);
 	BUG_ON(to > PAGE_SIZE);
 	BUG_ON(from > to);
 
-	if (!page_has_buffers(page))
-		create_empty_buffers(page, blocksize, 0);
-	head = page_buffers(page);
+	head = folio_buffers(folio);
+	if (!head) {
+		create_empty_buffers(&folio->page, blocksize, 0);
+		head = folio_buffers(folio);
+	}
 	bbits = ilog2(blocksize);
-	block = (sector_t)page->index << (PAGE_SHIFT - bbits);
+	block = (sector_t)folio->index << (PAGE_SHIFT - bbits);
 
 	for (bh = head, block_start = 0; bh != head || !block_start;
 	    block++, block_start = block_end, bh = bh->b_this_page) {
 		block_end = block_start + blocksize;
 		if (block_end <= from || block_start >= to) {
-			if (PageUptodate(page)) {
+			if (folio_test_uptodate(folio)) {
 				set_buffer_uptodate(bh);
 			}
 			continue;
@@ -1073,19 +1075,20 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 			if (err)
 				break;
 			if (buffer_new(bh)) {
-				if (PageUptodate(page)) {
+				if (folio_test_uptodate(folio)) {
 					clear_buffer_new(bh);
 					set_buffer_uptodate(bh);
 					mark_buffer_dirty(bh);
 					continue;
 				}
 				if (block_end > to || block_start < from)
-					zero_user_segments(page, to, block_end,
-							   block_start, from);
+					folio_zero_segments(folio, to,
+							    block_end,
+							    block_start, from);
 				continue;
 			}
 		}
-		if (PageUptodate(page)) {
+		if (folio_test_uptodate(folio)) {
 			set_buffer_uptodate(bh);
 			continue;
 		}
@@ -1105,14 +1108,13 @@ static int ext4_block_write_begin(struct page *page, loff_t pos, unsigned len,
 			err = -EIO;
 	}
 	if (unlikely(err)) {
-		page_zero_new_buffers(page, from, to);
+		page_zero_new_buffers(&folio->page, from, to);
 	} else if (fscrypt_inode_uses_fs_layer_crypto(inode)) {
 		for (i = 0; i < nr_wait; i++) {
 			int err2;
 
-			err2 = fscrypt_decrypt_pagecache_blocks(page_folio(page),
-								blocksize,
-								bh_offset(wait[i]));
+			err2 = fscrypt_decrypt_pagecache_blocks(folio,
+						blocksize, bh_offset(wait[i]));
 			if (err2) {
 				clear_buffer_uptodate(wait[i]);
 				err = err2;
@@ -1206,11 +1208,10 @@ static int ext4_write_begin(struct file *file, struct address_space *mapping,
 
 #ifdef CONFIG_FS_ENCRYPTION
 	if (ext4_should_dioread_nolock(inode))
-		ret = ext4_block_write_begin(&folio->page, pos, len,
+		ret = ext4_block_write_begin(folio, pos, len,
 					     ext4_get_block_unwritten);
 	else
-		ret = ext4_block_write_begin(&folio->page, pos, len,
-					     ext4_get_block);
+		ret = ext4_block_write_begin(folio, pos, len, ext4_get_block);
 #else
 	if (ext4_should_dioread_nolock(inode))
 		ret = __block_write_begin(&folio->page, pos, len,
@@ -2938,8 +2939,7 @@ static int ext4_da_write_begin(struct file *file, struct address_space *mapping,
 	folio_wait_stable(folio);
 
 #ifdef CONFIG_FS_ENCRYPTION
-	ret = ext4_block_write_begin(&folio->page, pos, len,
-				     ext4_da_get_block_prep);
+	ret = ext4_block_write_begin(folio, pos, len, ext4_da_get_block_prep);
 #else
 	ret = __block_write_begin(&folio->page, pos, len, ext4_da_get_block_prep);
 #endif
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 25/29] ext4: Use a folio in ext4_page_mkwrite()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (23 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 24/29] ext4: Convert ext4_block_write_begin() to take a folio Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 26/29] ext4: Use a folio iterator in __read_end_io() Matthew Wilcox (Oracle)
                   ` (4 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Convert to the folio API, saving a few calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/inode.c | 42 ++++++++++++++++++++----------------------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index cf2b89a819cb..f0ebf211983d 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -6075,7 +6075,7 @@ static int ext4_bh_unmapped(handle_t *handle, struct inode *inode,
 vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = vmf->page;
+	struct folio *folio = page_folio(vmf->page);
 	loff_t size;
 	unsigned long len;
 	int err;
@@ -6119,19 +6119,18 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 		goto out_ret;
 	}
 
-	lock_page(page);
+	folio_lock(folio);
 	size = i_size_read(inode);
 	/* Page got truncated from under us? */
-	if (page->mapping != mapping || page_offset(page) > size) {
-		unlock_page(page);
+	if (folio->mapping != mapping || folio_pos(folio) > size) {
+		folio_unlock(folio);
 		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
 
-	if (page->index == size >> PAGE_SHIFT)
-		len = size & ~PAGE_MASK;
-	else
-		len = PAGE_SIZE;
+	len = folio_size(folio);
+	if (folio_pos(folio) + len > size)
+		len = size - folio_pos(folio);
 	/*
 	 * Return if we have all the buffers mapped. This avoids the need to do
 	 * journal_start/journal_stop which can block and take a long time
@@ -6139,17 +6138,17 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 	 * This cannot be done for data journalling, as we have to add the
 	 * inode to the transaction's list to writeprotect pages on commit.
 	 */
-	if (page_has_buffers(page)) {
-		if (!ext4_walk_page_buffers(NULL, inode, page_buffers(page),
+	if (folio_buffers(folio)) {
+		if (!ext4_walk_page_buffers(NULL, inode, folio_buffers(folio),
 					    0, len, NULL,
 					    ext4_bh_unmapped)) {
 			/* Wait so that we don't change page under IO */
-			wait_for_stable_page(page);
+			folio_wait_stable(folio);
 			ret = VM_FAULT_LOCKED;
 			goto out;
 		}
 	}
-	unlock_page(page);
+	folio_unlock(folio);
 	/* OK, we need to fill the hole... */
 	if (ext4_should_dioread_nolock(inode))
 		get_block = ext4_get_block_unwritten;
@@ -6170,26 +6169,25 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 	if (!ext4_should_journal_data(inode)) {
 		err = block_page_mkwrite(vma, vmf, get_block);
 	} else {
-		lock_page(page);
+		folio_lock(folio);
 		size = i_size_read(inode);
 		/* Page got truncated from under us? */
-		if (page->mapping != mapping || page_offset(page) > size) {
+		if (folio->mapping != mapping || folio_pos(folio) > size) {
 			ret = VM_FAULT_NOPAGE;
 			goto out_error;
 		}
 
-		if (page->index == size >> PAGE_SHIFT)
-			len = size & ~PAGE_MASK;
-		else
-			len = PAGE_SIZE;
+		len = folio_size(folio);
+		if (folio_pos(folio) + len > size)
+			len = size - folio_pos(folio);
 
-		err = __block_write_begin(page, 0, len, ext4_get_block);
+		err = __block_write_begin(&folio->page, 0, len, ext4_get_block);
 		if (!err) {
 			ret = VM_FAULT_SIGBUS;
-			if (ext4_journal_page_buffers(handle, page, len))
+			if (ext4_journal_page_buffers(handle, &folio->page, len))
 				goto out_error;
 		} else {
-			unlock_page(page);
+			folio_unlock(folio);
 		}
 	}
 	ext4_journal_stop(handle);
@@ -6202,7 +6200,7 @@ vm_fault_t ext4_page_mkwrite(struct vm_fault *vmf)
 	sb_end_pagefault(inode->i_sb);
 	return ret;
 out_error:
-	unlock_page(page);
+	folio_unlock(folio);
 	ext4_journal_stop(handle);
 	goto out;
 }
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
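
The length calculation above is what prepares ext4_page_mkwrite() for
arbitrary folio sizes: instead of special-casing the last PAGE_SIZE
page with "size & ~PAGE_MASK", it clamps the folio's byte range against
i_size.  The caller has already bailed out when the folio starts beyond
i_size, so the result cannot go negative.  In isolation (hypothetical
helper name):

/* How many bytes of this folio lie below i_size? */
static size_t folio_len_below_size(struct folio *folio, loff_t size)
{
	size_t len = folio_size(folio);

	if (folio_pos(folio) + len > size)
		len = size - folio_pos(folio);
	return len;
}

For example, a 16KiB folio starting at file offset 32KiB with an i_size
of 40KiB yields len = 8KiB; for a plain 4KiB page on the last page of
the file it degenerates to the old "size & ~PAGE_MASK" value.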

* [PATCH v2 26/29] ext4: Use a folio iterator in __read_end_io()
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (24 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 25/29] ext4: Use a folio in ext4_page_mkwrite() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 27/29] ext4: Convert mext_page_mkuptodate() to take a folio Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Iterate once per folio, not once per page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/readpage.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index fed4ddb652df..6f46823fba61 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -68,18 +68,16 @@ struct bio_post_read_ctx {
 
 static void __read_end_io(struct bio *bio)
 {
-	struct page *page;
-	struct bio_vec *bv;
-	struct bvec_iter_all iter_all;
+	struct folio_iter fi;
 
-	bio_for_each_segment_all(bv, bio, iter_all) {
-		page = bv->bv_page;
+	bio_for_each_folio_all(fi, bio) {
+		struct folio *folio = fi.folio;
 
 		if (bio->bi_status)
-			ClearPageUptodate(page);
+			folio_clear_uptodate(folio);
 		else
-			SetPageUptodate(page);
-		unlock_page(page);
+			folio_mark_uptodate(folio);
+		folio_unlock(folio);
 	}
 	if (bio->bi_private)
 		mempool_free(bio->bi_private, bio_post_read_ctx_pool);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 27/29] ext4: Convert mext_page_mkuptodate() to take a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (25 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 26/29] ext4: Use a folio iterator in __read_end_io() Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 28/29] ext4: Convert pagecache_read() to use " Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Use a folio throughout.  Does not support large folios due to
an array sized for MAX_BUF_PER_PAGE, but it does remove a few
calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/move_extent.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index a84a794fed56..b5af2fc03b2f 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -168,25 +168,27 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,
 
 /* Force page buffers uptodate w/o dropping page's lock */
 static int
-mext_page_mkuptodate(struct page *page, unsigned from, unsigned to)
+mext_page_mkuptodate(struct folio *folio, unsigned from, unsigned to)
 {
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = folio->mapping->host;
 	sector_t block;
 	struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
 	unsigned int blocksize, block_start, block_end;
 	int i, err,  nr = 0, partial = 0;
-	BUG_ON(!PageLocked(page));
-	BUG_ON(PageWriteback(page));
+	BUG_ON(!folio_test_locked(folio));
+	BUG_ON(folio_test_writeback(folio));
 
-	if (PageUptodate(page))
+	if (folio_test_uptodate(folio))
 		return 0;
 
 	blocksize = i_blocksize(inode);
-	if (!page_has_buffers(page))
-		create_empty_buffers(page, blocksize, 0);
+	head = folio_buffers(folio);
+	if (!head) {
+		create_empty_buffers(&folio->page, blocksize, 0);
+		head = folio_buffers(folio);
+	}
 
-	head = page_buffers(page);
-	block = (sector_t)page->index << (PAGE_SHIFT - inode->i_blkbits);
+	block = (sector_t)folio->index << (PAGE_SHIFT - inode->i_blkbits);
 	for (bh = head, block_start = 0; bh != head || !block_start;
 	     block++, block_start = block_end, bh = bh->b_this_page) {
 		block_end = block_start + blocksize;
@@ -200,11 +202,11 @@ mext_page_mkuptodate(struct page *page, unsigned from, unsigned to)
 		if (!buffer_mapped(bh)) {
 			err = ext4_get_block(inode, block, bh, 0);
 			if (err) {
-				SetPageError(page);
+				folio_set_error(folio);
 				return err;
 			}
 			if (!buffer_mapped(bh)) {
-				zero_user(page, block_start, blocksize);
+				folio_zero_range(folio, block_start, blocksize);
 				set_buffer_uptodate(bh);
 				continue;
 			}
@@ -226,7 +228,7 @@ mext_page_mkuptodate(struct page *page, unsigned from, unsigned to)
 	}
 out:
 	if (!partial)
-		SetPageUptodate(page);
+		folio_mark_uptodate(folio);
 	return 0;
 }
 
@@ -354,7 +356,7 @@ move_extent_per_page(struct file *o_filp, struct inode *donor_inode,
 		goto unlock_folios;
 	}
 data_copy:
-	*err = mext_page_mkuptodate(&folio[0]->page, from, from + replaced_size);
+	*err = mext_page_mkuptodate(folio[0], from, from + replaced_size);
 	if (*err)
 		goto unlock_folios;
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH v2 28/29] ext4: Convert pagecache_read() to use a folio
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (26 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 27/29] ext4: Convert mext_page_mkuptodate() to take a folio Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-03-24 18:01 ` [PATCH v2 29/29] ext4: Use a folio in ext4_read_merkle_tree_page Matthew Wilcox (Oracle)
  2023-04-15  2:29 ` [PATCH v2 00/29] Convert most of ext4 to folios Theodore Ts'o
  29 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

Use the folio API and support folios of arbitrary sizes.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/verity.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
index e4da1704438e..afe847c967a4 100644
--- a/fs/ext4/verity.c
+++ b/fs/ext4/verity.c
@@ -42,18 +42,16 @@ static int pagecache_read(struct inode *inode, void *buf, size_t count,
 			  loff_t pos)
 {
 	while (count) {
-		size_t n = min_t(size_t, count,
-				 PAGE_SIZE - offset_in_page(pos));
-		struct page *page;
+		struct folio *folio;
+		size_t n;
 
-		page = read_mapping_page(inode->i_mapping, pos >> PAGE_SHIFT,
+		folio = read_mapping_folio(inode->i_mapping, pos >> PAGE_SHIFT,
 					 NULL);
-		if (IS_ERR(page))
-			return PTR_ERR(page);
-
-		memcpy_from_page(buf, page, offset_in_page(pos), n);
+		if (IS_ERR(folio))
+			return PTR_ERR(folio);
 
-		put_page(page);
+		n = memcpy_from_file_folio(buf, folio, pos, count);
+		folio_put(folio);
 
 		buf += n;
 		pos += n;
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
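
memcpy_from_file_folio() does the clamping that pagecache_read() used
to do by hand: it copies from file position pos within the folio, stops
at the end of the folio or at count (whichever comes first), and
returns the number of bytes copied, so the loop simply advances by its
return value.  A small usage sketch along the lines of the patched
function (the helper name is hypothetical):

static int pagecache_read_sketch(struct address_space *mapping, char *buf,
				 size_t count, loff_t pos)
{
	while (count) {
		struct folio *folio;
		size_t n;

		folio = read_mapping_folio(mapping, pos >> PAGE_SHIFT, NULL);
		if (IS_ERR(folio))
			return PTR_ERR(folio);

		/* Never copies past the end of the folio; returns how
		 * much was actually copied. */
		n = memcpy_from_file_folio(buf, folio, pos, count);
		folio_put(folio);

		buf += n;
		pos += n;
		count -= n;
	}
	return 0;
}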

* [PATCH v2 29/29] ext4: Use a folio in ext4_read_merkle_tree_page
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (27 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 28/29] ext4: Convert pagecache_read() to use " Matthew Wilcox (Oracle)
@ 2023-03-24 18:01 ` Matthew Wilcox (Oracle)
  2023-04-18  6:50   ` Eric Biggers
  2023-04-15  2:29 ` [PATCH v2 00/29] Convert most of ext4 to folios Theodore Ts'o
  29 siblings, 1 reply; 38+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-03-24 18:01 UTC (permalink / raw)
  To: tytso, adilger.kernel; +Cc: Matthew Wilcox (Oracle), linux-ext4, linux-fsdevel

This is an implementation of fsverity_operations->read_merkle_tree_page,
so it must still return the precise page asked for, but we can use the
folio API to reduce the number of conversions between folios & pages.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/verity.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
index afe847c967a4..3b01247066dd 100644
--- a/fs/ext4/verity.c
+++ b/fs/ext4/verity.c
@@ -361,21 +361,21 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
 					       pgoff_t index,
 					       unsigned long num_ra_pages)
 {
-	struct page *page;
+	struct folio *folio;
 
 	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
 
-	page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
-	if (!page || !PageUptodate(page)) {
+	folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
+	if (!folio || !folio_test_uptodate(folio)) {
 		DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
 
-		if (page)
-			put_page(page);
+		if (folio)
+			folio_put(folio);
 		else if (num_ra_pages > 1)
 			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
-		page = read_mapping_page(inode->i_mapping, index, NULL);
+		folio = read_mapping_folio(inode->i_mapping, index, NULL);
 	}
-	return page;
+	return folio_file_page(folio, index);
 }
 
 static int ext4_write_merkle_tree_block(struct inode *inode, const void *buf,
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 38+ messages in thread
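
Because ->read_merkle_tree_page must hand back the specific page that
fsverity asked for, the function above ends with folio_file_page(),
which picks the page at the requested index out of a folio that may
span several pages.  A compact sketch of that tail end with the
read_mapping_folio() error handling made explicit (the helper name is
hypothetical):

static struct page *merkle_tree_page_sketch(struct address_space *mapping,
					    pgoff_t index)
{
	struct folio *folio = read_mapping_folio(mapping, index, NULL);

	if (IS_ERR(folio))
		return ERR_CAST(folio);

	/* A large folio covers several indices; return the exact page. */
	return folio_file_page(folio, index);
}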

* Re: [PATCH v2 23/29] ext4: Convert ext4_mpage_readpages() to work on folios
  2023-03-24 18:01 ` [PATCH v2 23/29] ext4: Convert ext4_mpage_readpages() to work on folios Matthew Wilcox (Oracle)
@ 2023-03-24 22:29   ` Eric Biggers
  2023-03-26  3:25     ` Matthew Wilcox
  0 siblings, 1 reply; 38+ messages in thread
From: Eric Biggers @ 2023-03-24 22:29 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: tytso, adilger.kernel, linux-ext4, linux-fsdevel

On Fri, Mar 24, 2023 at 06:01:23PM +0000, Matthew Wilcox (Oracle) wrote:
>  		if (first_hole != blocks_per_page) {
> -			zero_user_segment(page, first_hole << blkbits,
> -					  PAGE_SIZE);
> +			folio_zero_segment(folio, first_hole << blkbits,
> +					  folio_size(folio));
>  			if (first_hole == 0) {
> -				if (ext4_need_verity(inode, page->index) &&
> -				    !fsverity_verify_page(page))
> +				if (ext4_need_verity(inode, folio->index) &&
> +				    !fsverity_verify_page(&folio->page))
>  					goto set_error_page;

This can use fsverity_verify_folio().

- Eric

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2 23/29] ext4: Convert ext4_mpage_readpages() to work on folios
  2023-03-24 22:29   ` Eric Biggers
@ 2023-03-26  3:25     ` Matthew Wilcox
  0 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox @ 2023-03-26  3:25 UTC (permalink / raw)
  To: Eric Biggers; +Cc: tytso, adilger.kernel, linux-ext4, linux-fsdevel

On Fri, Mar 24, 2023 at 03:29:51PM -0700, Eric Biggers wrote:
> On Fri, Mar 24, 2023 at 06:01:23PM +0000, Matthew Wilcox (Oracle) wrote:
> >  		if (first_hole != blocks_per_page) {
> > -			zero_user_segment(page, first_hole << blkbits,
> > -					  PAGE_SIZE);
> > +			folio_zero_segment(folio, first_hole << blkbits,
> > +					  folio_size(folio));
> >  			if (first_hole == 0) {
> > -				if (ext4_need_verity(inode, page->index) &&
> > -				    !fsverity_verify_page(page))
> > +				if (ext4_need_verity(inode, folio->index) &&
> > +				    !fsverity_verify_page(&folio->page))
> >  					goto set_error_page;
> 
> This can use fsverity_verify_folio().

Thanks!  Ted, let me know if you want a resend with this fixed, or
if you'll do it yourself.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2 1/29] fs: Add FGP_WRITEBEGIN
  2023-03-24 18:01 ` [PATCH v2 01/29] fs: Add FGP_WRITEBEGIN Matthew Wilcox (Oracle)
@ 2023-04-06 14:56   ` Theodore Ts'o
  2023-04-06 15:04     ` Matthew Wilcox
  0 siblings, 1 reply; 38+ messages in thread
From: Theodore Ts'o @ 2023-04-06 14:56 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-ext4

On Fri, Mar 24, 2023 at 06:01:01PM +0000, Matthew Wilcox wrote:
> This particular combination of flags is used by most filesystems
> in their ->write_begin method, although it does find use in a
> few other places.  Before folios, it warranted its own function
> (grab_cache_page_write_begin()), but I think that just having specialised
> flags is enough.  It certainly helps the few places that have been
> converted from grab_cache_page_write_begin() to __filemap_get_folio().
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Hey Willy,

Which commit/branch did you base this patch series on?  This commit
conflicts with Vishal Moola's e8dfc854eef2 ("ext4: convert
mext_page_double_lock() to mext_folio_double_lock()"), which landed in
v6.3-rc1.

I'm guessing what happened is that you based it on the ext4 dev branch
that I used when I sent the pull request to Linus, before I moved the
dev branch's origin to be on v6.3-rc3.  And since Vishal's patches
went in via the mm tree, and not the ext4 tree, we have conflicts with
some of the ext4 folio work he did in the last merge window.

Sorry, I should have noticed this problem earlier (we had some painful
merge conflicts due to the ext4 changes in the mm tree), so I should
have realized this would continue to bite us this cycle.  :-/

I hate to do this, but would you mind rebasing this on the current
ext4 dev branch?  Thanks, and again, sorry for not catching this
sooner.

					- Ted

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2 1/29] fs: Add FGP_WRITEBEGIN
  2023-04-06 14:56   ` [PATCH v2 1/29] " Theodore Ts'o
@ 2023-04-06 15:04     ` Matthew Wilcox
  2023-04-06 15:08       ` Matthew Wilcox
  0 siblings, 1 reply; 38+ messages in thread
From: Matthew Wilcox @ 2023-04-06 15:04 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: linux-ext4

On Thu, Apr 06, 2023 at 10:56:19AM -0400, Theodore Ts'o wrote:
> On Fri, Mar 24, 2023 at 06:01:01PM +0000, Matthew Wilcox wrote:
> > This particular combination of flags is used by most filesystems
> > in their ->write_begin method, although it does find use in a
> > few other places.  Before folios, it warranted its own function
> > (grab_cache_page_write_begin()), but I think that just having specialised
> > flags is enough.  It certainly helps the few places that have been
> > converted from grab_cache_page_write_begin() to __filemap_get_folio().
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> Hey Willy,
> 
> Which commit/branch did you base this patch series on?  This commit

next-20230321.  I haven't noticed any conflicts while rebasing to
next-20230404.

> conflict with Vishal Moola's e8dfc854eef2 ("ext4: convert
> mext_page_double_lock() to mext_folio_double_lock()") which landed in
> v6.3-rc1.

I'm not sure why you're seeing that conflict.  The context lines look
like it applies on top of mext_folio_double_lock(), e.g.:

@@ -126,7 +126,6 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,

> I'm guessing what happened is that you based it on the ext4 dev branch
> that I used when I sent the pull request to Linus, before I moved the
> dev branch's origin to be on v6.3-rc3.  And since Vishal's patches
> went in via the mm tree, and not the ext4 tree, we have conflicts with
> the ext4 folio work done by some of Vishal's work in the last merge
> window.
> 
> Sorry, I should have noticed this problem earlier (we had some painful
> merge conflicts due to the ext4 changes in the mm tree) so I should
> have realized this would continue to bite us this cycle.  :-/
> 
> I hate to do this, but would you mind rebasing this on the current
> ext4 dev branch.  Thanks, and again, sorry for not catching this
> sooner.
> 
> 					- Ted

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2 1/29] fs: Add FGP_WRITEBEGIN
  2023-04-06 15:04     ` Matthew Wilcox
@ 2023-04-06 15:08       ` Matthew Wilcox
  0 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox @ 2023-04-06 15:08 UTC (permalink / raw)
  To: Theodore Ts'o; +Cc: linux-ext4, Christoph Hellwig, Andrew Morton

On Thu, Apr 06, 2023 at 04:04:07PM +0100, Matthew Wilcox wrote:
> On Thu, Apr 06, 2023 at 10:56:19AM -0400, Theodore Ts'o wrote:
> > On Fri, Mar 24, 2023 at 06:01:01PM +0000, Matthew Wilcox wrote:
> > > This particular combination of flags is used by most filesystems
> > > in their ->write_begin method, although it does find use in a
> > > few other places.  Before folios, it warranted its own function
> > > (grab_cache_page_write_begin()), but I think that just having specialised
> > > flags is enough.  It certainly helps the few places that have been
> > > converted from grab_cache_page_write_begin() to __filemap_get_folio().
> > > 
> > > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > 
> > Hey Willy,
> > 
> > Which commit/branch did you base this patch series on?  This commit
> 
> next-20230321.  I haven't noticed any conflicts while rebasing to
> next-20230404.
> 
> > conflicts with Vishal Moola's e8dfc854eef2 ("ext4: convert
> > mext_page_double_lock() to mext_folio_double_lock()"), which landed in
> > v6.3-rc1.
> 
> I'm not sure why you're seeing that conflict.  The context lines look
> like it's applied after mext_folio_double_lock, eg:
> 
> @@ -126,7 +126,6 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,

Ah, I see the conflicting patch in -next.  It's hch's

    mm: return an ERR_PTR from __filemap_get_folio

@@ -141,18 +141,18 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,
        flags = memalloc_nofs_save();
        folio[0] = __filemap_get_folio(mapping[0], index1, fgp_flags,
                        mapping_gfp_mask(mapping[0]));
-       if (!folio[0]) {
+       if (IS_ERR(folio[0])) {
                memalloc_nofs_restore(flags);
-               return -ENOMEM;
+               return PTR_ERR(folio[0]);

This is a syntactic, not a semantic, conflict.  I can fix that up, but
of course it will be a conflict for Linus to resolve.

^ permalink raw reply	[flat|nested] 38+ messages in thread
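
The conflict resolution itself is mechanical: this series was written
against the old convention, under which __filemap_get_folio() returned
NULL on a failed lookup, while the -next patch named above makes it
return an ERR_PTR.  Each clashing caller is resolved by swapping the
error check, roughly as in this sketch (variable names illustrative):

	/* convention this series was written against */
	folio = __filemap_get_folio(mapping, index, fgp_flags, gfp);
	if (!folio)
		return -ENOMEM;

	/* after "mm: return an ERR_PTR from __filemap_get_folio" */
	folio = __filemap_get_folio(mapping, index, fgp_flags, gfp);
	if (IS_ERR(folio))
		return PTR_ERR(folio);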

* Re: [PATCH v2 00/29] Convert most of ext4 to folios
  2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
                   ` (28 preceding siblings ...)
  2023-03-24 18:01 ` [PATCH v2 29/29] ext4: Use a folio in ext4_read_merkle_tree_page Matthew Wilcox (Oracle)
@ 2023-04-15  2:29 ` Theodore Ts'o
  29 siblings, 0 replies; 38+ messages in thread
From: Theodore Ts'o @ 2023-04-15  2:29 UTC (permalink / raw)
  To: adilger.kernel, Matthew Wilcox (Oracle)
  Cc: Theodore Ts'o, linux-ext4, linux-fsdevel


On Fri, 24 Mar 2023 18:01:00 +0000, Matthew Wilcox (Oracle) wrote:
> On top of next-20230321, this converts most of ext4 to use folios instead
> of pages.  It does not enable large folios although it fixes some places
> that will need to be fixed before they can be enabled for ext4.  It does
> not convert mballoc to use folios.  write_begin() and write_end() still
> take a page parameter instead of a folio.
> 
> It does convert a lot of code away from the page APIs that we're trying
> to remove.  It does remove a lot of calls to compound_head().  I'd like
> to see it land in 6.4.
> 
> [...]

Applied, thanks!

[01/29] fs: Add FGP_WRITEBEGIN
        commit: e999a5c5a19cf3b679f3d93c49ad5f5c04e4806c
[02/29] fscrypt: Add some folio helper functions
        commit: c76e14dc13bcf89f3b55fd9dcd036a453a822d79
[03/29] ext4: Convert ext4_bio_write_page() to use a folio
        commit: cd57b77197a434709aec0e7fb8b2e6ec8479aa4e
[04/29] ext4: Convert ext4_finish_bio() to use folios
        commit: bb64c08bff6a6edbd85786c92a2cb980ed99b29f
[05/29] ext4: Turn mpage_process_page() into mpage_process_folio()
        commit: 4da2f6e3c45999e904de1edcd06c8533715cc1b5
[06/29] ext4: Convert mpage_submit_page() to mpage_submit_folio()
        commit: 81a0d3e126a0bb4300d1db259d89b839124f2cff
[07/29] ext4: Convert mpage_page_done() to mpage_folio_done()
        commit: 33483b3b6ee4328f37c3dcf702ba979e6a00bf8f
[08/29] ext4: Convert ext4_bio_write_page() to ext4_bio_write_folio()
        commit: e8d6062c50acbf1aba88ca6adaa1bcda058abeab
[09/29] ext4: Convert ext4_readpage_inline() to take a folio
        commit: 3edde93e07954a8860d67be4a2165514a083b6e8
[10/29] ext4: Convert ext4_convert_inline_data_to_extent() to use a folio
        commit: 83eba701cf6e582afa92987e34abc0b0dbcb690e
[11/29] ext4: Convert ext4_try_to_write_inline_data() to use a folio
        commit: f8f8c89f59f7ab037bfca8797e2cc613a5684f21
[12/29] ext4: Convert ext4_da_convert_inline_data_to_extent() to use a folio
        commit: 4ed9b598ac30913987ab46e0069620e6e8af82f0
[13/29] ext4: Convert ext4_da_write_inline_data_begin() to use a folio
        commit: 9a9d01f081ea29a5a8afc4504b1bc48daffa5cc1
[14/29] ext4: Convert ext4_read_inline_page() to ext4_read_inline_folio()
        commit: 6b87fbe4155007c3ab8e950c72db657f6cd990c6
[15/29] ext4: Convert ext4_write_inline_data_end() to use a folio
        commit: 6b90d4130ac8ee9cf2a179a617cfced71a18d252
[16/29] ext4: Convert ext4_write_begin() to use a folio
        commit: 4d934a5e6caa6dcdd3fbee7b96fe512a455863b6
[17/29] ext4: Convert ext4_write_end() to use a folio
        commit: 64fb31367598188a0a230b81c6f4397fa71fd033
[18/29] ext4: Use a folio in ext4_journalled_write_end()
        commit: feb22b77b855a6529675b4e998970ab461c0f446
[19/29] ext4: Convert ext4_journalled_zero_new_buffers() to use a folio
        commit: 86324a21627a40f949bf787b55c45b9856523f9d
[20/29] ext4: Convert __ext4_block_zero_page_range() to use a folio
        commit: 9d3973de9a3745ea9d38bdfb953a4c4bee81ac2a
[21/29] ext4: Convert ext4_page_nomap_can_writeout to ext4_folio_nomap_can_writeout
        commit: 02e4b04c56d03a518b958783900b22f33c6643d6
[22/29] ext4: Use a folio in ext4_da_write_begin()
        commit: 0b5a254395dc6db5c38d89e606c0298ed4c9e984
[23/29] ext4: Convert ext4_mpage_readpages() to work on folios
        commit: c0be8e6f081b3e966e21f52679b2f809b7df10b8
[24/29] ext4: Convert ext4_block_write_begin() to take a folio
        commit: 86b38c273cc68ce7b50649447d8ac0ddf3228026
[25/29] ext4: Use a folio in ext4_page_mkwrite()
        commit: 9ea0e45bd2f6cbfba787360f5ba8e18deabb7671
[26/29] ext4: Use a folio iterator in __read_end_io()
        commit: f2b229a8c6c2633c35cb7446cfabea5a6f721edc
[27/29] ext4: Convert mext_page_mkuptodate() to take a folio
        commit: 3060b6ef05603cf3c05b2b746f739b0169bd75f9
[28/29] ext4: Convert pagecache_read() to use a folio
        commit: b23fb762785babc1d6194770c88432da037c8a64
[29/29] ext4: Use a folio in ext4_read_merkle_tree_page
        commit: e9ebecf266c6657de5865a02a47c0d6b2460c526

Best regards,
-- 
Theodore Ts'o <tytso@mit.edu>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH v2 29/29] ext4: Use a folio in ext4_read_merkle_tree_page
  2023-03-24 18:01 ` [PATCH v2 29/29] ext4: Use a folio in ext4_read_merkle_tree_page Matthew Wilcox (Oracle)
@ 2023-04-18  6:50   ` Eric Biggers
  2023-04-18 13:08     ` Matthew Wilcox
  0 siblings, 1 reply; 38+ messages in thread
From: Eric Biggers @ 2023-04-18  6:50 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle); +Cc: tytso, adilger.kernel, linux-ext4, linux-fsdevel

Hi Matthew,

On Fri, Mar 24, 2023 at 06:01:29PM +0000, Matthew Wilcox (Oracle) wrote:
> This is an implementation of fsverity_operations read_merkle_tree_page,
> so it must still return the precise page asked for, but we can use the
> folio API to reduce the number of conversions between folios & pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  fs/ext4/verity.c | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
> index afe847c967a4..3b01247066dd 100644
> --- a/fs/ext4/verity.c
> +++ b/fs/ext4/verity.c
> @@ -361,21 +361,21 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
>  					       pgoff_t index,
>  					       unsigned long num_ra_pages)
>  {
> -	struct page *page;
> +	struct folio *folio;
>  
>  	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
>  
> -	page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
> -	if (!page || !PageUptodate(page)) {
> +	folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
> +	if (!folio || !folio_test_uptodate(folio)) {
>  		DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
>  
> -		if (page)
> -			put_page(page);
> +		if (folio)
> +			folio_put(folio);
>  		else if (num_ra_pages > 1)
>  			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
> -		page = read_mapping_page(inode->i_mapping, index, NULL);
> +		folio = read_mapping_folio(inode->i_mapping, index, NULL);
>  	}
> -	return page;
> +	return folio_file_page(folio, index);

This is not working at all, since it dereferences ERR_PTR(-ENOENT).  I think it
needs:

diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
index 3b01247066dd..dbc655a6c443 100644
--- a/fs/ext4/verity.c
+++ b/fs/ext4/verity.c
@@ -366,15 +366,17 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
 	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
 
 	folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
-	if (!folio || !folio_test_uptodate(folio)) {
+	if (folio == ERR_PTR(-ENOENT) || !folio_test_uptodate(folio)) {
 		DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
 
-		if (folio)
+		if (!IS_ERR(folio))
 			folio_put(folio);
 		else if (num_ra_pages > 1)
 			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
 		folio = read_mapping_folio(inode->i_mapping, index, NULL);
 	}
+	if (IS_ERR(folio))
+		return ERR_CAST(folio);
 	return folio_file_page(folio, index);
 }
 

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH v2 29/29] ext4: Use a folio in ext4_read_merkle_tree_page
  2023-04-18  6:50   ` Eric Biggers
@ 2023-04-18 13:08     ` Matthew Wilcox
  0 siblings, 0 replies; 38+ messages in thread
From: Matthew Wilcox @ 2023-04-18 13:08 UTC (permalink / raw)
  To: Eric Biggers
  Cc: tytso, adilger.kernel, linux-ext4, linux-fsdevel, Christoph Hellwig

On Mon, Apr 17, 2023 at 11:50:42PM -0700, Eric Biggers wrote:
> Hi Matthew,
> 
> On Fri, Mar 24, 2023 at 06:01:29PM +0000, Matthew Wilcox (Oracle) wrote:
> > This is an implementation of fsverity_operations read_merkle_tree_page,
> > so it must still return the precise page asked for, but we can use the
> > folio API to reduce the number of conversions between folios & pages.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> >  fs/ext4/verity.c | 14 +++++++-------
> >  1 file changed, 7 insertions(+), 7 deletions(-)
> > 
> > diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
> > index afe847c967a4..3b01247066dd 100644
> > --- a/fs/ext4/verity.c
> > +++ b/fs/ext4/verity.c
> > @@ -361,21 +361,21 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
> >  					       pgoff_t index,
> >  					       unsigned long num_ra_pages)
> >  {
> > -	struct page *page;
> > +	struct folio *folio;
> >  
> >  	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
> >  
> > -	page = find_get_page_flags(inode->i_mapping, index, FGP_ACCESSED);
> > -	if (!page || !PageUptodate(page)) {
> > +	folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
> > +	if (!folio || !folio_test_uptodate(folio)) {
> >  		DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
> >  
> > -		if (page)
> > -			put_page(page);
> > +		if (folio)
> > +			folio_put(folio);
> >  		else if (num_ra_pages > 1)
> >  			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
> > -		page = read_mapping_page(inode->i_mapping, index, NULL);
> > +		folio = read_mapping_folio(inode->i_mapping, index, NULL);
> >  	}
> > -	return page;
> > +	return folio_file_page(folio, index);
> 
> This is not working at all, since it dereferences ERR_PTR(-ENOENT).  I think it
> needs:

Argh.  Christoph changed the return value of __filemap_get_folio().

>  	folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
> -	if (!folio || !folio_test_uptodate(folio)) {
> +	if (folio == ERR_PTR(-ENOENT) || !folio_test_uptodate(folio)) {

This should be "if (IS_ERR(folio) || !folio_test_uptodate(folio)) {"

But we can't carry this change in Ted's tree because it doesn't have
Christoph's change.  And we can't carry it in Andrew's tree because it
doesn't have my ext4 change.

>  		DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);
>  
> -		if (folio)
> +		if (!IS_ERR(folio))
>  			folio_put(folio);
>  		else if (num_ra_pages > 1)
>  			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
>  		folio = read_mapping_folio(inode->i_mapping, index, NULL);
>  	}
> +	if (IS_ERR(folio))
> +		return ERR_CAST(folio);

return &folio->page;

>  	return folio_file_page(folio, index);
>  }
>  

^ permalink raw reply	[flat|nested] 38+ messages in thread
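
Putting Eric's IS_ERR() handling together with the tweaks suggested
above, the function would presumably end up roughly like the sketch
below (a sketch of the combined result, not a quote of what was
eventually committed).  Whether the early error return is spelled
ERR_CAST(folio) or &folio->page is cosmetic: struct page sits at offset
zero of struct folio, so neither form dereferences the error pointer.

static struct page *ext4_read_merkle_tree_page(struct inode *inode,
					       pgoff_t index,
					       unsigned long num_ra_pages)
{
	struct folio *folio;

	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;

	folio = __filemap_get_folio(inode->i_mapping, index, FGP_ACCESSED, 0);
	if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
		DEFINE_READAHEAD(ractl, NULL, NULL, inode->i_mapping, index);

		/* Drop a found-but-not-uptodate folio, or start readahead on a miss. */
		if (!IS_ERR(folio))
			folio_put(folio);
		else if (num_ra_pages > 1)
			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
		folio = read_mapping_folio(inode->i_mapping, index, NULL);
		if (IS_ERR(folio))
			return ERR_CAST(folio);
	}
	return folio_file_page(folio, index);
}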

end of thread, other threads:[~2023-04-18 13:08 UTC | newest]

Thread overview: 38+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-24 18:01 [PATCH v2 00/29] Convert most of ext4 to folios Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 01/29] fs: Add FGP_WRITEBEGIN Matthew Wilcox (Oracle)
2023-04-06 14:56   ` [PATCH v2 1/29] " Theodore Ts'o
2023-04-06 15:04     ` Matthew Wilcox
2023-04-06 15:08       ` Matthew Wilcox
2023-03-24 18:01 ` [PATCH v2 02/29] fscrypt: Add some folio helper functions Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 03/29] ext4: Convert ext4_bio_write_page() to use a folio Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 04/29] ext4: Convert ext4_finish_bio() to use folios Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 05/29] ext4: Turn mpage_process_page() into mpage_process_folio() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 06/29] ext4: Convert mpage_submit_page() to mpage_submit_folio() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 07/29] ext4: Convert mpage_page_done() to mpage_folio_done() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 08/29] ext4: Convert ext4_bio_write_page() to ext4_bio_write_folio() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 09/29] ext4: Convert ext4_readpage_inline() to take a folio Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 10/29] ext4: Convert ext4_convert_inline_data_to_extent() to use " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 11/29] ext4: Convert ext4_try_to_write_inline_data() " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 12/29] ext4: Convert ext4_da_convert_inline_data_to_extent() " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 13/29] ext4: Convert ext4_da_write_inline_data_begin() " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 14/29] ext4: Convert ext4_read_inline_page() to ext4_read_inline_folio() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 15/29] ext4: Convert ext4_write_inline_data_end() to use a folio Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 16/29] ext4: Convert ext4_write_begin() " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 17/29] ext4: Convert ext4_write_end() " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 18/29] ext4: Use a folio in ext4_journalled_write_end() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 19/29] ext4: Convert ext4_journalled_zero_new_buffers() to use a folio Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 20/29] ext4: Convert __ext4_block_zero_page_range() " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 21/29] ext4: Convert ext4_page_nomap_can_writeout to ext4_folio_nomap_can_writeout Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 22/29] ext4: Use a folio in ext4_da_write_begin() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 23/29] ext4: Convert ext4_mpage_readpages() to work on folios Matthew Wilcox (Oracle)
2023-03-24 22:29   ` Eric Biggers
2023-03-26  3:25     ` Matthew Wilcox
2023-03-24 18:01 ` [PATCH v2 24/29] ext4: Convert ext4_block_write_begin() to take a folio Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 25/29] ext4: Use a folio in ext4_page_mkwrite() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 26/29] ext4: Use a folio iterator in __read_end_io() Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 27/29] ext4: Convert mext_page_mkuptodate() to take a folio Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 28/29] ext4: Convert pagecache_read() to use " Matthew Wilcox (Oracle)
2023-03-24 18:01 ` [PATCH v2 29/29] ext4: Use a folio in ext4_read_merkle_tree_page Matthew Wilcox (Oracle)
2023-04-18  6:50   ` Eric Biggers
2023-04-18 13:08     ` Matthew Wilcox
2023-04-15  2:29 ` [PATCH v2 00/29] Convert most of ext4 to folios Theodore Ts'o
