linux-f2fs-devel.lists.sourceforge.net archive mirror
* [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag()
@ 2022-10-17 20:24 Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
                   ` (22 more replies)
  0 siblings, 23 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

This patch series replaces find_get_pages_range_tag() with
filemap_get_folios_tag(), which also allows the removal of multiple
calls to compound_head() throughout.
It also makes a good chunk of the straightforward conversions to folios,
and takes the opportunity to introduce filemap_grab_folio(), a function
that grabs a folio from the page cache.
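
For reference, the caller-side shape of the conversion is roughly the
following. This is only a minimal sketch: mapping/index/end come from the
surrounding writeback-style loop, and process_dirty_folio() is a
hypothetical stand-in for whatever each filesystem does with the folio:

	struct folio_batch fbatch;
	unsigned int i, nr;

	folio_batch_init(&fbatch);
	while ((nr = filemap_get_folios_tag(mapping, &index, end,
					PAGECACHE_TAG_DIRTY, &fbatch))) {
		for (i = 0; i < nr; i++)
			process_dirty_folio(fbatch.folios[i]);
		folio_batch_release(&fbatch);
		cond_resched();
	}

This replaces the old pagevec_lookup_range_tag()/pagevec_release() loop,
and each batch entry is a folio rather than a (possibly tail) page, which
is what lets the hidden compound_head() calls go away.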

F2fs and Ceph still have quite a lot of work to be done regarding folios,
so for now those patches contain only the changes necessary for the removal
of find_get_pages_range_tag(), and only support folios of size 1 (which is
all they use right now anyway).

I've run xfstests on btrfs, ext4, f2fs, and nilfs2, but more testing may be
beneficial. The page-writeback and filemap changes are exercised implicitly
by those runs. Testing and review of the other changes (afs, ceph, cifs,
gfs2) would be appreciated.

---
v3:
  Rebased onto upstream 6.1
  Simplified the ceph patch to only necessary changes
  Changed commit messages throughout to be clearer
  Got an Acked-by for another nilfs patch
  Got Tested-by for afs

v2:
  Got Acked-By tags for nilfs and btrfs changes
  Fixed an error arising in f2fs
  - Reported-by: kernel test robot <lkp@intel.com>

Vishal Moola (Oracle) (23):
  pagemap: Add filemap_grab_folio()
  filemap: Added filemap_get_folios_tag()
  filemap: Convert __filemap_fdatawait_range() to use
    filemap_get_folios_tag()
  page-writeback: Convert write_cache_pages() to use
    filemap_get_folios_tag()
  afs: Convert afs_writepages_region() to use filemap_get_folios_tag()
  btrfs: Convert btree_write_cache_pages() to use
    filemap_get_folios_tag()
  btrfs: Convert extent_write_cache_pages() to use
    filemap_get_folios_tag()
  ceph: Convert ceph_writepages_start() to use filemap_get_folios_tag()
  cifs: Convert wdata_alloc_and_fillpages() to use
    filemap_get_folios_tag()
  ext4: Convert mpage_prepare_extent_to_map() to use
    filemap_get_folios_tag()
  f2fs: Convert f2fs_fsync_node_pages() to use filemap_get_folios_tag()
  f2fs: Convert f2fs_flush_inline_data() to use filemap_get_folios_tag()
  f2fs: Convert f2fs_sync_node_pages() to use filemap_get_folios_tag()
  f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  f2fs: Convert last_fsync_dnode() to use filemap_get_folios_tag()
  f2fs: Convert f2fs_sync_meta_pages() to use filemap_get_folios_tag()
  gfs2: Convert gfs2_write_cache_jdata() to use filemap_get_folios_tag()
  nilfs2: Convert nilfs_lookup_dirty_data_buffers() to use
    filemap_get_folios_tag()
  nilfs2: Convert nilfs_lookup_dirty_node_buffers() to use
    filemap_get_folios_tag()
  nilfs2: Convert nilfs_btree_lookup_dirty_buffers() to use
    filemap_get_folios_tag()
  nilfs2: Convert nilfs_copy_dirty_pages() to use
    filemap_get_folios_tag()
  nilfs2: Convert nilfs_clear_dirty_pages() to use
    filemap_get_folios_tag()
  filemap: Remove find_get_pages_range_tag()

 fs/afs/write.c          | 114 +++++++++++++++++++++-------------------
 fs/btrfs/extent_io.c    |  57 ++++++++++----------
 fs/ceph/addr.c          |  58 ++++++++++----------
 fs/cifs/file.c          |  33 ++++++++++--
 fs/ext4/inode.c         |  55 ++++++++++---------
 fs/f2fs/checkpoint.c    |  49 +++++++++--------
 fs/f2fs/compress.c      |  13 ++---
 fs/f2fs/data.c          |  69 +++++++++++++-----------
 fs/f2fs/f2fs.h          |   5 +-
 fs/f2fs/node.c          |  72 +++++++++++++------------
 fs/gfs2/aops.c          |  64 ++++++++++++----------
 fs/nilfs2/btree.c       |  14 ++---
 fs/nilfs2/page.c        |  59 +++++++++++----------
 fs/nilfs2/segment.c     |  44 ++++++++--------
 include/linux/pagemap.h |  32 +++++++----
 include/linux/pagevec.h |   8 ---
 mm/filemap.c            |  87 +++++++++++++++---------------
 mm/page-writeback.c     |  44 ++++++++--------
 mm/swap.c               |  10 ----
 19 files changed, 467 insertions(+), 420 deletions(-)

-- 
2.36.1

* [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-24 19:36   ` Vishal Moola
  2022-10-24 19:38   ` Matthew Wilcox
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 02/23] filemap: Added filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (21 subsequent siblings)
  22 siblings, 2 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Add filemap_grab_folio() to grab a folio from the page cache. This
function is meant to serve as a folio replacement for grab_cache_page(),
and is used to facilitate the removal of find_get_pages_range_tag().
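
As a rough usage sketch (hypothetical caller, simplified error handling),
a grab_cache_page() user would convert to something like:

	struct folio *folio = filemap_grab_folio(mapping, index);

	if (!folio)
		return -ENOMEM;
	/* The folio is returned locked, referenced and marked accessed. */
	/* ... initialise the folio contents here ... */
	folio_mark_uptodate(folio);
	folio_unlock(folio);
	folio_put(folio);

and then work with the folio rather than the struct page that
grab_cache_page() hands back.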

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/pagemap.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index bbccb4044222..74d87e37a142 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -547,6 +547,26 @@ static inline struct folio *filemap_lock_folio(struct address_space *mapping,
 	return __filemap_get_folio(mapping, index, FGP_LOCK, 0);
 }
 
+/**
+ * filemap_grab_folio - grab a folio from the page cache
+ * @mapping: The address space to search
+ * @index: The page index
+ *
+ * Looks up the page cache entry at @mapping & @index. If no folio is found,
+ * a new folio is created. The folio is locked, marked as accessed, and
+ * returned.
+ *
+ * Return: The found or created folio, or NULL if no folio was found and one
+ * could not be created.
+ */
+static inline struct folio *filemap_grab_folio(struct address_space *mapping,
+					pgoff_t index)
+{
+	return __filemap_get_folio(mapping, index,
+			FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+			mapping_gfp_mask(mapping));
+}
+
 /**
  * find_get_page - find and get a page reference
  * @mapping: the address_space to search
-- 
2.36.1

* [f2fs-dev] [PATCH v3 02/23] filemap: Added filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-24 19:42   ` Matthew Wilcox
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (20 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

This is the equivalent of find_get_pages_range_tag(), except for folios
instead of pages.

One notable difference is that filemap_get_folios_tag() does not take a
maximum number of pages as an argument. Instead it fills a folio_batch and
stops either when the batch is full (15 folios) or when it reaches the end
of the search range.

The new function supports large folios; the original function did not,
since none of its callers use large folios.
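
A minimal, hypothetical example of the intended calling pattern (the
helper name is made up for illustration):

	static unsigned long count_dirty_folios(struct address_space *mapping,
						pgoff_t start, pgoff_t end)
	{
		struct folio_batch fbatch;
		unsigned long count = 0;
		unsigned int nr;

		folio_batch_init(&fbatch);
		while ((nr = filemap_get_folios_tag(mapping, &start, end,
						PAGECACHE_TAG_DIRTY, &fbatch))) {
			count += nr;
			folio_batch_release(&fbatch);
			cond_resched();
		}
		return count;
	}

Each call fills the batch with up to 15 folios and advances @start so the
next call continues where the previous one stopped, so the caller just
loops until the batch comes back empty.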

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/pagemap.h |  2 ++
 mm/filemap.c            | 53 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 74d87e37a142..28275eecb949 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -740,6 +740,8 @@ unsigned filemap_get_folios(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, struct folio_batch *fbatch);
 unsigned filemap_get_folios_contig(struct address_space *mapping,
 		pgoff_t *start, pgoff_t end, struct folio_batch *fbatch);
+unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
+		pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch);
 unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
 			pgoff_t end, xa_mark_t tag, unsigned int nr_pages,
 			struct page **pages);
diff --git a/mm/filemap.c b/mm/filemap.c
index 08341616ae7a..aa6e90ab0551 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2262,6 +2262,59 @@ unsigned filemap_get_folios_contig(struct address_space *mapping,
 }
 EXPORT_SYMBOL(filemap_get_folios_contig);
 
+/**
+ * filemap_get_folios_tag - Get a batch of folios matching @tag.
+ * @mapping:    The address_space to search
+ * @start:      The starting page index
+ * @end:        The final page index (inclusive)
+ * @tag:        The tag index
+ * @fbatch:     The batch to fill
+ *
+ * Same as filemap_get_folios(), but only returning folios tagged with @tag.
+ *
+ * Return: The number of folios found.
+ * Also update @start to index the next folio for traversal.
+ */
+unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
+			pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch)
+{
+	XA_STATE(xas, &mapping->i_pages, *start);
+	struct folio *folio;
+
+	rcu_read_lock();
+	while ((folio = find_get_entry(&xas, end, tag)) != NULL) {
+		/* Shadow entries should never be tagged, but this iteration
+		 * is lockless so there is a window for page reclaim to evict
+		 * a page we saw tagged. Skip over it.
+		 */
+		if (xa_is_value(folio))
+			continue;
+		if (!folio_batch_add(fbatch, folio)) {
+			unsigned long nr = folio_nr_pages(folio);
+
+			if (folio_test_hugetlb(folio))
+				nr = 1;
+			*start = folio->index + nr;
+			goto out;
+		}
+	}
+	/*
+	 * We come here when there is no page beyond @end. We take care to not
+	 * overflow the index @start as it confuses some of the callers. This
+	 * breaks the iteration when there is a page at index -1 but that is
+	 * already broken anyway.
+	 */
+	if (end == (pgoff_t)-1)
+		*start = (pgoff_t)-1;
+	else
+		*start = end + 1;
+out:
+	rcu_read_unlock();
+
+	return folio_batch_count(fbatch);
+}
+EXPORT_SYMBOL(filemap_get_folios_tag);
+
 /**
  * find_get_pages_range_tag - Find and return head pages matching @tag.
  * @mapping:	the address_space to search
-- 
2.36.1

* [f2fs-dev] [PATCH v3 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 02/23] filemap: Added filemap_get_folios_tag() Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-24 20:06   ` Matthew Wilcox
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 04/23] page-writeback: Convert write_cache_pages() " Vishal Moola (Oracle)
                   ` (19 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use folios. This is in preparation for the removal
of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/filemap.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index aa6e90ab0551..d78d62a7e44a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -503,28 +503,30 @@ static void __filemap_fdatawait_range(struct address_space *mapping,
 {
 	pgoff_t index = start_byte >> PAGE_SHIFT;
 	pgoff_t end = end_byte >> PAGE_SHIFT;
-	struct pagevec pvec;
-	int nr_pages;
+	struct folio_batch fbatch;
+	unsigned nr_folios;
 
 	if (end_byte < start_byte)
 		return;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
+
 	while (index <= end) {
 		unsigned i;
 
-		nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index,
-				end, PAGECACHE_TAG_WRITEBACK);
-		if (!nr_pages)
+		nr_folios = filemap_get_folios_tag(mapping, &index, end,
+				PAGECACHE_TAG_WRITEBACK, &fbatch);
+
+		if (!nr_folios)
 			break;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			wait_on_page_writeback(page);
-			ClearPageError(page);
+			folio_wait_writeback(folio);
+			folio_clear_error(folio);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 }
-- 
2.36.1

* [f2fs-dev] [PATCH v3 04/23] page-writeback: Convert write_cache_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (2 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag() Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-24 20:12   ` Matthew Wilcox
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 05/23] afs: Convert afs_writepages_region() " Vishal Moola (Oracle)
                   ` (18 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 mm/page-writeback.c | 44 +++++++++++++++++++++++---------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 7e9d8d857ecc..aeec8b196232 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2286,15 +2286,15 @@ int write_cache_pages(struct address_space *mapping,
 	int ret = 0;
 	int done = 0;
 	int error;
-	struct pagevec pvec;
-	int nr_pages;
+	struct folio_batch fbatch;
+	int nr_folios;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
 	pgoff_t done_index;
 	int range_whole = 0;
 	xa_mark_t tag;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
 		index = mapping->writeback_index; /* prev offset */
 		end = -1;
@@ -2314,17 +2314,18 @@ int write_cache_pages(struct address_space *mapping,
 	while (!done && (index <= end)) {
 		int i;
 
-		nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
-				tag);
-		if (nr_pages == 0)
+		nr_folios = filemap_get_folios_tag(mapping, &index, end,
+				tag, &fbatch);
+
+		if (nr_folios == 0)
 			break;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			done_index = page->index;
+			done_index = folio->index;
 
-			lock_page(page);
+			folio_lock(folio);
 
 			/*
 			 * Page truncated or invalidated. We can freely skip it
@@ -2334,30 +2335,30 @@ int write_cache_pages(struct address_space *mapping,
 			 * even if there is now a new, dirty page at the same
 			 * pagecache address.
 			 */
-			if (unlikely(page->mapping != mapping)) {
+			if (unlikely(folio->mapping != mapping)) {
 continue_unlock:
-				unlock_page(page);
+				folio_unlock(folio);
 				continue;
 			}
 
-			if (!PageDirty(page)) {
+			if (!folio_test_dirty(folio)) {
 				/* someone wrote it for us */
 				goto continue_unlock;
 			}
 
-			if (PageWriteback(page)) {
+			if (folio_test_writeback(folio)) {
 				if (wbc->sync_mode != WB_SYNC_NONE)
-					wait_on_page_writeback(page);
+					folio_wait_writeback(folio);
 				else
 					goto continue_unlock;
 			}
 
-			BUG_ON(PageWriteback(page));
-			if (!clear_page_dirty_for_io(page))
+			BUG_ON(folio_test_writeback(folio));
+			if (!folio_clear_dirty_for_io(folio))
 				goto continue_unlock;
 
 			trace_wbc_writepage(wbc, inode_to_bdi(mapping->host));
-			error = (*writepage)(page, wbc, data);
+			error = writepage(&folio->page, wbc, data);
 			if (unlikely(error)) {
 				/*
 				 * Handle errors according to the type of
@@ -2372,11 +2373,12 @@ int write_cache_pages(struct address_space *mapping,
 				 * the first error.
 				 */
 				if (error == AOP_WRITEPAGE_ACTIVATE) {
-					unlock_page(page);
+					folio_unlock(folio);
 					error = 0;
 				} else if (wbc->sync_mode != WB_SYNC_ALL) {
 					ret = error;
-					done_index = page->index + 1;
+					done_index = folio->index +
+						folio_nr_pages(folio);
 					done = 1;
 					break;
 				}
@@ -2396,7 +2398,7 @@ int write_cache_pages(struct address_space *mapping,
 				break;
 			}
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 
-- 
2.36.1

* [f2fs-dev] [PATCH v3 05/23] afs: Convert afs_writepages_region() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (3 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 04/23] page-writeback: Convert write_cache_pages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 06/23] btrfs: Convert btree_write_cache_pages() to use filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, David Howells, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Also modify the function to process an entire batch of folios before
looking up another batch, rather than doing a fresh lookup for every
single write.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Tested-by: David Howells <dhowells@redhat.com>
---
 fs/afs/write.c | 114 +++++++++++++++++++++++++------------------------
 1 file changed, 59 insertions(+), 55 deletions(-)

diff --git a/fs/afs/write.c b/fs/afs/write.c
index 9ebdd36eaf2f..c17dbd82a38c 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -699,82 +699,86 @@ static int afs_writepages_region(struct address_space *mapping,
 				 loff_t start, loff_t end, loff_t *_next)
 {
 	struct folio *folio;
-	struct page *head_page;
+	struct folio_batch fbatch;
 	ssize_t ret;
+	unsigned int i;
 	int n, skips = 0;
 
 	_enter("%llx,%llx,", start, end);
+	folio_batch_init(&fbatch);
 
 	do {
 		pgoff_t index = start / PAGE_SIZE;
 
-		n = find_get_pages_range_tag(mapping, &index, end / PAGE_SIZE,
-					     PAGECACHE_TAG_DIRTY, 1, &head_page);
+		n = filemap_get_folios_tag(mapping, &index, end / PAGE_SIZE,
+					PAGECACHE_TAG_DIRTY, &fbatch);
+
 		if (!n)
 			break;
+		for (i = 0; i < n; i++) {
+			folio = fbatch.folios[i];
+			start = folio_pos(folio); /* May regress with THPs */
 
-		folio = page_folio(head_page);
-		start = folio_pos(folio); /* May regress with THPs */
-
-		_debug("wback %lx", folio_index(folio));
+			_debug("wback %lx", folio_index(folio));
 
-		/* At this point we hold neither the i_pages lock nor the
-		 * page lock: the page may be truncated or invalidated
-		 * (changing page->mapping to NULL), or even swizzled
-		 * back from swapper_space to tmpfs file mapping
-		 */
-		if (wbc->sync_mode != WB_SYNC_NONE) {
-			ret = folio_lock_killable(folio);
-			if (ret < 0) {
-				folio_put(folio);
-				return ret;
-			}
-		} else {
-			if (!folio_trylock(folio)) {
-				folio_put(folio);
-				return 0;
+			/* At this point we hold neither the i_pages lock nor the
+			 * page lock: the page may be truncated or invalidated
+			 * (changing page->mapping to NULL), or even swizzled
+			 * back from swapper_space to tmpfs file mapping
+			 */
+			if (wbc->sync_mode != WB_SYNC_NONE) {
+				ret = folio_lock_killable(folio);
+				if (ret < 0) {
+					folio_batch_release(&fbatch);
+					return ret;
+				}
+			} else {
+				if (!folio_trylock(folio))
+					continue;
 			}
-		}
 
-		if (folio_mapping(folio) != mapping ||
-		    !folio_test_dirty(folio)) {
-			start += folio_size(folio);
-			folio_unlock(folio);
-			folio_put(folio);
-			continue;
-		}
+			if (folio->mapping != mapping ||
+			    !folio_test_dirty(folio)) {
+				start += folio_size(folio);
+				folio_unlock(folio);
+				continue;
+			}
 
-		if (folio_test_writeback(folio) ||
-		    folio_test_fscache(folio)) {
-			folio_unlock(folio);
-			if (wbc->sync_mode != WB_SYNC_NONE) {
-				folio_wait_writeback(folio);
+			if (folio_test_writeback(folio) ||
+			    folio_test_fscache(folio)) {
+				folio_unlock(folio);
+				if (wbc->sync_mode != WB_SYNC_NONE) {
+					folio_wait_writeback(folio);
 #ifdef CONFIG_AFS_FSCACHE
-				folio_wait_fscache(folio);
+					folio_wait_fscache(folio);
 #endif
-			} else {
-				start += folio_size(folio);
+				} else {
+					start += folio_size(folio);
+				}
+				if (wbc->sync_mode == WB_SYNC_NONE) {
+					if (skips >= 5 || need_resched()) {
+						*_next = start;
+						_leave(" = 0 [%llx]", *_next);
+						return 0;
+					}
+					skips++;
+				}
+				continue;
 			}
-			folio_put(folio);
-			if (wbc->sync_mode == WB_SYNC_NONE) {
-				if (skips >= 5 || need_resched())
-					break;
-				skips++;
+
+			if (!folio_clear_dirty_for_io(folio))
+				BUG();
+			ret = afs_write_back_from_locked_folio(mapping, wbc,
+					folio, start, end);
+			if (ret < 0) {
+				_leave(" = %zd", ret);
+				folio_batch_release(&fbatch);
+				return ret;
 			}
-			continue;
-		}
 
-		if (!folio_clear_dirty_for_io(folio))
-			BUG();
-		ret = afs_write_back_from_locked_folio(mapping, wbc, folio, start, end);
-		folio_put(folio);
-		if (ret < 0) {
-			_leave(" = %zd", ret);
-			return ret;
+			start += ret;
 		}
-
-		start += ret;
-
+		folio_batch_release(&fbatch);
 		cond_resched();
 	} while (wbc->nr_to_write > 0);
 
-- 
2.36.1

* [f2fs-dev] [PATCH v3 06/23] btrfs: Convert btree_write_cache_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (4 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 05/23] afs: Convert afs_writepages_region() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 07/23] btrfs: Convert extent_write_cache_pages() to use filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (16 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	David Sterba, ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/extent_io.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 4dcf22e051ff..9ae75db4d55e 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2960,14 +2960,14 @@ int btree_write_cache_pages(struct address_space *mapping,
 	int ret = 0;
 	int done = 0;
 	int nr_to_write_done = 0;
-	struct pagevec pvec;
-	int nr_pages;
+	struct folio_batch fbatch;
+	unsigned int nr_folios;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
 	int scanned = 0;
 	xa_mark_t tag;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
 		index = mapping->writeback_index; /* Start from prev offset */
 		end = -1;
@@ -2990,14 +2990,15 @@ int btree_write_cache_pages(struct address_space *mapping,
 	if (wbc->sync_mode == WB_SYNC_ALL)
 		tag_pages_for_writeback(mapping, index, end);
 	while (!done && !nr_to_write_done && (index <= end) &&
-	       (nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
-			tag))) {
+	       (nr_folios = filemap_get_folios_tag(mapping, &index, end,
+					    tag, &fbatch))) {
 		unsigned i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			ret = submit_eb_page(page, wbc, &epd, &eb_context);
+			ret = submit_eb_page(&folio->page, wbc, &epd,
+					&eb_context);
 			if (ret == 0)
 				continue;
 			if (ret < 0) {
@@ -3012,7 +3013,7 @@ int btree_write_cache_pages(struct address_space *mapping,
 			 */
 			nr_to_write_done = wbc->nr_to_write <= 0;
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 	if (!scanned && !done) {
-- 
2.36.1

* [f2fs-dev] [PATCH v3 07/23] btrfs: Convert extent_write_cache_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (5 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 06/23] btrfs: Convert btree_write_cache_pages() to use filemap_get_folios_tag() Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 08/23] ceph: Convert ceph_writepages_start() " Vishal Moola (Oracle)
                   ` (15 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	David Sterba, ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag(). The function now also supports
large folios.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/extent_io.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 9ae75db4d55e..983dde83ba93 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3088,8 +3088,8 @@ static int extent_write_cache_pages(struct address_space *mapping,
 	int ret = 0;
 	int done = 0;
 	int nr_to_write_done = 0;
-	struct pagevec pvec;
-	int nr_pages;
+	struct folio_batch fbatch;
+	unsigned int nr_folios;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
 	pgoff_t done_index;
@@ -3109,7 +3109,7 @@ static int extent_write_cache_pages(struct address_space *mapping,
 	if (!igrab(inode))
 		return 0;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
 		index = mapping->writeback_index; /* Start from prev offset */
 		end = -1;
@@ -3147,14 +3147,14 @@ static int extent_write_cache_pages(struct address_space *mapping,
 		tag_pages_for_writeback(mapping, index, end);
 	done_index = index;
 	while (!done && !nr_to_write_done && (index <= end) &&
-			(nr_pages = pagevec_lookup_range_tag(&pvec, mapping,
-						&index, end, tag))) {
+			(nr_folios = filemap_get_folios_tag(mapping, &index,
+							end, tag, &fbatch))) {
 		unsigned i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			done_index = page->index + 1;
+			done_index = folio->index + folio_nr_pages(folio);
 			/*
 			 * At this point we hold neither the i_pages lock nor
 			 * the page lock: the page may be truncated or
@@ -3162,29 +3162,29 @@ static int extent_write_cache_pages(struct address_space *mapping,
 			 * or even swizzled back from swapper_space to
 			 * tmpfs file mapping
 			 */
-			if (!trylock_page(page)) {
+			if (!folio_trylock(folio)) {
 				submit_write_bio(epd, 0);
-				lock_page(page);
+				folio_lock(folio);
 			}
 
-			if (unlikely(page->mapping != mapping)) {
-				unlock_page(page);
+			if (unlikely(folio->mapping != mapping)) {
+				folio_unlock(folio);
 				continue;
 			}
 
 			if (wbc->sync_mode != WB_SYNC_NONE) {
-				if (PageWriteback(page))
+				if (folio_test_writeback(folio))
 					submit_write_bio(epd, 0);
-				wait_on_page_writeback(page);
+				folio_wait_writeback(folio);
 			}
 
-			if (PageWriteback(page) ||
-			    !clear_page_dirty_for_io(page)) {
-				unlock_page(page);
+			if (folio_test_writeback(folio) ||
+			    !folio_clear_dirty_for_io(folio)) {
+				folio_unlock(folio);
 				continue;
 			}
 
-			ret = __extent_writepage(page, wbc, epd);
+			ret = __extent_writepage(&folio->page, wbc, epd);
 			if (ret < 0) {
 				done = 1;
 				break;
@@ -3197,7 +3197,7 @@ static int extent_write_cache_pages(struct address_space *mapping,
 			 */
 			nr_to_write_done = wbc->nr_to_write <= 0;
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 	if (!scanned && !done) {
-- 
2.36.1

* [f2fs-dev] [PATCH v3 08/23] ceph: Convert ceph_writepages_start() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (6 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 07/23] btrfs: Convert extent_write_cache_pages() to use filemap_get_folios_tag() Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-28 17:20   ` Jeff Layton
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 09/23] cifs: Convert wdata_alloc_and_fillpages() " Vishal Moola (Oracle)
                   ` (14 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use a folio_batch instead of a pagevec. This is in
preparation for the removal of find_get_pages_range_tag().

Also do some minor renaming for consistency.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/ceph/addr.c | 58 ++++++++++++++++++++++++++------------------------
 1 file changed, 30 insertions(+), 28 deletions(-)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index dcf701b05cc1..d2361d51db39 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -792,7 +792,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 	struct ceph_vino vino = ceph_vino(inode);
 	pgoff_t index, start_index, end = -1;
 	struct ceph_snap_context *snapc = NULL, *last_snapc = NULL, *pgsnapc;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	int rc = 0;
 	unsigned int wsize = i_blocksize(inode);
 	struct ceph_osd_request *req = NULL;
@@ -821,7 +821,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 	if (fsc->mount_options->wsize < wsize)
 		wsize = fsc->mount_options->wsize;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
 	start_index = wbc->range_cyclic ? mapping->writeback_index : 0;
 	index = start_index;
@@ -869,7 +869,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 
 	while (!done && index <= end) {
 		int num_ops = 0, op_idx;
-		unsigned i, pvec_pages, max_pages, locked_pages = 0;
+		unsigned i, nr_folios, max_pages, locked_pages = 0;
 		struct page **pages = NULL, **data_pages;
 		struct page *page;
 		pgoff_t strip_unit_end = 0;
@@ -879,13 +879,13 @@ static int ceph_writepages_start(struct address_space *mapping,
 		max_pages = wsize >> PAGE_SHIFT;
 
 get_more_pages:
-		pvec_pages = pagevec_lookup_range_tag(&pvec, mapping, &index,
-						end, PAGECACHE_TAG_DIRTY);
-		dout("pagevec_lookup_range_tag got %d\n", pvec_pages);
-		if (!pvec_pages && !locked_pages)
+		nr_folios = filemap_get_folios_tag(mapping, &index,
+				end, PAGECACHE_TAG_DIRTY, &fbatch);
+		dout("pagevec_lookup_range_tag got %d\n", nr_folios);
+		if (!nr_folios && !locked_pages)
 			break;
-		for (i = 0; i < pvec_pages && locked_pages < max_pages; i++) {
-			page = pvec.pages[i];
+		for (i = 0; i < nr_folios && locked_pages < max_pages; i++) {
+			page = &fbatch.folios[i]->page;
 			dout("? %p idx %lu\n", page, page->index);
 			if (locked_pages == 0)
 				lock_page(page);  /* first page */
@@ -995,7 +995,7 @@ static int ceph_writepages_start(struct address_space *mapping,
 				len = 0;
 			}
 
-			/* note position of first page in pvec */
+			/* note position of first page in fbatch */
 			dout("%p will write page %p idx %lu\n",
 			     inode, page, page->index);
 
@@ -1005,30 +1005,30 @@ static int ceph_writepages_start(struct address_space *mapping,
 				fsc->write_congested = true;
 
 			pages[locked_pages++] = page;
-			pvec.pages[i] = NULL;
+			fbatch.folios[i] = NULL;
 
 			len += thp_size(page);
 		}
 
 		/* did we get anything? */
 		if (!locked_pages)
-			goto release_pvec_pages;
+			goto release_folios;
 		if (i) {
 			unsigned j, n = 0;
-			/* shift unused page to beginning of pvec */
-			for (j = 0; j < pvec_pages; j++) {
-				if (!pvec.pages[j])
+			/* shift unused page to beginning of fbatch */
+			for (j = 0; j < nr_folios; j++) {
+				if (!fbatch.folios[j])
 					continue;
 				if (n < j)
-					pvec.pages[n] = pvec.pages[j];
+					fbatch.folios[n] = fbatch.folios[j];
 				n++;
 			}
-			pvec.nr = n;
+			fbatch.nr = n;
 
-			if (pvec_pages && i == pvec_pages &&
+			if (nr_folios && i == nr_folios &&
 			    locked_pages < max_pages) {
-				dout("reached end pvec, trying for more\n");
-				pagevec_release(&pvec);
+				dout("reached end fbatch, trying for more\n");
+				folio_batch_release(&fbatch);
 				goto get_more_pages;
 			}
 		}
@@ -1164,10 +1164,10 @@ static int ceph_writepages_start(struct address_space *mapping,
 		if (wbc->nr_to_write <= 0 && wbc->sync_mode == WB_SYNC_NONE)
 			done = true;
 
-release_pvec_pages:
-		dout("pagevec_release on %d pages (%p)\n", (int)pvec.nr,
-		     pvec.nr ? pvec.pages[0] : NULL);
-		pagevec_release(&pvec);
+release_folios:
+		dout("folio_batch release on %d folios (%p)\n", (int)fbatch.nr,
+		     fbatch.nr ? fbatch.folios[0] : NULL);
+		folio_batch_release(&fbatch);
 	}
 
 	if (should_loop && !done) {
@@ -1184,15 +1184,17 @@ static int ceph_writepages_start(struct address_space *mapping,
 			unsigned i, nr;
 			index = 0;
 			while ((index <= end) &&
-			       (nr = pagevec_lookup_tag(&pvec, mapping, &index,
-						PAGECACHE_TAG_WRITEBACK))) {
+			       (nr = filemap_get_folios_tag(mapping, &index,
+						(pgoff_t)-1,
+						PAGECACHE_TAG_WRITEBACK,
+						&fbatch))) {
 				for (i = 0; i < nr; i++) {
-					page = pvec.pages[i];
+					page = &fbatch.folios[i]->page;
 					if (page_snap_context(page) != snapc)
 						continue;
 					wait_on_page_writeback(page);
 				}
-				pagevec_release(&pvec);
+				folio_batch_release(&fbatch);
 				cond_resched();
 			}
 		}
-- 
2.36.1

* [f2fs-dev] [PATCH v3 09/23] cifs: Convert wdata_alloc_and_fillpages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (7 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 08/23] ceph: Convert ceph_writepages_start() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 10/23] ext4: Convert mpage_prepare_extent_to_map() " Vishal Moola (Oracle)
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

This is in preparation for the removal of find_get_pages_range_tag(). The
function now also supports the use of large folios.

Since tofind might be larger than the maximum number of folios in a
folio_batch (15), we loop through, filling in wdata->pages and pulling
more batches until we either reach tofind pages or run out of folios.

Note that this function may not return all of the pages in the last found
folio if tofind is reached partway through that folio.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/cifs/file.c | 33 ++++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index f6ffee514c34..9a0675dd2d3c 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2520,14 +2520,41 @@ wdata_alloc_and_fillpages(pgoff_t tofind, struct address_space *mapping,
 			  unsigned int *found_pages)
 {
 	struct cifs_writedata *wdata;
-
+	struct folio_batch fbatch;
+	unsigned int i, idx, p, nr;
 	wdata = cifs_writedata_alloc((unsigned int)tofind,
 				     cifs_writev_complete);
 	if (!wdata)
 		return NULL;
 
-	*found_pages = find_get_pages_range_tag(mapping, index, end,
-				PAGECACHE_TAG_DIRTY, tofind, wdata->pages);
+	folio_batch_init(&fbatch);
+	*found_pages = 0;
+
+again:
+	nr = filemap_get_folios_tag(mapping, index, end,
+				PAGECACHE_TAG_DIRTY, &fbatch);
+	if (!nr)
+		goto out; /* No dirty pages left in the range */
+
+	for (i = 0; i < nr; i++) {
+		struct folio *folio = fbatch.folios[i];
+
+		idx = 0;
+		p = folio_nr_pages(folio);
+add_more:
+		wdata->pages[*found_pages] = folio_page(folio, idx);
+		if (++*found_pages == tofind) {
+			folio_batch_release(&fbatch);
+			goto out;
+		}
+		if (++idx < p) {
+			folio_ref_inc(folio);
+			goto add_more;
+		}
+	}
+	folio_batch_release(&fbatch);
+	goto again;
+out:
 	return wdata;
 }
 
-- 
2.36.1

* [f2fs-dev] [PATCH v3 10/23] ext4: Convert mpage_prepare_extent_to_map() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (8 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 09/23] cifs: Convert wdata_alloc_and_fillpages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-24 19:26   ` Vishal Moola
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() " Vishal Moola (Oracle)
                   ` (12 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use folios throughout. This is in preparation
for the removal of find_get_pages_range_tag(). The function now supports
large folios.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/ext4/inode.c | 55 ++++++++++++++++++++++++-------------------------
 1 file changed, 27 insertions(+), 28 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 2b5ef1b64249..69a0708c8e87 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2572,8 +2572,8 @@ static int ext4_da_writepages_trans_blocks(struct inode *inode)
 static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 {
 	struct address_space *mapping = mpd->inode->i_mapping;
-	struct pagevec pvec;
-	unsigned int nr_pages;
+	struct folio_batch fbatch;
+	unsigned int nr_folios;
 	long left = mpd->wbc->nr_to_write;
 	pgoff_t index = mpd->first_page;
 	pgoff_t end = mpd->last_page;
@@ -2587,18 +2587,17 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 		tag = PAGECACHE_TAG_TOWRITE;
 	else
 		tag = PAGECACHE_TAG_DIRTY;
-
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	mpd->map.m_len = 0;
 	mpd->next_page = index;
 	while (index <= end) {
-		nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
-				tag);
-		if (nr_pages == 0)
+		nr_folios = filemap_get_folios_tag(mapping, &index, end,
+				tag, &fbatch);
+		if (nr_folios == 0)
 			break;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 
 			/*
 			 * Accumulated enough dirty pages? This doesn't apply
@@ -2612,10 +2611,10 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 				goto out;
 
 			/* If we can't merge this page, we are done. */
-			if (mpd->map.m_len > 0 && mpd->next_page != page->index)
+			if (mpd->map.m_len > 0 && mpd->next_page != folio->index)
 				goto out;
 
-			lock_page(page);
+			folio_lock(folio);
 			/*
 			 * If the page is no longer dirty, or its mapping no
 			 * longer corresponds to inode we are writing (which
@@ -2623,16 +2622,16 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 			 * page is already under writeback and we are not doing
 			 * a data integrity writeback, skip the page
 			 */
-			if (!PageDirty(page) ||
-			    (PageWriteback(page) &&
+			if (!folio_test_dirty(folio) ||
+			    (folio_test_writeback(folio) &&
 			     (mpd->wbc->sync_mode == WB_SYNC_NONE)) ||
-			    unlikely(page->mapping != mapping)) {
-				unlock_page(page);
+			    unlikely(folio->mapping != mapping)) {
+				folio_unlock(folio);
 				continue;
 			}
 
-			wait_on_page_writeback(page);
-			BUG_ON(PageWriteback(page));
+			folio_wait_writeback(folio);
+			BUG_ON(folio_test_writeback(folio));
 
 			/*
 			 * Should never happen but for buggy code in
@@ -2643,33 +2642,33 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
 			 *
 			 * [1] https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz
 			 */
-			if (!page_has_buffers(page)) {
-				ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", page->index);
-				ClearPageDirty(page);
-				unlock_page(page);
+			if (!folio_buffers(folio)) {
+				ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", folio->index);
+				folio_clear_dirty(folio);
+				folio_unlock(folio);
 				continue;
 			}
 
 			if (mpd->map.m_len == 0)
-				mpd->first_page = page->index;
-			mpd->next_page = page->index + 1;
+				mpd->first_page = folio->index;
+			mpd->next_page = folio->index + folio_nr_pages(folio);
 			/* Add all dirty buffers to mpd */
-			lblk = ((ext4_lblk_t)page->index) <<
+			lblk = ((ext4_lblk_t)folio->index) <<
 				(PAGE_SHIFT - blkbits);
-			head = page_buffers(page);
+			head = folio_buffers(folio);
 			err = mpage_process_page_bufs(mpd, head, head, lblk);
 			if (err <= 0)
 				goto out;
 			err = 0;
-			left--;
+			left -= folio_nr_pages(folio);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 	mpd->scanned_until_end = 1;
 	return 0;
 out:
-	pagevec_release(&pvec);
+	folio_batch_release(&fbatch);
 	return err;
 }
 
-- 
2.36.1

* [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (9 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 10/23] ext4: Convert mpage_prepare_extent_to_map() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-24 19:31   ` Vishal Moola
  2022-10-29  4:46   ` Chao Yu
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 12/23] f2fs: Convert f2fs_flush_inline_data() " Vishal Moola (Oracle)
                   ` (11 subsequent siblings)
  22 siblings, 2 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use a folio_batch instead of a pagevec. This is in
preparation for the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/f2fs/node.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 983572f23896..e8b72336c096 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1728,12 +1728,12 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 			unsigned int *seq_id)
 {
 	pgoff_t index;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	int ret = 0;
 	struct page *last_page = NULL;
 	bool marked = false;
 	nid_t ino = inode->i_ino;
-	int nr_pages;
+	int nr_folios;
 	int nwritten = 0;
 
 	if (atomic) {
@@ -1742,20 +1742,21 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 			return PTR_ERR_OR_ZERO(last_page);
 	}
 retry:
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	index = 0;
 
-	while ((nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
-				PAGECACHE_TAG_DIRTY))) {
+	while ((nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi), &index,
+					(pgoff_t)-1, PAGECACHE_TAG_DIRTY,
+					&fbatch))) {
 		int i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct page *page = &fbatch.folios[i]->page;
 			bool submitted = false;
 
 			if (unlikely(f2fs_cp_error(sbi))) {
 				f2fs_put_page(last_page, 0);
-				pagevec_release(&pvec);
+				folio_batch_release(&fbatch);
 				ret = -EIO;
 				goto out;
 			}
@@ -1821,7 +1822,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
 				break;
 			}
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 
 		if (ret || marked)
-- 
2.36.1

* [f2fs-dev] [PATCH v3 12/23] f2fs: Convert f2fs_flush_inline_data() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (10 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-29  4:47   ` Chao Yu
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 13/23] f2fs: Convert f2fs_sync_node_pages() " Vishal Moola (Oracle)
                   ` (10 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use a folio_batch instead of a pagevec. This is in
preparation for the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/f2fs/node.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index e8b72336c096..a2f477cc48c7 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1887,17 +1887,18 @@ static bool flush_dirty_inode(struct page *page)
 void f2fs_flush_inline_data(struct f2fs_sb_info *sbi)
 {
 	pgoff_t index = 0;
-	struct pagevec pvec;
-	int nr_pages;
+	struct folio_batch fbatch;
+	int nr_folios;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
-	while ((nr_pages = pagevec_lookup_tag(&pvec,
-			NODE_MAPPING(sbi), &index, PAGECACHE_TAG_DIRTY))) {
+	while ((nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi), &index,
+					(pgoff_t)-1, PAGECACHE_TAG_DIRTY,
+					&fbatch))) {
 		int i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct page *page = &fbatch.folios[i]->page;
 
 			if (!IS_DNODE(page))
 				continue;
@@ -1924,7 +1925,7 @@ void f2fs_flush_inline_data(struct f2fs_sb_info *sbi)
 			}
 			unlock_page(page);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 }
-- 
2.36.1

* [f2fs-dev] [PATCH v3 13/23] f2fs: Convert f2fs_sync_node_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (11 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 12/23] f2fs: Convert f2fs_flush_inline_data() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-29  4:47   ` Chao Yu
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() " Vishal Moola (Oracle)
                   ` (9 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use a folio_batch instead of a pagevec. This is in
preparation for the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/f2fs/node.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index a2f477cc48c7..38f32b4d61dc 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1935,23 +1935,24 @@ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
 				bool do_balance, enum iostat_type io_type)
 {
 	pgoff_t index;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	int step = 0;
 	int nwritten = 0;
 	int ret = 0;
-	int nr_pages, done = 0;
+	int nr_folios, done = 0;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
 next_step:
 	index = 0;
 
-	while (!done && (nr_pages = pagevec_lookup_tag(&pvec,
-			NODE_MAPPING(sbi), &index, PAGECACHE_TAG_DIRTY))) {
+	while (!done && (nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi),
+				&index, (pgoff_t)-1, PAGECACHE_TAG_DIRTY,
+				&fbatch))) {
 		int i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct page *page = &fbatch.folios[i]->page;
 			bool submitted = false;
 
 			/* give a priority to WB_SYNC threads */
@@ -2026,7 +2027,7 @@ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
 			if (--wbc->nr_to_write == 0)
 				break;
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 
 		if (wbc->nr_to_write == 0) {
-- 
2.36.1

* [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (12 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 13/23] f2fs: Convert f2fs_sync_node_pages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-11-14  7:02   ` Chao Yu
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 15/23] f2fs: Convert last_fsync_dnode() " Vishal Moola (Oracle)
                   ` (8 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert the function to use a folio_batch instead of a pagevec. This is in
preparation for the removal of find_get_pages_range_tag().

Also modify f2fs_all_cluster_page_ready() to take in a folio_batch instead
of a pagevec. This does NOT support large folios. The function currently
only uses folios of size 1, so this shouldn't cause any issues right
now.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/f2fs/compress.c | 13 +++++----
 fs/f2fs/data.c     | 69 +++++++++++++++++++++++++---------------------
 fs/f2fs/f2fs.h     |  5 ++--
 3 files changed, 47 insertions(+), 40 deletions(-)

diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index d315c2de136f..7af6c923e0aa 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -842,10 +842,11 @@ bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index)
 	return is_page_in_cluster(cc, index);
 }
 
-bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
-				int index, int nr_pages, bool uptodate)
+bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
+				struct folio_batch *fbatch,
+				int index, int nr_folios, bool uptodate)
 {
-	unsigned long pgidx = pages[index]->index;
+	unsigned long pgidx = fbatch->folios[index]->index;
 	int i = uptodate ? 0 : 1;
 
 	/*
@@ -855,13 +856,13 @@ bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
 	if (uptodate && (pgidx % cc->cluster_size))
 		return false;
 
-	if (nr_pages - index < cc->cluster_size)
+	if (nr_folios - index < cc->cluster_size)
 		return false;
 
 	for (; i < cc->cluster_size; i++) {
-		if (pages[index + i]->index != pgidx + i)
+		if (fbatch->folios[index + i]->index != pgidx + i)
 			return false;
-		if (uptodate && !PageUptodate(pages[index + i]))
+		if (uptodate && !folio_test_uptodate(fbatch->folios[index + i]))
 			return false;
 	}
 
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index a71e818cd67b..7511578b73c3 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2938,7 +2938,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 {
 	int ret = 0;
 	int done = 0, retry = 0;
-	struct page *pages[F2FS_ONSTACK_PAGES];
+	struct folio_batch fbatch;
 	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
 	struct bio *bio = NULL;
 	sector_t last_block;
@@ -2959,7 +2959,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 		.private = NULL,
 	};
 #endif
-	int nr_pages;
+	int nr_folios;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
 	pgoff_t done_index;
@@ -2969,6 +2969,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	int submitted = 0;
 	int i;
 
+	folio_batch_init(&fbatch);
+
 	if (get_dirty_pages(mapping->host) <=
 				SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
 		set_inode_flag(mapping->host, FI_HOT_DATA);
@@ -2994,13 +2996,13 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 		tag_pages_for_writeback(mapping, index, end);
 	done_index = index;
 	while (!done && !retry && (index <= end)) {
-		nr_pages = find_get_pages_range_tag(mapping, &index, end,
-				tag, F2FS_ONSTACK_PAGES, pages);
-		if (nr_pages == 0)
+		nr_folios = filemap_get_folios_tag(mapping, &index, end,
+				tag, &fbatch);
+		if (nr_folios == 0)
 			break;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 			bool need_readd;
 readd:
 			need_readd = false;
@@ -3017,7 +3019,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 				}
 
 				if (!f2fs_cluster_can_merge_page(&cc,
-								page->index)) {
+								folio->index)) {
 					ret = f2fs_write_multi_pages(&cc,
 						&submitted, wbc, io_type);
 					if (!ret)
@@ -3026,27 +3028,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 				}
 
 				if (unlikely(f2fs_cp_error(sbi)))
-					goto lock_page;
+					goto lock_folio;
 
 				if (!f2fs_cluster_is_empty(&cc))
-					goto lock_page;
+					goto lock_folio;
 
 				if (f2fs_all_cluster_page_ready(&cc,
-					pages, i, nr_pages, true))
-					goto lock_page;
+					&fbatch, i, nr_folios, true))
+					goto lock_folio;
 
 				ret2 = f2fs_prepare_compress_overwrite(
 							inode, &pagep,
-							page->index, &fsdata);
+							folio->index, &fsdata);
 				if (ret2 < 0) {
 					ret = ret2;
 					done = 1;
 					break;
 				} else if (ret2 &&
 					(!f2fs_compress_write_end(inode,
-						fsdata, page->index, 1) ||
+						fsdata, folio->index, 1) ||
 					 !f2fs_all_cluster_page_ready(&cc,
-						pages, i, nr_pages, false))) {
+						&fbatch, i, nr_folios,
+						false))) {
 					retry = 1;
 					break;
 				}
@@ -3059,46 +3062,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 				break;
 			}
 #ifdef CONFIG_F2FS_FS_COMPRESSION
-lock_page:
+lock_folio:
 #endif
-			done_index = page->index;
+			done_index = folio->index;
 retry_write:
-			lock_page(page);
+			folio_lock(folio);
 
-			if (unlikely(page->mapping != mapping)) {
+			if (unlikely(folio->mapping != mapping)) {
 continue_unlock:
-				unlock_page(page);
+				folio_unlock(folio);
 				continue;
 			}
 
-			if (!PageDirty(page)) {
+			if (!folio_test_dirty(folio)) {
 				/* someone wrote it for us */
 				goto continue_unlock;
 			}
 
-			if (PageWriteback(page)) {
+			if (folio_test_writeback(folio)) {
 				if (wbc->sync_mode != WB_SYNC_NONE)
-					f2fs_wait_on_page_writeback(page,
+					f2fs_wait_on_page_writeback(
+							&folio->page,
 							DATA, true, true);
 				else
 					goto continue_unlock;
 			}
 
-			if (!clear_page_dirty_for_io(page))
+			if (!folio_clear_dirty_for_io(folio))
 				goto continue_unlock;
 
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 			if (f2fs_compressed_file(inode)) {
-				get_page(page);
-				f2fs_compress_ctx_add_page(&cc, page);
+				folio_get(folio);
+				f2fs_compress_ctx_add_page(&cc, &folio->page);
 				continue;
 			}
 #endif
-			ret = f2fs_write_single_data_page(page, &submitted,
-					&bio, &last_block, wbc, io_type,
-					0, true);
+			ret = f2fs_write_single_data_page(&folio->page,
+					&submitted, &bio, &last_block,
+					wbc, io_type, 0, true);
 			if (ret == AOP_WRITEPAGE_ACTIVATE)
-				unlock_page(page);
+				folio_unlock(folio);
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 result:
 #endif
@@ -3122,7 +3126,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 					}
 					goto next;
 				}
-				done_index = page->index + 1;
+				done_index = folio->index +
+					folio_nr_pages(folio);
 				done = 1;
 				break;
 			}
@@ -3136,7 +3141,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 			if (need_readd)
 				goto readd;
 		}
-		release_pages(pages, nr_pages);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 #ifdef CONFIG_F2FS_FS_COMPRESSION
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index e6355a5683b7..d7bfb88fa341 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4226,8 +4226,9 @@ void f2fs_end_read_compressed_page(struct page *page, bool failed,
 				block_t blkaddr, bool in_task);
 bool f2fs_cluster_is_empty(struct compress_ctx *cc);
 bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index);
-bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
-				int index, int nr_pages, bool uptodate);
+bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
+		struct folio_batch *fbatch, int index, int nr_folios,
+		bool uptodate);
 bool f2fs_sanity_check_cluster(struct dnode_of_data *dn);
 void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page);
 int f2fs_write_multi_pages(struct compress_ctx *cc,
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 15/23] f2fs: Convert last_fsync_dnode() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (13 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 16/23] f2fs: Convert f2fs_sync_meta_pages() " Vishal Moola (Oracle)
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert to use a folio_batch instead of pagevec. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/f2fs/node.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
index 38f32b4d61dc..3e1764960a96 100644
--- a/fs/f2fs/node.c
+++ b/fs/f2fs/node.c
@@ -1515,23 +1515,24 @@ static void flush_inline_data(struct f2fs_sb_info *sbi, nid_t ino)
 static struct page *last_fsync_dnode(struct f2fs_sb_info *sbi, nid_t ino)
 {
 	pgoff_t index;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	struct page *last_page = NULL;
-	int nr_pages;
+	int nr_folios;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	index = 0;
 
-	while ((nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
-				PAGECACHE_TAG_DIRTY))) {
+	while ((nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi), &index,
+					(pgoff_t)-1, PAGECACHE_TAG_DIRTY,
+					&fbatch))) {
 		int i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct page *page = &fbatch.folios[i]->page;
 
 			if (unlikely(f2fs_cp_error(sbi))) {
 				f2fs_put_page(last_page, 0);
-				pagevec_release(&pvec);
+				folio_batch_release(&fbatch);
 				return ERR_PTR(-EIO);
 			}
 
@@ -1562,7 +1563,7 @@ static struct page *last_fsync_dnode(struct f2fs_sb_info *sbi, nid_t ino)
 			last_page = page;
 			unlock_page(page);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 	return last_page;
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 16/23] f2fs: Convert f2fs_sync_meta_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (14 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 15/23] f2fs: Convert last_fsync_dnode() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 17/23] gfs2: Convert gfs2_write_cache_jdata() " Vishal Moola (Oracle)
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert function to use folios throughout. This is in preparation for the
removal of find_get_pages_range_tag().

Initially the function checked whether the previous page index was truly the
previous page, i.e. one index behind the current page. To convert to folios
and keep this check, it becomes
folio->index != prev + folio_nr_pages(previous folio), since we don't know
how many pages are in a folio.

At index i == 0 the check is guaranteed to succeed, so to work around the
indexing bounds we simply skip the check for that specific index. This makes
the initial assignment of prev unnecessary, so I removed that as well.
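
For example, if the previous folio starts at index 8 and spans four pages, the
next contiguous folio must begin at index 12, so the comparison has to use
prev + folio_nr_pages(previous folio) rather than prev + 1. A minimal sketch of
the idea (assuming the usual <linux/pagemap.h>/<linux/pagevec.h> declarations;
batch_is_contiguous() is a made-up helper, not something this patch adds):

	/*
	 * Made-up helper, only to illustrate the check: the folio at
	 * position i in the batch is contiguous with the one before it
	 * when its index equals the previous folio's index plus the
	 * number of pages that folio spans.
	 */
	static bool batch_is_contiguous(struct folio_batch *fbatch,
					int i, pgoff_t prev)
	{
		if (i == 0)
			return true;	/* nothing earlier to compare against */
		return fbatch->folios[i]->index ==
			prev + folio_nr_pages(fbatch->folios[i - 1]);
	}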

Also modified a comment in commit_checkpoint for consistency.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/f2fs/checkpoint.c | 49 +++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 23 deletions(-)

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 0c82dae082aa..82eb26f471c5 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -390,59 +390,62 @@ long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
 {
 	struct address_space *mapping = META_MAPPING(sbi);
 	pgoff_t index = 0, prev = ULONG_MAX;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	long nwritten = 0;
-	int nr_pages;
+	int nr_folios;
 	struct writeback_control wbc = {
 		.for_reclaim = 0,
 	};
 	struct blk_plug plug;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
 	blk_start_plug(&plug);
 
-	while ((nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
-				PAGECACHE_TAG_DIRTY))) {
+	while ((nr_folios = filemap_get_folios_tag(mapping, &index,
+					(pgoff_t)-1,
+					PAGECACHE_TAG_DIRTY, &fbatch))) {
 		int i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			if (prev == ULONG_MAX)
-				prev = page->index - 1;
-			if (nr_to_write != LONG_MAX && page->index != prev + 1) {
-				pagevec_release(&pvec);
+			if (nr_to_write != LONG_MAX && i != 0 &&
+					folio->index != prev +
+					folio_nr_pages(fbatch.folios[i-1])) {
+				folio_batch_release(&fbatch);
 				goto stop;
 			}
 
-			lock_page(page);
+			folio_lock(folio);
 
-			if (unlikely(page->mapping != mapping)) {
+			if (unlikely(folio->mapping != mapping)) {
 continue_unlock:
-				unlock_page(page);
+				folio_unlock(folio);
 				continue;
 			}
-			if (!PageDirty(page)) {
+			if (!folio_test_dirty(folio)) {
 				/* someone wrote it for us */
 				goto continue_unlock;
 			}
 
-			f2fs_wait_on_page_writeback(page, META, true, true);
+			f2fs_wait_on_page_writeback(&folio->page, META,
+					true, true);
 
-			if (!clear_page_dirty_for_io(page))
+			if (!folio_clear_dirty_for_io(folio))
 				goto continue_unlock;
 
-			if (__f2fs_write_meta_page(page, &wbc, io_type)) {
-				unlock_page(page);
+			if (__f2fs_write_meta_page(&folio->page, &wbc,
+						io_type)) {
+				folio_unlock(folio);
 				break;
 			}
-			nwritten++;
-			prev = page->index;
+			nwritten += folio_nr_pages(folio);
+			prev = folio->index;
 			if (unlikely(nwritten >= nr_to_write))
 				break;
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 stop:
@@ -1398,7 +1401,7 @@ static void commit_checkpoint(struct f2fs_sb_info *sbi,
 	};
 
 	/*
-	 * pagevec_lookup_tag and lock_page again will take
+	 * filemap_get_folios_tag and lock_page again will take
 	 * some extra time. Therefore, f2fs_update_meta_pages and
 	 * f2fs_sync_meta_pages are combined in this function.
 	 */
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 17/23] gfs2: Convert gfs2_write_cache_jdata() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (15 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 16/23] f2fs: Convert f2fs_sync_meta_pages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-24 19:23   ` Vishal Moola
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 18/23] nilfs2: Convert nilfs_lookup_dirty_data_buffers() " Vishal Moola (Oracle)
                   ` (5 subsequent siblings)
  22 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

Converted function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Also had to modify and rename gfs2_write_jdata_pagevec() to take in
and utilize folio_batch rather than pagevec and use folios rather
than pages. gfs2_write_jdata_batch() now supports large folios.
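
Because a batch may now contain multi-page folios, the journal block
reservation can no longer be derived from the batch count alone; the number of
pages has to be summed first. Roughly (a sketch of just that calculation,
matching the hunk in the diff below; fbatch and inode are the function's
existing parameters and locals):

	int i, nr_pages = 0;
	int nr_folios = folio_batch_count(fbatch);
	unsigned int nrblocks;

	for (i = 0; i < nr_folios; i++)
		nr_pages += folio_nr_pages(fbatch->folios[i]);
	nrblocks = nr_pages * (PAGE_SIZE >> inode->i_blkbits);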

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 fs/gfs2/aops.c | 64 +++++++++++++++++++++++++++-----------------------
 1 file changed, 35 insertions(+), 29 deletions(-)

diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 05bee80ac7de..8f87c2551a3d 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -195,67 +195,71 @@ static int gfs2_writepages(struct address_space *mapping,
 }
 
 /**
- * gfs2_write_jdata_pagevec - Write back a pagevec's worth of pages
+ * gfs2_write_jdata_batch - Write back a folio batch's worth of folios
  * @mapping: The mapping
  * @wbc: The writeback control
- * @pvec: The vector of pages
- * @nr_pages: The number of pages to write
+ * @fbatch: The batch of folios
  * @done_index: Page index
  *
  * Returns: non-zero if loop should terminate, zero otherwise
  */
 
-static int gfs2_write_jdata_pagevec(struct address_space *mapping,
+static int gfs2_write_jdata_batch(struct address_space *mapping,
 				    struct writeback_control *wbc,
-				    struct pagevec *pvec,
-				    int nr_pages,
+				    struct folio_batch *fbatch,
 				    pgoff_t *done_index)
 {
 	struct inode *inode = mapping->host;
 	struct gfs2_sbd *sdp = GFS2_SB(inode);
-	unsigned nrblocks = nr_pages * (PAGE_SIZE >> inode->i_blkbits);
+	unsigned nrblocks;
 	int i;
 	int ret;
+	int nr_pages = 0;
+	int nr_folios = folio_batch_count(fbatch);
+
+	for (i = 0; i < nr_folios; i++)
+		nr_pages += folio_nr_pages(fbatch->folios[i]);
+	nrblocks = nr_pages * (PAGE_SIZE >> inode->i_blkbits);
 
 	ret = gfs2_trans_begin(sdp, nrblocks, nrblocks);
 	if (ret < 0)
 		return ret;
 
-	for(i = 0; i < nr_pages; i++) {
-		struct page *page = pvec->pages[i];
+	for (i = 0; i < nr_folios; i++) {
+		struct folio *folio = fbatch->folios[i];
 
-		*done_index = page->index;
+		*done_index = folio->index;
 
-		lock_page(page);
+		folio_lock(folio);
 
-		if (unlikely(page->mapping != mapping)) {
+		if (unlikely(folio->mapping != mapping)) {
 continue_unlock:
-			unlock_page(page);
+			folio_unlock(folio);
 			continue;
 		}
 
-		if (!PageDirty(page)) {
+		if (!folio_test_dirty(folio)) {
 			/* someone wrote it for us */
 			goto continue_unlock;
 		}
 
-		if (PageWriteback(page)) {
+		if (folio_test_writeback(folio)) {
 			if (wbc->sync_mode != WB_SYNC_NONE)
-				wait_on_page_writeback(page);
+				folio_wait_writeback(folio);
 			else
 				goto continue_unlock;
 		}
 
-		BUG_ON(PageWriteback(page));
-		if (!clear_page_dirty_for_io(page))
+		BUG_ON(folio_test_writeback(folio));
+		if (!folio_clear_dirty_for_io(folio))
 			goto continue_unlock;
 
 		trace_wbc_writepage(wbc, inode_to_bdi(inode));
 
-		ret = __gfs2_jdata_writepage(page, wbc);
+		ret = __gfs2_jdata_writepage(&folio->page, wbc);
 		if (unlikely(ret)) {
 			if (ret == AOP_WRITEPAGE_ACTIVATE) {
-				unlock_page(page);
+				folio_unlock(folio);
 				ret = 0;
 			} else {
 
@@ -268,7 +272,8 @@ static int gfs2_write_jdata_pagevec(struct address_space *mapping,
 				 * not be suitable for data integrity
 				 * writeout).
 				 */
-				*done_index = page->index + 1;
+				*done_index = folio->index +
+					folio_nr_pages(folio);
 				ret = 1;
 				break;
 			}
@@ -305,8 +310,8 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
 {
 	int ret = 0;
 	int done = 0;
-	struct pagevec pvec;
-	int nr_pages;
+	struct folio_batch fbatch;
+	int nr_folios;
 	pgoff_t writeback_index;
 	pgoff_t index;
 	pgoff_t end;
@@ -315,7 +320,7 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
 	int range_whole = 0;
 	xa_mark_t tag;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 	if (wbc->range_cyclic) {
 		writeback_index = mapping->writeback_index; /* prev offset */
 		index = writeback_index;
@@ -341,17 +346,18 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
 		tag_pages_for_writeback(mapping, index, end);
 	done_index = index;
 	while (!done && (index <= end)) {
-		nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
-				tag);
-		if (nr_pages == 0)
+		nr_folios = filemap_get_folios_tag(mapping, &index, end,
+				tag, &fbatch);
+		if (nr_folios == 0)
 			break;
 
-		ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, &done_index);
+		ret = gfs2_write_jdata_batch(mapping, wbc, &fbatch,
+				&done_index);
 		if (ret)
 			done = 1;
 		if (ret > 0)
 			ret = 0;
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 18/23] nilfs2: Convert nilfs_lookup_dirty_data_buffers() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (16 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 17/23] gfs2: Convert gfs2_write_cache_jdata() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 19/23] nilfs2: Convert nilfs_lookup_dirty_node_buffers() " Vishal Moola (Oracle)
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	Ryusuke Konishi, linux-kernel, linux-f2fs-devel, cluster-devel,
	linux-mm, ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
---
 fs/nilfs2/segment.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
index b4cebad21b48..2183e1698f8e 100644
--- a/fs/nilfs2/segment.c
+++ b/fs/nilfs2/segment.c
@@ -680,7 +680,7 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
 					      loff_t start, loff_t end)
 {
 	struct address_space *mapping = inode->i_mapping;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	pgoff_t index = 0, last = ULONG_MAX;
 	size_t ndirties = 0;
 	int i;
@@ -694,23 +694,26 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
 		index = start >> PAGE_SHIFT;
 		last = end >> PAGE_SHIFT;
 	}
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
  repeat:
 	if (unlikely(index > last) ||
-	    !pagevec_lookup_range_tag(&pvec, mapping, &index, last,
-				PAGECACHE_TAG_DIRTY))
+	      !filemap_get_folios_tag(mapping, &index, last,
+		      PAGECACHE_TAG_DIRTY, &fbatch))
 		return ndirties;
 
-	for (i = 0; i < pagevec_count(&pvec); i++) {
+	for (i = 0; i < folio_batch_count(&fbatch); i++) {
 		struct buffer_head *bh, *head;
-		struct page *page = pvec.pages[i];
+		struct folio *folio = fbatch.folios[i];
 
-		lock_page(page);
-		if (!page_has_buffers(page))
-			create_empty_buffers(page, i_blocksize(inode), 0);
-		unlock_page(page);
+		folio_lock(folio);
+		head = folio_buffers(folio);
+		if (!head) {
+			create_empty_buffers(&folio->page, i_blocksize(inode), 0);
+			head = folio_buffers(folio);
+		}
+		folio_unlock(folio);
 
-		bh = head = page_buffers(page);
+		bh = head;
 		do {
 			if (!buffer_dirty(bh) || buffer_async_write(bh))
 				continue;
@@ -718,13 +721,13 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
 			list_add_tail(&bh->b_assoc_buffers, listp);
 			ndirties++;
 			if (unlikely(ndirties >= nlimit)) {
-				pagevec_release(&pvec);
+				folio_batch_release(&fbatch);
 				cond_resched();
 				return ndirties;
 			}
 		} while (bh = bh->b_this_page, bh != head);
 	}
-	pagevec_release(&pvec);
+	folio_batch_release(&fbatch);
 	cond_resched();
 	goto repeat;
 }
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 19/23] nilfs2: Convert nilfs_lookup_dirty_node_buffers() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (17 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 18/23] nilfs2: Convert nilfs_lookup_dirty_data_buffers() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 20/23] nilfs2: Convert nilfs_btree_lookup_dirty_buffers() " Vishal Moola (Oracle)
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	Ryusuke Konishi, linux-kernel, linux-f2fs-devel, cluster-devel,
	linux-mm, ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
---
 fs/nilfs2/segment.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
index 2183e1698f8e..fe984def1b1c 100644
--- a/fs/nilfs2/segment.c
+++ b/fs/nilfs2/segment.c
@@ -737,20 +737,19 @@ static void nilfs_lookup_dirty_node_buffers(struct inode *inode,
 {
 	struct nilfs_inode_info *ii = NILFS_I(inode);
 	struct inode *btnc_inode = ii->i_assoc_inode;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	struct buffer_head *bh, *head;
 	unsigned int i;
 	pgoff_t index = 0;
 
 	if (!btnc_inode)
 		return;
+	folio_batch_init(&fbatch);
 
-	pagevec_init(&pvec);
-
-	while (pagevec_lookup_tag(&pvec, btnc_inode->i_mapping, &index,
-					PAGECACHE_TAG_DIRTY)) {
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			bh = head = page_buffers(pvec.pages[i]);
+	while (filemap_get_folios_tag(btnc_inode->i_mapping, &index,
+				(pgoff_t)-1, PAGECACHE_TAG_DIRTY, &fbatch)) {
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			bh = head = folio_buffers(fbatch.folios[i]);
 			do {
 				if (buffer_dirty(bh) &&
 						!buffer_async_write(bh)) {
@@ -761,7 +760,7 @@ static void nilfs_lookup_dirty_node_buffers(struct inode *inode,
 				bh = bh->b_this_page;
 			} while (bh != head);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 }
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 20/23] nilfs2: Convert nilfs_btree_lookup_dirty_buffers() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (18 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 19/23] nilfs2: Convert nilfs_lookup_dirty_node_buffers() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 21/23] nilfs2: Convert nilfs_copy_dirty_pages() " Vishal Moola (Oracle)
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	Ryusuke Konishi, linux-kernel, linux-f2fs-devel, cluster-devel,
	linux-mm, ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
---
 fs/nilfs2/btree.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/nilfs2/btree.c b/fs/nilfs2/btree.c
index b9d15c3df3cc..da6a19eede9a 100644
--- a/fs/nilfs2/btree.c
+++ b/fs/nilfs2/btree.c
@@ -2141,7 +2141,7 @@ static void nilfs_btree_lookup_dirty_buffers(struct nilfs_bmap *btree,
 	struct inode *btnc_inode = NILFS_BMAP_I(btree)->i_assoc_inode;
 	struct address_space *btcache = btnc_inode->i_mapping;
 	struct list_head lists[NILFS_BTREE_LEVEL_MAX];
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	struct buffer_head *bh, *head;
 	pgoff_t index = 0;
 	int level, i;
@@ -2151,19 +2151,19 @@ static void nilfs_btree_lookup_dirty_buffers(struct nilfs_bmap *btree,
 	     level++)
 		INIT_LIST_HEAD(&lists[level]);
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
-	while (pagevec_lookup_tag(&pvec, btcache, &index,
-					PAGECACHE_TAG_DIRTY)) {
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			bh = head = page_buffers(pvec.pages[i]);
+	while (filemap_get_folios_tag(btcache, &index, (pgoff_t)-1,
+				PAGECACHE_TAG_DIRTY, &fbatch)) {
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			bh = head = folio_buffers(fbatch.folios[i]);
 			do {
 				if (buffer_dirty(bh))
 					nilfs_btree_add_dirty_buffer(btree,
 								     lists, bh);
 			} while ((bh = bh->b_this_page) != head);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 21/23] nilfs2: Convert nilfs_copy_dirty_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (19 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 20/23] nilfs2: Convert nilfs_btree_lookup_dirty_buffers() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 22/23] nilfs2: Convert nilfs_clear_dirty_pages() " Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 23/23] filemap: Remove find_get_pages_range_tag() Vishal Moola (Oracle)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	Ryusuke Konishi, linux-kernel, linux-f2fs-devel, cluster-devel,
	linux-mm, ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
---
 fs/nilfs2/page.c | 39 ++++++++++++++++++++-------------------
 1 file changed, 20 insertions(+), 19 deletions(-)

diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 39b7eea2642a..d921542a9593 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -240,42 +240,43 @@ static void nilfs_copy_page(struct page *dst, struct page *src, int copy_dirty)
 int nilfs_copy_dirty_pages(struct address_space *dmap,
 			   struct address_space *smap)
 {
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	unsigned int i;
 	pgoff_t index = 0;
 	int err = 0;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 repeat:
-	if (!pagevec_lookup_tag(&pvec, smap, &index, PAGECACHE_TAG_DIRTY))
+	if (!filemap_get_folios_tag(smap, &index, (pgoff_t)-1,
+				PAGECACHE_TAG_DIRTY, &fbatch))
 		return 0;
 
-	for (i = 0; i < pagevec_count(&pvec); i++) {
-		struct page *page = pvec.pages[i], *dpage;
+	for (i = 0; i < folio_batch_count(&fbatch); i++) {
+		struct folio *folio = fbatch.folios[i], *dfolio;
 
-		lock_page(page);
-		if (unlikely(!PageDirty(page)))
-			NILFS_PAGE_BUG(page, "inconsistent dirty state");
+		folio_lock(folio);
+		if (unlikely(!folio_test_dirty(folio)))
+			NILFS_PAGE_BUG(&folio->page, "inconsistent dirty state");
 
-		dpage = grab_cache_page(dmap, page->index);
-		if (unlikely(!dpage)) {
+		dfolio = filemap_grab_folio(dmap, folio->index);
+		if (unlikely(!dfolio)) {
 			/* No empty page is added to the page cache */
 			err = -ENOMEM;
-			unlock_page(page);
+			folio_unlock(folio);
 			break;
 		}
-		if (unlikely(!page_has_buffers(page)))
-			NILFS_PAGE_BUG(page,
+		if (unlikely(!folio_buffers(folio)))
+			NILFS_PAGE_BUG(&folio->page,
 				       "found empty page in dat page cache");
 
-		nilfs_copy_page(dpage, page, 1);
-		__set_page_dirty_nobuffers(dpage);
+		nilfs_copy_page(&dfolio->page, &folio->page, 1);
+		filemap_dirty_folio(folio_mapping(dfolio), dfolio);
 
-		unlock_page(dpage);
-		put_page(dpage);
-		unlock_page(page);
+		folio_unlock(dfolio);
+		folio_put(dfolio);
+		folio_unlock(folio);
 	}
-	pagevec_release(&pvec);
+	folio_batch_release(&fbatch);
 	cond_resched();
 
 	if (likely(!err))
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 22/23] nilfs2: Convert nilfs_clear_dirty_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (20 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 21/23] nilfs2: Convert nilfs_copy_dirty_pages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 23/23] filemap: Remove find_get_pages_range_tag() Vishal Moola (Oracle)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	Ryusuke Konishi, linux-kernel, linux-f2fs-devel, cluster-devel,
	linux-mm, ceph-devel, linux-ext4, linux-afs, linux-btrfs

Convert function to use folios throughout. This is in preparation for
the removal of find_get_pages_range_tag().

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
---
 fs/nilfs2/page.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index d921542a9593..41ccd43cd979 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -358,22 +358,22 @@ void nilfs_copy_back_pages(struct address_space *dmap,
  */
 void nilfs_clear_dirty_pages(struct address_space *mapping, bool silent)
 {
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	unsigned int i;
 	pgoff_t index = 0;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
-	while (pagevec_lookup_tag(&pvec, mapping, &index,
-					PAGECACHE_TAG_DIRTY)) {
-		for (i = 0; i < pagevec_count(&pvec); i++) {
-			struct page *page = pvec.pages[i];
+	while (filemap_get_folios_tag(mapping, &index, (pgoff_t)-1,
+				PAGECACHE_TAG_DIRTY, &fbatch)) {
+		for (i = 0; i < folio_batch_count(&fbatch); i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			lock_page(page);
-			nilfs_clear_dirty_page(page, silent);
-			unlock_page(page);
+			folio_lock(folio);
+			nilfs_clear_dirty_page(&folio->page, silent);
+			folio_unlock(folio);
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 }
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [f2fs-dev] [PATCH v3 23/23] filemap: Remove find_get_pages_range_tag()
  2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
                   ` (21 preceding siblings ...)
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 22/23] nilfs2: Convert nilfs_clear_dirty_pages() " Vishal Moola (Oracle)
@ 2022-10-17 20:24 ` Vishal Moola (Oracle)
  22 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-10-17 20:24 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, Vishal Moola (Oracle),
	linux-kernel, linux-f2fs-devel, cluster-devel, linux-mm,
	ceph-devel, linux-ext4, linux-afs, linux-btrfs

All callers to find_get_pages_range_tag(), find_get_pages_tag(),
pagevec_lookup_range_tag(), and pagevec_lookup_tag() have been removed.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 include/linux/pagemap.h | 10 -------
 include/linux/pagevec.h |  8 ------
 mm/filemap.c            | 60 -----------------------------------------
 mm/swap.c               | 10 -------
 4 files changed, 88 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 28275eecb949..c83dfcbc19b3 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -742,16 +742,6 @@ unsigned filemap_get_folios_contig(struct address_space *mapping,
 		pgoff_t *start, pgoff_t end, struct folio_batch *fbatch);
 unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
 		pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch);
-unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
-			pgoff_t end, xa_mark_t tag, unsigned int nr_pages,
-			struct page **pages);
-static inline unsigned find_get_pages_tag(struct address_space *mapping,
-			pgoff_t *index, xa_mark_t tag, unsigned int nr_pages,
-			struct page **pages)
-{
-	return find_get_pages_range_tag(mapping, index, (pgoff_t)-1, tag,
-					nr_pages, pages);
-}
 
 struct page *grab_cache_page_write_begin(struct address_space *mapping,
 			pgoff_t index);
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 215eb6c3bdc9..a520632297ac 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -26,14 +26,6 @@ struct pagevec {
 };
 
 void __pagevec_release(struct pagevec *pvec);
-unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
-		struct address_space *mapping, pgoff_t *index, pgoff_t end,
-		xa_mark_t tag);
-static inline unsigned pagevec_lookup_tag(struct pagevec *pvec,
-		struct address_space *mapping, pgoff_t *index, xa_mark_t tag)
-{
-	return pagevec_lookup_range_tag(pvec, mapping, index, (pgoff_t)-1, tag);
-}
 
 static inline void pagevec_init(struct pagevec *pvec)
 {
diff --git a/mm/filemap.c b/mm/filemap.c
index d78d62a7e44a..f303b8bd7dfa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2317,66 +2317,6 @@ unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
 }
 EXPORT_SYMBOL(filemap_get_folios_tag);
 
-/**
- * find_get_pages_range_tag - Find and return head pages matching @tag.
- * @mapping:	the address_space to search
- * @index:	the starting page index
- * @end:	The final page index (inclusive)
- * @tag:	the tag index
- * @nr_pages:	the maximum number of pages
- * @pages:	where the resulting pages are placed
- *
- * Like find_get_pages_range(), except we only return head pages which are
- * tagged with @tag.  @index is updated to the index immediately after the
- * last page we return, ready for the next iteration.
- *
- * Return: the number of pages which were found.
- */
-unsigned find_get_pages_range_tag(struct address_space *mapping, pgoff_t *index,
-			pgoff_t end, xa_mark_t tag, unsigned int nr_pages,
-			struct page **pages)
-{
-	XA_STATE(xas, &mapping->i_pages, *index);
-	struct folio *folio;
-	unsigned ret = 0;
-
-	if (unlikely(!nr_pages))
-		return 0;
-
-	rcu_read_lock();
-	while ((folio = find_get_entry(&xas, end, tag))) {
-		/*
-		 * Shadow entries should never be tagged, but this iteration
-		 * is lockless so there is a window for page reclaim to evict
-		 * a page we saw tagged.  Skip over it.
-		 */
-		if (xa_is_value(folio))
-			continue;
-
-		pages[ret] = &folio->page;
-		if (++ret == nr_pages) {
-			*index = folio->index + folio_nr_pages(folio);
-			goto out;
-		}
-	}
-
-	/*
-	 * We come here when we got to @end. We take care to not overflow the
-	 * index @index as it confuses some of the callers. This breaks the
-	 * iteration when there is a page at index -1 but that is already
-	 * broken anyway.
-	 */
-	if (end == (pgoff_t)-1)
-		*index = (pgoff_t)-1;
-	else
-		*index = end + 1;
-out:
-	rcu_read_unlock();
-
-	return ret;
-}
-EXPORT_SYMBOL(find_get_pages_range_tag);
-
 /*
  * CD/DVDs are error prone. When a medium error occurs, the driver may fail
  * a _large_ part of the i/o request. Imagine the worst scenario:
diff --git a/mm/swap.c b/mm/swap.c
index 955930f41d20..89351b6dd149 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1098,16 +1098,6 @@ void folio_batch_remove_exceptionals(struct folio_batch *fbatch)
 	fbatch->nr = j;
 }
 
-unsigned pagevec_lookup_range_tag(struct pagevec *pvec,
-		struct address_space *mapping, pgoff_t *index, pgoff_t end,
-		xa_mark_t tag)
-{
-	pvec->nr = find_get_pages_range_tag(mapping, index, end, tag,
-					PAGEVEC_SIZE, pvec->pages);
-	return pagevec_count(pvec);
-}
-EXPORT_SYMBOL(pagevec_lookup_range_tag);
-
 /*
  * Perform any setup for the swap system
  */
-- 
2.36.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 17/23] gfs2: Convert gfs2_write_cache_jdata() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 17/23] gfs2: Convert gfs2_write_cache_jdata() " Vishal Moola (Oracle)
@ 2022-10-24 19:23   ` Vishal Moola
  0 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola @ 2022-10-24 19:23 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, agruenba, linux-kernel,
	linux-f2fs-devel, cluster-devel, linux-mm, rpeterso, ceph-devel,
	linux-ext4, linux-afs, linux-btrfs

On Mon, Oct 17, 2022 at 1:25 PM Vishal Moola (Oracle)
<vishal.moola@gmail.com> wrote:
>
> Converted function to use folios throughout. This is in preparation for
> the removal of find_get_pages_range_tag().
>
> Also had to modify and rename gfs2_write_jdata_pagevec() to take in
> and utilize folio_batch rather than pagevec and use folios rather
> than pages. gfs2_write_jdata_batch() now supports large folios.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  fs/gfs2/aops.c | 64 +++++++++++++++++++++++++++-----------------------
>  1 file changed, 35 insertions(+), 29 deletions(-)
>
> diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
> index 05bee80ac7de..8f87c2551a3d 100644
> --- a/fs/gfs2/aops.c
> +++ b/fs/gfs2/aops.c
> @@ -195,67 +195,71 @@ static int gfs2_writepages(struct address_space *mapping,
>  }
>
>  /**
> - * gfs2_write_jdata_pagevec - Write back a pagevec's worth of pages
> + * gfs2_write_jdata_batch - Write back a folio batch's worth of folios
>   * @mapping: The mapping
>   * @wbc: The writeback control
> - * @pvec: The vector of pages
> - * @nr_pages: The number of pages to write
> + * @fbatch: The batch of folios
>   * @done_index: Page index
>   *
>   * Returns: non-zero if loop should terminate, zero otherwise
>   */
>
> -static int gfs2_write_jdata_pagevec(struct address_space *mapping,
> +static int gfs2_write_jdata_batch(struct address_space *mapping,
>                                     struct writeback_control *wbc,
> -                                   struct pagevec *pvec,
> -                                   int nr_pages,
> +                                   struct folio_batch *fbatch,
>                                     pgoff_t *done_index)
>  {
>         struct inode *inode = mapping->host;
>         struct gfs2_sbd *sdp = GFS2_SB(inode);
> -       unsigned nrblocks = nr_pages * (PAGE_SIZE >> inode->i_blkbits);
> +       unsigned nrblocks;
>         int i;
>         int ret;
> +       int nr_pages = 0;
> +       int nr_folios = folio_batch_count(fbatch);
> +
> +       for (i = 0; i < nr_folios; i++)
> +               nr_pages += folio_nr_pages(fbatch->folios[i]);
> +       nrblocks = nr_pages * (PAGE_SIZE >> inode->i_blkbits);
>
>         ret = gfs2_trans_begin(sdp, nrblocks, nrblocks);
>         if (ret < 0)
>                 return ret;
>
> -       for(i = 0; i < nr_pages; i++) {
> -               struct page *page = pvec->pages[i];
> +       for (i = 0; i < nr_folios; i++) {
> +               struct folio *folio = fbatch->folios[i];
>
> -               *done_index = page->index;
> +               *done_index = folio->index;
>
> -               lock_page(page);
> +               folio_lock(folio);
>
> -               if (unlikely(page->mapping != mapping)) {
> +               if (unlikely(folio->mapping != mapping)) {
>  continue_unlock:
> -                       unlock_page(page);
> +                       folio_unlock(folio);
>                         continue;
>                 }
>
> -               if (!PageDirty(page)) {
> +               if (!folio_test_dirty(folio)) {
>                         /* someone wrote it for us */
>                         goto continue_unlock;
>                 }
>
> -               if (PageWriteback(page)) {
> +               if (folio_test_writeback(folio)) {
>                         if (wbc->sync_mode != WB_SYNC_NONE)
> -                               wait_on_page_writeback(page);
> +                               folio_wait_writeback(folio);
>                         else
>                                 goto continue_unlock;
>                 }
>
> -               BUG_ON(PageWriteback(page));
> -               if (!clear_page_dirty_for_io(page))
> +               BUG_ON(folio_test_writeback(folio));
> +               if (!folio_clear_dirty_for_io(folio))
>                         goto continue_unlock;
>
>                 trace_wbc_writepage(wbc, inode_to_bdi(inode));
>
> -               ret = __gfs2_jdata_writepage(page, wbc);
> +               ret = __gfs2_jdata_writepage(&folio->page, wbc);
>                 if (unlikely(ret)) {
>                         if (ret == AOP_WRITEPAGE_ACTIVATE) {
> -                               unlock_page(page);
> +                               folio_unlock(folio);
>                                 ret = 0;
>                         } else {
>
> @@ -268,7 +272,8 @@ static int gfs2_write_jdata_pagevec(struct address_space *mapping,
>                                  * not be suitable for data integrity
>                                  * writeout).
>                                  */
> -                               *done_index = page->index + 1;
> +                               *done_index = folio->index +
> +                                       folio_nr_pages(folio);
>                                 ret = 1;
>                                 break;
>                         }
> @@ -305,8 +310,8 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
>  {
>         int ret = 0;
>         int done = 0;
> -       struct pagevec pvec;
> -       int nr_pages;
> +       struct folio_batch fbatch;
> +       int nr_folios;
>         pgoff_t writeback_index;
>         pgoff_t index;
>         pgoff_t end;
> @@ -315,7 +320,7 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
>         int range_whole = 0;
>         xa_mark_t tag;
>
> -       pagevec_init(&pvec);
> +       folio_batch_init(&fbatch);
>         if (wbc->range_cyclic) {
>                 writeback_index = mapping->writeback_index; /* prev offset */
>                 index = writeback_index;
> @@ -341,17 +346,18 @@ static int gfs2_write_cache_jdata(struct address_space *mapping,
>                 tag_pages_for_writeback(mapping, index, end);
>         done_index = index;
>         while (!done && (index <= end)) {
> -               nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
> -                               tag);
> -               if (nr_pages == 0)
> +               nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +                               tag, &fbatch);
> +               if (nr_folios == 0)
>                         break;
>
> -               ret = gfs2_write_jdata_pagevec(mapping, wbc, &pvec, nr_pages, &done_index);
> +               ret = gfs2_write_jdata_batch(mapping, wbc, &fbatch,
> +                               &done_index);
>                 if (ret)
>                         done = 1;
>                 if (ret > 0)
>                         ret = 0;
> -               pagevec_release(&pvec);
> +               folio_batch_release(&fbatch);
>                 cond_resched();
>         }
>
> --
> 2.36.1
>

Would anyone familiar with gfs2 have time to look over this patch (17/23)?
I've cc'd the gfs2 maintainers; feedback would be appreciated.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 10/23] ext4: Convert mpage_prepare_extent_to_map() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 10/23] ext4: Convert mpage_prepare_extent_to_map() " Vishal Moola (Oracle)
@ 2022-10-24 19:26   ` Vishal Moola
  0 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola @ 2022-10-24 19:26 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, tytso, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, ceph-devel, linux-ext4, linux-afs,
	linux-btrfs

On Mon, Oct 17, 2022 at 1:25 PM Vishal Moola (Oracle)
<vishal.moola@gmail.com> wrote:
>
> Converted the function to use folios throughout. This is in preparation
> for the removal of find_get_pages_range_tag(). Now supports large
> folios.
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  fs/ext4/inode.c | 55 ++++++++++++++++++++++++-------------------------
>  1 file changed, 27 insertions(+), 28 deletions(-)
>
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 2b5ef1b64249..69a0708c8e87 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2572,8 +2572,8 @@ static int ext4_da_writepages_trans_blocks(struct inode *inode)
>  static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>  {
>         struct address_space *mapping = mpd->inode->i_mapping;
> -       struct pagevec pvec;
> -       unsigned int nr_pages;
> +       struct folio_batch fbatch;
> +       unsigned int nr_folios;
>         long left = mpd->wbc->nr_to_write;
>         pgoff_t index = mpd->first_page;
>         pgoff_t end = mpd->last_page;
> @@ -2587,18 +2587,17 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>                 tag = PAGECACHE_TAG_TOWRITE;
>         else
>                 tag = PAGECACHE_TAG_DIRTY;
> -
> -       pagevec_init(&pvec);
> +       folio_batch_init(&fbatch);
>         mpd->map.m_len = 0;
>         mpd->next_page = index;
>         while (index <= end) {
> -               nr_pages = pagevec_lookup_range_tag(&pvec, mapping, &index, end,
> -                               tag);
> -               if (nr_pages == 0)
> +               nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +                               tag, &fbatch);
> +               if (nr_folios == 0)
>                         break;
>
> -               for (i = 0; i < nr_pages; i++) {
> -                       struct page *page = pvec.pages[i];
> +               for (i = 0; i < nr_folios; i++) {
> +                       struct folio *folio = fbatch.folios[i];
>
>                         /*
>                          * Accumulated enough dirty pages? This doesn't apply
> @@ -2612,10 +2611,10 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>                                 goto out;
>
>                         /* If we can't merge this page, we are done. */
> -                       if (mpd->map.m_len > 0 && mpd->next_page != page->index)
> +                       if (mpd->map.m_len > 0 && mpd->next_page != folio->index)
>                                 goto out;
>
> -                       lock_page(page);
> +                       folio_lock(folio);
>                         /*
>                          * If the page is no longer dirty, or its mapping no
>                          * longer corresponds to inode we are writing (which
> @@ -2623,16 +2622,16 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>                          * page is already under writeback and we are not doing
>                          * a data integrity writeback, skip the page
>                          */
> -                       if (!PageDirty(page) ||
> -                           (PageWriteback(page) &&
> +                       if (!folio_test_dirty(folio) ||
> +                           (folio_test_writeback(folio) &&
>                              (mpd->wbc->sync_mode == WB_SYNC_NONE)) ||
> -                           unlikely(page->mapping != mapping)) {
> -                               unlock_page(page);
> +                           unlikely(folio->mapping != mapping)) {
> +                               folio_unlock(folio);
>                                 continue;
>                         }
>
> -                       wait_on_page_writeback(page);
> -                       BUG_ON(PageWriteback(page));
> +                       folio_wait_writeback(folio);
> +                       BUG_ON(folio_test_writeback(folio));
>
>                         /*
>                          * Should never happen but for buggy code in
> @@ -2643,33 +2642,33 @@ static int mpage_prepare_extent_to_map(struct mpage_da_data *mpd)
>                          *
>                          * [1] https://lore.kernel.org/linux-mm/20180103100430.GE4911@quack2.suse.cz
>                          */
> -                       if (!page_has_buffers(page)) {
> -                               ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", page->index);
> -                               ClearPageDirty(page);
> -                               unlock_page(page);
> +                       if (!folio_buffers(folio)) {
> +                               ext4_warning_inode(mpd->inode, "page %lu does not have buffers attached", folio->index);
> +                               folio_clear_dirty(folio);
> +                               folio_unlock(folio);
>                                 continue;
>                         }
>
>                         if (mpd->map.m_len == 0)
> -                               mpd->first_page = page->index;
> -                       mpd->next_page = page->index + 1;
> +                               mpd->first_page = folio->index;
> +                       mpd->next_page = folio->index + folio_nr_pages(folio);
>                         /* Add all dirty buffers to mpd */
> -                       lblk = ((ext4_lblk_t)page->index) <<
> +                       lblk = ((ext4_lblk_t)folio->index) <<
>                                 (PAGE_SHIFT - blkbits);
> -                       head = page_buffers(page);
> +                       head = folio_buffers(folio);
>                         err = mpage_process_page_bufs(mpd, head, head, lblk);
>                         if (err <= 0)
>                                 goto out;
>                         err = 0;
> -                       left--;
> +                       left -= folio_nr_pages(folio);
>                 }
> -               pagevec_release(&pvec);
> +               folio_batch_release(&fbatch);
>                 cond_resched();
>         }
>         mpd->scanned_until_end = 1;
>         return 0;
>  out:
> -       pagevec_release(&pvec);
> +       folio_batch_release(&fbatch);
>         return err;
>  }
>
> --
> 2.36.1
>

Does anyone have some time to look over this ext4 patch this week?
Feedback is appreciated.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() " Vishal Moola (Oracle)
@ 2022-10-24 19:31   ` Vishal Moola
  2022-11-10 18:51     ` Vishal Moola
  2022-10-29  4:46   ` Chao Yu
  1 sibling, 1 reply; 60+ messages in thread
From: Vishal Moola @ 2022-10-24 19:31 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, jaegeuk, ceph-devel, linux-ext4,
	linux-afs, linux-btrfs

On Mon, Oct 17, 2022 at 1:25 PM Vishal Moola (Oracle)
<vishal.moola@gmail.com> wrote:
>
> Convert function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  fs/f2fs/node.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index 983572f23896..e8b72336c096 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1728,12 +1728,12 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>                         unsigned int *seq_id)
>  {
>         pgoff_t index;
> -       struct pagevec pvec;
> +       struct folio_batch fbatch;
>         int ret = 0;
>         struct page *last_page = NULL;
>         bool marked = false;
>         nid_t ino = inode->i_ino;
> -       int nr_pages;
> +       int nr_folios;
>         int nwritten = 0;
>
>         if (atomic) {
> @@ -1742,20 +1742,21 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>                         return PTR_ERR_OR_ZERO(last_page);
>         }
>  retry:
> -       pagevec_init(&pvec);
> +       folio_batch_init(&fbatch);
>         index = 0;
>
> -       while ((nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
> -                               PAGECACHE_TAG_DIRTY))) {
> +       while ((nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi), &index,
> +                                       (pgoff_t)-1, PAGECACHE_TAG_DIRTY,
> +                                       &fbatch))) {
>                 int i;
>
> -               for (i = 0; i < nr_pages; i++) {
> -                       struct page *page = pvec.pages[i];
> +               for (i = 0; i < nr_folios; i++) {
> +                       struct page *page = &fbatch.folios[i]->page;
>                         bool submitted = false;
>
>                         if (unlikely(f2fs_cp_error(sbi))) {
>                                 f2fs_put_page(last_page, 0);
> -                               pagevec_release(&pvec);
> +                               folio_batch_release(&fbatch);
>                                 ret = -EIO;
>                                 goto out;
>                         }
> @@ -1821,7 +1822,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>                                 break;
>                         }
>                 }
> -               pagevec_release(&pvec);
> +               folio_batch_release(&fbatch);
>                 cond_resched();
>
>                 if (ret || marked)
> --
> 2.36.1
>

Following up on these f2fs patches (11/23, 12/23, 13/23, 14/23, 15/23,
16/23). Does anyone have time to review them this week?



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
@ 2022-10-24 19:36   ` Vishal Moola
  2022-10-24 19:38   ` Matthew Wilcox
  1 sibling, 0 replies; 60+ messages in thread
From: Vishal Moola @ 2022-10-24 19:36 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, linux-kernel, Matthew Wilcox,
	linux-f2fs-devel, cluster-devel, linux-mm, ceph-devel,
	linux-ext4, linux-afs, linux-btrfs

On Mon, Oct 17, 2022 at 1:24 PM Vishal Moola (Oracle)
<vishal.moola@gmail.com> wrote:
>
> Add function filemap_grab_folio() to grab a folio from the page cache.
> This function is meant to serve as a folio replacement for
> grab_cache_page, and is used to facilitate the removal of
> find_get_pages_range_tag().
>
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  include/linux/pagemap.h | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index bbccb4044222..74d87e37a142 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -547,6 +547,26 @@ static inline struct folio *filemap_lock_folio(struct address_space *mapping,
>         return __filemap_get_folio(mapping, index, FGP_LOCK, 0);
>  }
>
> +/**
> + * filemap_grab_folio - grab a folio from the page cache
> + * @mapping: The address space to search
> + * @index: The page index
> + *
> + * Looks up the page cache entry at @mapping & @index. If no folio is found,
> + * a new folio is created. The folio is locked, marked as accessed, and
> + * returned.
> + *
> + * Return: A found or created folio. NULL if no folio is found and failed to
> + * create a folio.
> + */
> +static inline struct folio *filemap_grab_folio(struct address_space *mapping,
> +                                       pgoff_t index)
> +{
> +       return __filemap_get_folio(mapping, index,
> +                       FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
> +                       mapping_gfp_mask(mapping));
> +}
> +
>  /**
>   * find_get_page - find and get a page reference
>   * @mapping: the address_space to search
> --
> 2.36.1
>

Following up on the filemap-related patches (01/23, 02/23, 03/23, 04/23),
does anyone have time to review them this week?



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
  2022-10-24 19:36   ` Vishal Moola
@ 2022-10-24 19:38   ` Matthew Wilcox
  1 sibling, 0 replies; 60+ messages in thread
From: Matthew Wilcox @ 2022-10-24 19:38 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, linux-fsdevel, ceph-devel, linux-ext4,
	linux-afs, linux-btrfs

On Mon, Oct 17, 2022 at 01:24:29PM -0700, Vishal Moola (Oracle) wrote:
> Add function filemap_grab_folio() to grab a folio from the page cache.
> This function is meant to serve as a folio replacement for
> grab_cache_page, and is used to facilitate the removal of
> find_get_pages_range_tag().

I'm still not loving the name, but it does have historical precedent
and I can't think of a better one.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
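
For anyone picking this up from the archive later, a minimal usage sketch of
the new helper (a hypothetical caller, not code from the series):

static int example_touch_index(struct address_space *mapping, pgoff_t index)
{
        struct folio *folio = filemap_grab_folio(mapping, index);

        if (!folio)
                return -ENOMEM;
        /* the folio comes back locked and marked accessed */
        /* ... fill or examine it here ... */
        folio_unlock(folio);
        folio_put(folio);
        return 0;
}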



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 02/23] filemap: Added filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 02/23] filemap: Added filemap_get_folios_tag() Vishal Moola (Oracle)
@ 2022-10-24 19:42   ` Matthew Wilcox
  0 siblings, 0 replies; 60+ messages in thread
From: Matthew Wilcox @ 2022-10-24 19:42 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, linux-fsdevel, ceph-devel, linux-ext4,
	linux-afs, linux-btrfs

On Mon, Oct 17, 2022 at 01:24:30PM -0700, Vishal Moola (Oracle) wrote:
> This is the equivalent of find_get_pages_range_tag(), except for folios
> instead of pages.
> 
> One notable difference is that filemap_get_folios_tag() does not take in
> a maximum pages argument. It instead tries to fill a folio batch and stops
> either once full (15 folios) or upon reaching the end of the search range.
> 
> The new function supports large folios; the initial function did not,
> since none of its callers use large folios.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

> +/**
> + * filemap_get_folios_tag - Get a batch of folios matching @tag.
> + * @mapping:    The address_space to search
> + * @start:      The starting page index
> + * @end:        The final page index (inclusive)
> + * @tag:        The tag index
> + * @fbatch:     The batch to fill
> + *
> + * Same as filemap_get_folios, but only returning folios tagged with @tag

If you add () after filemap_get_folios, it turns into a nice link in
the html documentation.

> + *
> + * Return: The number of folios found

Missing full stop at the end of this line.

> + * Also update @start to index the next folio for traversal

Ditto.
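
With those nits applied, the doc block would read roughly:

 * Same as filemap_get_folios(), but only returning folios tagged with @tag.
 *
 * Return: The number of folios found.
 * Also update @start to index the next folio for traversal.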

> + */
> +unsigned filemap_get_folios_tag(struct address_space *mapping, pgoff_t *start,
> +			pgoff_t end, xa_mark_t tag, struct folio_batch *fbatch)
> +{
> +	XA_STATE(xas, &mapping->i_pages, *start);
> +	struct folio *folio;
> +
> +	rcu_read_lock();
> +	while ((folio = find_get_entry(&xas, end, tag)) != NULL) {
> +		/* Shadow entries should never be tagged, but this iteration
> +		 * is lockless so there is a window for page reclaim to evict
> +		 * a page we saw tagged. Skip over it.
> +		 */

For multiline comments, the "/*" should be on a line by itself.
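
For readers skimming the thread, the caller-side pattern that the rest of the
series converts to looks roughly like this (a simplified sketch; locking
details and error handling vary per filesystem):

        struct folio_batch fbatch;
        pgoff_t index = 0;
        unsigned int nr, i;

        folio_batch_init(&fbatch);
        while ((nr = filemap_get_folios_tag(mapping, &index, (pgoff_t)-1,
                                        PAGECACHE_TAG_DIRTY, &fbatch))) {
                for (i = 0; i < nr; i++) {
                        struct folio *folio = fbatch.folios[i];

                        /* each folio in the batch carries a reference */
                        folio_lock(folio);
                        /* ... write the folio back ... */
                        folio_unlock(folio);
                }
                folio_batch_release(&fbatch);   /* drops those references */
                cond_resched();
        }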




^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag() Vishal Moola (Oracle)
@ 2022-10-24 20:06   ` Matthew Wilcox
  0 siblings, 0 replies; 60+ messages in thread
From: Matthew Wilcox @ 2022-10-24 20:06 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, linux-fsdevel, ceph-devel, linux-ext4,
	linux-afs, linux-btrfs

On Mon, Oct 17, 2022 at 01:24:31PM -0700, Vishal Moola (Oracle) wrote:
> Converted function to use folios. This is in preparation for the removal
> of find_get_pages_range_tag().

Yes, it is, but this patch also has some nice advantages of its own:

 - Removes a call to wait_on_page_writeback(), which removes a call
   to compound_head()
 - Removes a call to ClearPageError(), which removes another call
   to compound_head()
 - Removes a call to pagevec_release(), which will eventually
   remove a third call to compound_head() (it doesn't today, but
   one day ...)

So you can definitely say that it removes 50 bytes of text and two
calls to compound_head().  And that way, this patch justifies its
existence by itself ;-)
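
To spell that out a bit (paraphrasing the wrappers from memory, so treat the
exact expansions as approximate): the page-based helpers each have to find
the head page before they can do anything, while the folio-based ones already
have it in hand.

        /* old: each wrapper hides a head-page lookup */
        wait_on_page_writeback(page);   /* ~ folio_wait_writeback(page_folio(page)) */
        ClearPageError(page);           /* ~ clears the flag on compound_head(page) */

        /* new: operate on the folio directly, no lookup needed */
        folio_wait_writeback(folio);
        folio_clear_error(folio);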

> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 04/23] page-writeback: Convert write_cache_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 04/23] page-writeback: Convert write_cache_pages() " Vishal Moola (Oracle)
@ 2022-10-24 20:12   ` Matthew Wilcox
  0 siblings, 0 replies; 60+ messages in thread
From: Matthew Wilcox @ 2022-10-24 20:12 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, linux-fsdevel, ceph-devel, linux-ext4,
	linux-afs, linux-btrfs

On Mon, Oct 17, 2022 at 01:24:32PM -0700, Vishal Moola (Oracle) wrote:
> Converted function to use folios throughout. This is in preparation for
> the removal of find_get_pages_range_tag().

And removes eight calls to compound_head(), saving 296 bytes of kernel
text (!). It also adds support for large folios to this function.
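
(Presumably the large-folio part boils down to accounting in units of
folio_nr_pages() instead of assuming one page per iteration; something like
the following, which is my paraphrase rather than a quote of the patch:)

                unsigned long nr = folio_nr_pages(folio);

                error = writepage(&folio->page, wbc, data);
                /* ... error handling unchanged ... */
                wbc->nr_to_write -= nr;         /* previously: wbc->nr_to_write--; */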

> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 08/23] ceph: Convert ceph_writepages_start() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 08/23] ceph: Convert ceph_writepages_start() " Vishal Moola (Oracle)
@ 2022-10-28 17:20   ` Jeff Layton
  0 siblings, 0 replies; 60+ messages in thread
From: Jeff Layton @ 2022-10-28 17:20 UTC (permalink / raw)
  To: Vishal Moola (Oracle), linux-fsdevel, David Howells
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, ceph-devel, linux-ext4, linux-afs,
	linux-btrfs

On Mon, 2022-10-17 at 13:24 -0700, Vishal Moola (Oracle) wrote:
> Convert function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
> 
> Also some minor renaming for consistency.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  fs/ceph/addr.c | 58 ++++++++++++++++++++++++++------------------------
>  1 file changed, 30 insertions(+), 28 deletions(-)
> 
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index dcf701b05cc1..d2361d51db39 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -792,7 +792,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>  	struct ceph_vino vino = ceph_vino(inode);
>  	pgoff_t index, start_index, end = -1;
>  	struct ceph_snap_context *snapc = NULL, *last_snapc = NULL, *pgsnapc;
> -	struct pagevec pvec;
> +	struct folio_batch fbatch;
>  	int rc = 0;
>  	unsigned int wsize = i_blocksize(inode);
>  	struct ceph_osd_request *req = NULL;
> @@ -821,7 +821,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>  	if (fsc->mount_options->wsize < wsize)
>  		wsize = fsc->mount_options->wsize;
>  
> -	pagevec_init(&pvec);
> +	folio_batch_init(&fbatch);
>  
>  	start_index = wbc->range_cyclic ? mapping->writeback_index : 0;
>  	index = start_index;
> @@ -869,7 +869,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>  
>  	while (!done && index <= end) {
>  		int num_ops = 0, op_idx;
> -		unsigned i, pvec_pages, max_pages, locked_pages = 0;
> +		unsigned i, nr_folios, max_pages, locked_pages = 0;
>  		struct page **pages = NULL, **data_pages;
>  		struct page *page;
>  		pgoff_t strip_unit_end = 0;
> @@ -879,13 +879,13 @@ static int ceph_writepages_start(struct address_space *mapping,
>  		max_pages = wsize >> PAGE_SHIFT;
>  
>  get_more_pages:
> -		pvec_pages = pagevec_lookup_range_tag(&pvec, mapping, &index,
> -						end, PAGECACHE_TAG_DIRTY);
> -		dout("pagevec_lookup_range_tag got %d\n", pvec_pages);
> -		if (!pvec_pages && !locked_pages)
> +		nr_folios = filemap_get_folios_tag(mapping, &index,
> +				end, PAGECACHE_TAG_DIRTY, &fbatch);
> +		dout("pagevec_lookup_range_tag got %d\n", nr_folios);
> +		if (!nr_folios && !locked_pages)
>  			break;
> -		for (i = 0; i < pvec_pages && locked_pages < max_pages; i++) {
> -			page = pvec.pages[i];
> +		for (i = 0; i < nr_folios && locked_pages < max_pages; i++) {
> +			page = &fbatch.folios[i]->page;
>  			dout("? %p idx %lu\n", page, page->index);
>  			if (locked_pages == 0)
>  				lock_page(page);  /* first page */
> @@ -995,7 +995,7 @@ static int ceph_writepages_start(struct address_space *mapping,
>  				len = 0;
>  			}
>  
> -			/* note position of first page in pvec */
> +			/* note position of first page in fbatch */
>  			dout("%p will write page %p idx %lu\n",
>  			     inode, page, page->index);
>  
> @@ -1005,30 +1005,30 @@ static int ceph_writepages_start(struct address_space *mapping,
>  				fsc->write_congested = true;
>  
>  			pages[locked_pages++] = page;
> -			pvec.pages[i] = NULL;
> +			fbatch.folios[i] = NULL;
>  
>  			len += thp_size(page);
>  		}
>  
>  		/* did we get anything? */
>  		if (!locked_pages)
> -			goto release_pvec_pages;
> +			goto release_folios;
>  		if (i) {
>  			unsigned j, n = 0;
> -			/* shift unused page to beginning of pvec */
> -			for (j = 0; j < pvec_pages; j++) {
> -				if (!pvec.pages[j])
> +			/* shift unused page to beginning of fbatch */
> +			for (j = 0; j < nr_folios; j++) {
> +				if (!fbatch.folios[j])
>  					continue;
>  				if (n < j)
> -					pvec.pages[n] = pvec.pages[j];
> +					fbatch.folios[n] = fbatch.folios[j];
>  				n++;
>  			}
> -			pvec.nr = n;
> +			fbatch.nr = n;
>  
> -			if (pvec_pages && i == pvec_pages &&
> +			if (nr_folios && i == nr_folios &&
>  			    locked_pages < max_pages) {
> -				dout("reached end pvec, trying for more\n");
> -				pagevec_release(&pvec);
> +				dout("reached end fbatch, trying for more\n");
> +				folio_batch_release(&fbatch);
>  				goto get_more_pages;
>  			}
>  		}
> @@ -1164,10 +1164,10 @@ static int ceph_writepages_start(struct address_space *mapping,
>  		if (wbc->nr_to_write <= 0 && wbc->sync_mode == WB_SYNC_NONE)
>  			done = true;
>  
> -release_pvec_pages:
> -		dout("pagevec_release on %d pages (%p)\n", (int)pvec.nr,
> -		     pvec.nr ? pvec.pages[0] : NULL);
> -		pagevec_release(&pvec);
> +release_folios:
> +		dout("folio_batch release on %d folios (%p)\n", (int)fbatch.nr,
> +		     fbatch.nr ? fbatch.folios[0] : NULL);
> +		folio_batch_release(&fbatch);
>  	}
>  
>  	if (should_loop && !done) {
> @@ -1184,15 +1184,17 @@ static int ceph_writepages_start(struct address_space *mapping,
>  			unsigned i, nr;
>  			index = 0;
>  			while ((index <= end) &&
> -			       (nr = pagevec_lookup_tag(&pvec, mapping, &index,
> -						PAGECACHE_TAG_WRITEBACK))) {
> +			       (nr = filemap_get_folios_tag(mapping, &index,
> +						(pgoff_t)-1,
> +						PAGECACHE_TAG_WRITEBACK,
> +						&fbatch))) {
>  				for (i = 0; i < nr; i++) {
> -					page = pvec.pages[i];
> +					page = &fbatch.folios[i]->page;
>  					if (page_snap_context(page) != snapc)
>  						continue;
>  					wait_on_page_writeback(page);
>  				}
> -				pagevec_release(&pvec);
> +				folio_batch_release(&fbatch);
>  				cond_resched();
>  			}
>  		}

I took a brief look and this looks like a fairly straightforward
conversion. It definitely needs testing, however.

The hope was to get ceph converted over to using the netfs write
helpers, but that's taking a lot longer than expected. It's really up to
Xiubo at this point, but I don't have an issue in principle with taking
this patch in before the netfs conversion, particularly if it's blocking
other work.

Acked-by: Jeff Layton <jlayton@kernel.org>



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() " Vishal Moola (Oracle)
  2022-10-24 19:31   ` Vishal Moola
@ 2022-10-29  4:46   ` Chao Yu
  1 sibling, 0 replies; 60+ messages in thread
From: Chao Yu @ 2022-10-29  4:46 UTC (permalink / raw)
  To: Vishal Moola (Oracle), linux-fsdevel
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, ceph-devel, linux-ext4, linux-afs,
	linux-btrfs

On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> Convert function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Acked-by: Chao Yu <chao@kernel.org>

Thanks,

> ---
>   fs/f2fs/node.c | 19 ++++++++++---------
>   1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index 983572f23896..e8b72336c096 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1728,12 +1728,12 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>   			unsigned int *seq_id)
>   {
>   	pgoff_t index;
> -	struct pagevec pvec;
> +	struct folio_batch fbatch;
>   	int ret = 0;
>   	struct page *last_page = NULL;
>   	bool marked = false;
>   	nid_t ino = inode->i_ino;
> -	int nr_pages;
> +	int nr_folios;
>   	int nwritten = 0;
>   
>   	if (atomic) {
> @@ -1742,20 +1742,21 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>   			return PTR_ERR_OR_ZERO(last_page);
>   	}
>   retry:
> -	pagevec_init(&pvec);
> +	folio_batch_init(&fbatch);
>   	index = 0;
>   
> -	while ((nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
> -				PAGECACHE_TAG_DIRTY))) {
> +	while ((nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi), &index,
> +					(pgoff_t)-1, PAGECACHE_TAG_DIRTY,
> +					&fbatch))) {
>   		int i;
>   
> -		for (i = 0; i < nr_pages; i++) {
> -			struct page *page = pvec.pages[i];
> +		for (i = 0; i < nr_folios; i++) {
> +			struct page *page = &fbatch.folios[i]->page;
>   			bool submitted = false;
>   
>   			if (unlikely(f2fs_cp_error(sbi))) {
>   				f2fs_put_page(last_page, 0);
> -				pagevec_release(&pvec);
> +				folio_batch_release(&fbatch);
>   				ret = -EIO;
>   				goto out;
>   			}
> @@ -1821,7 +1822,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
>   				break;
>   			}
>   		}
> -		pagevec_release(&pvec);
> +		folio_batch_release(&fbatch);
>   		cond_resched();
>   
>   		if (ret || marked)



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 12/23] f2fs: Convert f2fs_flush_inline_data() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 12/23] f2fs: Convert f2fs_flush_inline_data() " Vishal Moola (Oracle)
@ 2022-10-29  4:47   ` Chao Yu
  0 siblings, 0 replies; 60+ messages in thread
From: Chao Yu @ 2022-10-29  4:47 UTC (permalink / raw)
  To: Vishal Moola (Oracle), linux-fsdevel
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, ceph-devel, linux-ext4, linux-afs,
	linux-btrfs

On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> Convert function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_tag().
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Acked-by: Chao Yu <chao@kernel.org>

Thanks,

> ---
>   fs/f2fs/node.c | 17 +++++++++--------
>   1 file changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index e8b72336c096..a2f477cc48c7 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1887,17 +1887,18 @@ static bool flush_dirty_inode(struct page *page)
>   void f2fs_flush_inline_data(struct f2fs_sb_info *sbi)
>   {
>   	pgoff_t index = 0;
> -	struct pagevec pvec;
> -	int nr_pages;
> +	struct folio_batch fbatch;
> +	int nr_folios;
>   
> -	pagevec_init(&pvec);
> +	folio_batch_init(&fbatch);
>   
> -	while ((nr_pages = pagevec_lookup_tag(&pvec,
> -			NODE_MAPPING(sbi), &index, PAGECACHE_TAG_DIRTY))) {
> +	while ((nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi), &index,
> +					(pgoff_t)-1, PAGECACHE_TAG_DIRTY,
> +					&fbatch))) {
>   		int i;
>   
> -		for (i = 0; i < nr_pages; i++) {
> -			struct page *page = pvec.pages[i];
> +		for (i = 0; i < nr_folios; i++) {
> +			struct page *page = &fbatch.folios[i]->page;
>   
>   			if (!IS_DNODE(page))
>   				continue;
> @@ -1924,7 +1925,7 @@ void f2fs_flush_inline_data(struct f2fs_sb_info *sbi)
>   			}
>   			unlock_page(page);
>   		}
> -		pagevec_release(&pvec);
> +		folio_batch_release(&fbatch);
>   		cond_resched();
>   	}
>   }



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 13/23] f2fs: Convert f2fs_sync_node_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 13/23] f2fs: Convert f2fs_sync_node_pages() " Vishal Moola (Oracle)
@ 2022-10-29  4:47   ` Chao Yu
  0 siblings, 0 replies; 60+ messages in thread
From: Chao Yu @ 2022-10-29  4:47 UTC (permalink / raw)
  To: Vishal Moola (Oracle), linux-fsdevel
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, ceph-devel, linux-ext4, linux-afs,
	linux-btrfs

On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> Convert function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>

Acked-by: Chao Yu <chao@kernel.org>

Thanks,

> ---
>   fs/f2fs/node.c | 17 +++++++++--------
>   1 file changed, 9 insertions(+), 8 deletions(-)
> 
> diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> index a2f477cc48c7..38f32b4d61dc 100644
> --- a/fs/f2fs/node.c
> +++ b/fs/f2fs/node.c
> @@ -1935,23 +1935,24 @@ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
>   				bool do_balance, enum iostat_type io_type)
>   {
>   	pgoff_t index;
> -	struct pagevec pvec;
> +	struct folio_batch fbatch;
>   	int step = 0;
>   	int nwritten = 0;
>   	int ret = 0;
> -	int nr_pages, done = 0;
> +	int nr_folios, done = 0;
>   
> -	pagevec_init(&pvec);
> +	folio_batch_init(&fbatch);
>   
>   next_step:
>   	index = 0;
>   
> -	while (!done && (nr_pages = pagevec_lookup_tag(&pvec,
> -			NODE_MAPPING(sbi), &index, PAGECACHE_TAG_DIRTY))) {
> +	while (!done && (nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi),
> +				&index, (pgoff_t)-1, PAGECACHE_TAG_DIRTY,
> +				&fbatch))) {
>   		int i;
>   
> -		for (i = 0; i < nr_pages; i++) {
> -			struct page *page = pvec.pages[i];
> +		for (i = 0; i < nr_folios; i++) {
> +			struct page *page = &fbatch.folios[i]->page;
>   			bool submitted = false;
>   
>   			/* give a priority to WB_SYNC threads */
> @@ -2026,7 +2027,7 @@ int f2fs_sync_node_pages(struct f2fs_sb_info *sbi,
>   			if (--wbc->nr_to_write == 0)
>   				break;
>   		}
> -		pagevec_release(&pvec);
> +		folio_batch_release(&fbatch);
>   		cond_resched();
>   
>   		if (wbc->nr_to_write == 0) {



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() to use filemap_get_folios_tag()
  2022-10-24 19:31   ` Vishal Moola
@ 2022-11-10 18:51     ` Vishal Moola
  0 siblings, 0 replies; 60+ messages in thread
From: Vishal Moola @ 2022-11-10 18:51 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-cifs, linux-nilfs, linux-kernel, linux-f2fs-devel,
	cluster-devel, linux-mm, jaegeuk, ceph-devel, linux-ext4,
	linux-afs, linux-btrfs

On Mon, Oct 24, 2022 at 12:31 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>
> On Mon, Oct 17, 2022 at 1:25 PM Vishal Moola (Oracle)
> <vishal.moola@gmail.com> wrote:
> >
> > Convert function to use a folio_batch instead of pagevec. This is in
> > preparation for the removal of find_get_pages_range_tag().
> >
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> >  fs/f2fs/node.c | 19 ++++++++++---------
> >  1 file changed, 10 insertions(+), 9 deletions(-)
> >
> > diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
> > index 983572f23896..e8b72336c096 100644
> > --- a/fs/f2fs/node.c
> > +++ b/fs/f2fs/node.c
> > @@ -1728,12 +1728,12 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
> >                         unsigned int *seq_id)
> >  {
> >         pgoff_t index;
> > -       struct pagevec pvec;
> > +       struct folio_batch fbatch;
> >         int ret = 0;
> >         struct page *last_page = NULL;
> >         bool marked = false;
> >         nid_t ino = inode->i_ino;
> > -       int nr_pages;
> > +       int nr_folios;
> >         int nwritten = 0;
> >
> >         if (atomic) {
> > @@ -1742,20 +1742,21 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
> >                         return PTR_ERR_OR_ZERO(last_page);
> >         }
> >  retry:
> > -       pagevec_init(&pvec);
> > +       folio_batch_init(&fbatch);
> >         index = 0;
> >
> > -       while ((nr_pages = pagevec_lookup_tag(&pvec, NODE_MAPPING(sbi), &index,
> > -                               PAGECACHE_TAG_DIRTY))) {
> > +       while ((nr_folios = filemap_get_folios_tag(NODE_MAPPING(sbi), &index,
> > +                                       (pgoff_t)-1, PAGECACHE_TAG_DIRTY,
> > +                                       &fbatch))) {
> >                 int i;
> >
> > -               for (i = 0; i < nr_pages; i++) {
> > -                       struct page *page = pvec.pages[i];
> > +               for (i = 0; i < nr_folios; i++) {
> > +                       struct page *page = &fbatch.folios[i]->page;
> >                         bool submitted = false;
> >
> >                         if (unlikely(f2fs_cp_error(sbi))) {
> >                                 f2fs_put_page(last_page, 0);
> > -                               pagevec_release(&pvec);
> > +                               folio_batch_release(&fbatch);
> >                                 ret = -EIO;
> >                                 goto out;
> >                         }
> > @@ -1821,7 +1822,7 @@ int f2fs_fsync_node_pages(struct f2fs_sb_info *sbi, struct inode *inode,
> >                                 break;
> >                         }
> >                 }
> > -               pagevec_release(&pvec);
> > +               folio_batch_release(&fbatch);
> >                 cond_resched();
> >
> >                 if (ret || marked)
> > --
> > 2.36.1
> >
>
> Following up on these f2fs patches (11/23, 12/23, 13/23, 14/23, 15/23,
> 16/23). Does anyone have time to review them this week?

Chao, thank you for taking a look at some of these patches!
If you have time to look over the remaining patches (14, 15, 16),
feedback on those would be appreciated as well.



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() " Vishal Moola (Oracle)
@ 2022-11-14  7:02   ` Chao Yu
  2022-11-14 21:38     ` Vishal Moola
  2022-11-29 19:14     ` [f2fs-dev] [PATCH v3 14/23] " Matthew Wilcox
  0 siblings, 2 replies; 60+ messages in thread
From: Chao Yu @ 2022-11-14  7:02 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-fsdevel, linux-mm, linux-kernel, linux-f2fs-devel

On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> Converted the function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
> 
> Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> of pagevec. This does NOT support large folios. The function currently

Vishal,

It looks like this patch tries to revert Fengnan's change:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486

How about doing some tests to evaluate its performance effect?

+Cc Fengnan Chang

Thanks,

> only utilizes folios of size 1 so this shouldn't cause any issues right
> now.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>   fs/f2fs/compress.c | 13 +++++----
>   fs/f2fs/data.c     | 69 +++++++++++++++++++++++++---------------------
>   fs/f2fs/f2fs.h     |  5 ++--
>   3 files changed, 47 insertions(+), 40 deletions(-)
> 
> diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> index d315c2de136f..7af6c923e0aa 100644
> --- a/fs/f2fs/compress.c
> +++ b/fs/f2fs/compress.c
> @@ -842,10 +842,11 @@ bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index)
>   	return is_page_in_cluster(cc, index);
>   }
>   
> -bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
> -				int index, int nr_pages, bool uptodate)
> +bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
> +				struct folio_batch *fbatch,
> +				int index, int nr_folios, bool uptodate)
>   {
> -	unsigned long pgidx = pages[index]->index;
> +	unsigned long pgidx = fbatch->folios[index]->index;
>   	int i = uptodate ? 0 : 1;
>   
>   	/*
> @@ -855,13 +856,13 @@ bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
>   	if (uptodate && (pgidx % cc->cluster_size))
>   		return false;
>   
> -	if (nr_pages - index < cc->cluster_size)
> +	if (nr_folios - index < cc->cluster_size)
>   		return false;
>   
>   	for (; i < cc->cluster_size; i++) {
> -		if (pages[index + i]->index != pgidx + i)
> +		if (fbatch->folios[index + i]->index != pgidx + i)
>   			return false;
> -		if (uptodate && !PageUptodate(pages[index + i]))
> +		if (uptodate && !folio_test_uptodate(fbatch->folios[index + i]))
>   			return false;
>   	}
>   
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index a71e818cd67b..7511578b73c3 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2938,7 +2938,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   {
>   	int ret = 0;
>   	int done = 0, retry = 0;
> -	struct page *pages[F2FS_ONSTACK_PAGES];
> +	struct folio_batch fbatch;
>   	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
>   	struct bio *bio = NULL;
>   	sector_t last_block;
> @@ -2959,7 +2959,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   		.private = NULL,
>   	};
>   #endif
> -	int nr_pages;
> +	int nr_folios;
>   	pgoff_t index;
>   	pgoff_t end;		/* Inclusive */
>   	pgoff_t done_index;
> @@ -2969,6 +2969,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   	int submitted = 0;
>   	int i;
>   
> +	folio_batch_init(&fbatch);
> +
>   	if (get_dirty_pages(mapping->host) <=
>   				SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
>   		set_inode_flag(mapping->host, FI_HOT_DATA);
> @@ -2994,13 +2996,13 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   		tag_pages_for_writeback(mapping, index, end);
>   	done_index = index;
>   	while (!done && !retry && (index <= end)) {
> -		nr_pages = find_get_pages_range_tag(mapping, &index, end,
> -				tag, F2FS_ONSTACK_PAGES, pages);
> -		if (nr_pages == 0)
> +		nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +				tag, &fbatch);
> +		if (nr_folios == 0)
>   			break;
>   
> -		for (i = 0; i < nr_pages; i++) {
> -			struct page *page = pages[i];
> +		for (i = 0; i < nr_folios; i++) {
> +			struct folio *folio = fbatch.folios[i];
>   			bool need_readd;
>   readd:
>   			need_readd = false;
> @@ -3017,7 +3019,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   				}
>   
>   				if (!f2fs_cluster_can_merge_page(&cc,
> -								page->index)) {
> +								folio->index)) {
>   					ret = f2fs_write_multi_pages(&cc,
>   						&submitted, wbc, io_type);
>   					if (!ret)
> @@ -3026,27 +3028,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   				}
>   
>   				if (unlikely(f2fs_cp_error(sbi)))
> -					goto lock_page;
> +					goto lock_folio;
>   
>   				if (!f2fs_cluster_is_empty(&cc))
> -					goto lock_page;
> +					goto lock_folio;
>   
>   				if (f2fs_all_cluster_page_ready(&cc,
> -					pages, i, nr_pages, true))
> -					goto lock_page;
> +					&fbatch, i, nr_folios, true))
> +					goto lock_folio;
>   
>   				ret2 = f2fs_prepare_compress_overwrite(
>   							inode, &pagep,
> -							page->index, &fsdata);
> +							folio->index, &fsdata);
>   				if (ret2 < 0) {
>   					ret = ret2;
>   					done = 1;
>   					break;
>   				} else if (ret2 &&
>   					(!f2fs_compress_write_end(inode,
> -						fsdata, page->index, 1) ||
> +						fsdata, folio->index, 1) ||
>   					 !f2fs_all_cluster_page_ready(&cc,
> -						pages, i, nr_pages, false))) {
> +						&fbatch, i, nr_folios,
> +						false))) {
>   					retry = 1;
>   					break;
>   				}
> @@ -3059,46 +3062,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   				break;
>   			}
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
> -lock_page:
> +lock_folio:
>   #endif
> -			done_index = page->index;
> +			done_index = folio->index;
>   retry_write:
> -			lock_page(page);
> +			folio_lock(folio);
>   
> -			if (unlikely(page->mapping != mapping)) {
> +			if (unlikely(folio->mapping != mapping)) {
>   continue_unlock:
> -				unlock_page(page);
> +				folio_unlock(folio);
>   				continue;
>   			}
>   
> -			if (!PageDirty(page)) {
> +			if (!folio_test_dirty(folio)) {
>   				/* someone wrote it for us */
>   				goto continue_unlock;
>   			}
>   
> -			if (PageWriteback(page)) {
> +			if (folio_test_writeback(folio)) {
>   				if (wbc->sync_mode != WB_SYNC_NONE)
> -					f2fs_wait_on_page_writeback(page,
> +					f2fs_wait_on_page_writeback(
> +							&folio->page,
>   							DATA, true, true);
>   				else
>   					goto continue_unlock;
>   			}
>   
> -			if (!clear_page_dirty_for_io(page))
> +			if (!folio_clear_dirty_for_io(folio))
>   				goto continue_unlock;
>   
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>   			if (f2fs_compressed_file(inode)) {
> -				get_page(page);
> -				f2fs_compress_ctx_add_page(&cc, page);
> +				folio_get(folio);
> +				f2fs_compress_ctx_add_page(&cc, &folio->page);
>   				continue;
>   			}
>   #endif
> -			ret = f2fs_write_single_data_page(page, &submitted,
> -					&bio, &last_block, wbc, io_type,
> -					0, true);
> +			ret = f2fs_write_single_data_page(&folio->page,
> +					&submitted, &bio, &last_block,
> +					wbc, io_type, 0, true);
>   			if (ret == AOP_WRITEPAGE_ACTIVATE)
> -				unlock_page(page);
> +				folio_unlock(folio);
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>   result:
>   #endif
> @@ -3122,7 +3126,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   					}
>   					goto next;
>   				}
> -				done_index = page->index + 1;
> +				done_index = folio->index +
> +					folio_nr_pages(folio);
>   				done = 1;
>   				break;
>   			}
> @@ -3136,7 +3141,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   			if (need_readd)
>   				goto readd;
>   		}
> -		release_pages(pages, nr_pages);
> +		folio_batch_release(&fbatch);
>   		cond_resched();
>   	}
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index e6355a5683b7..d7bfb88fa341 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -4226,8 +4226,9 @@ void f2fs_end_read_compressed_page(struct page *page, bool failed,
>   				block_t blkaddr, bool in_task);
>   bool f2fs_cluster_is_empty(struct compress_ctx *cc);
>   bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index);
> -bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
> -				int index, int nr_pages, bool uptodate);
> +bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
> +		struct folio_batch *fbatch, int index, int nr_folios,
> +		bool uptodate);
>   bool f2fs_sanity_check_cluster(struct dnode_of_data *dn);
>   void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page);
>   int f2fs_write_multi_pages(struct compress_ctx *cc,



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-11-14  7:02   ` Chao Yu
@ 2022-11-14 21:38     ` Vishal Moola
  2022-11-23  2:26       ` Vishal Moola
  2022-11-29 19:14     ` [f2fs-dev] [PATCH v3 14/23] " Matthew Wilcox
  1 sibling, 1 reply; 60+ messages in thread
From: Vishal Moola @ 2022-11-14 21:38 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-f2fs-devel

On Sun, Nov 13, 2022 at 11:02 PM Chao Yu <chao@kernel.org> wrote:
>
> On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> > Converted the function to use a folio_batch instead of pagevec. This is in
> > preparation for the removal of find_get_pages_range_tag().
> >
> > Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> > of pagevec. This does NOT support large folios. The function currently
>
> Vishal,
>
> It looks like this patch tries to revert Fengnan's change:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486
>
> How about doing some tests to evaluate its performance effect?

Yeah I'll play around with it to see how much of a difference it makes.

> +Cc Fengnan Chang
>
> Thanks,
>
> > only utilizes folios of size 1 so this shouldn't cause any issues right
> > now.
> >
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> >   fs/f2fs/compress.c | 13 +++++----
> >   fs/f2fs/data.c     | 69 +++++++++++++++++++++++++---------------------
> >   fs/f2fs/f2fs.h     |  5 ++--
> >   3 files changed, 47 insertions(+), 40 deletions(-)
> >
> > diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
> > index d315c2de136f..7af6c923e0aa 100644
> > --- a/fs/f2fs/compress.c
> > +++ b/fs/f2fs/compress.c
> > @@ -842,10 +842,11 @@ bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index)
> >       return is_page_in_cluster(cc, index);
> >   }
> >
> > -bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
> > -                             int index, int nr_pages, bool uptodate)
> > +bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
> > +                             struct folio_batch *fbatch,
> > +                             int index, int nr_folios, bool uptodate)
> >   {
> > -     unsigned long pgidx = pages[index]->index;
> > +     unsigned long pgidx = fbatch->folios[index]->index;
> >       int i = uptodate ? 0 : 1;
> >
> >       /*
> > @@ -855,13 +856,13 @@ bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
> >       if (uptodate && (pgidx % cc->cluster_size))
> >               return false;
> >
> > -     if (nr_pages - index < cc->cluster_size)
> > +     if (nr_folios - index < cc->cluster_size)
> >               return false;
> >
> >       for (; i < cc->cluster_size; i++) {
> > -             if (pages[index + i]->index != pgidx + i)
> > +             if (fbatch->folios[index + i]->index != pgidx + i)
> >                       return false;
> > -             if (uptodate && !PageUptodate(pages[index + i]))
> > +             if (uptodate && !folio_test_uptodate(fbatch->folios[index + i]))
> >                       return false;
> >       }
> >
> > diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> > index a71e818cd67b..7511578b73c3 100644
> > --- a/fs/f2fs/data.c
> > +++ b/fs/f2fs/data.c
> > @@ -2938,7 +2938,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >   {
> >       int ret = 0;
> >       int done = 0, retry = 0;
> > -     struct page *pages[F2FS_ONSTACK_PAGES];
> > +     struct folio_batch fbatch;
> >       struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
> >       struct bio *bio = NULL;
> >       sector_t last_block;
> > @@ -2959,7 +2959,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >               .private = NULL,
> >       };
> >   #endif
> > -     int nr_pages;
> > +     int nr_folios;
> >       pgoff_t index;
> >       pgoff_t end;            /* Inclusive */
> >       pgoff_t done_index;
> > @@ -2969,6 +2969,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >       int submitted = 0;
> >       int i;
> >
> > +     folio_batch_init(&fbatch);
> > +
> >       if (get_dirty_pages(mapping->host) <=
> >                               SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
> >               set_inode_flag(mapping->host, FI_HOT_DATA);
> > @@ -2994,13 +2996,13 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >               tag_pages_for_writeback(mapping, index, end);
> >       done_index = index;
> >       while (!done && !retry && (index <= end)) {
> > -             nr_pages = find_get_pages_range_tag(mapping, &index, end,
> > -                             tag, F2FS_ONSTACK_PAGES, pages);
> > -             if (nr_pages == 0)
> > +             nr_folios = filemap_get_folios_tag(mapping, &index, end,
> > +                             tag, &fbatch);
> > +             if (nr_folios == 0)
> >                       break;
> >
> > -             for (i = 0; i < nr_pages; i++) {
> > -                     struct page *page = pages[i];
> > +             for (i = 0; i < nr_folios; i++) {
> > +                     struct folio *folio = fbatch.folios[i];
> >                       bool need_readd;
> >   readd:
> >                       need_readd = false;
> > @@ -3017,7 +3019,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >                               }
> >
> >                               if (!f2fs_cluster_can_merge_page(&cc,
> > -                                                             page->index)) {
> > +                                                             folio->index)) {
> >                                       ret = f2fs_write_multi_pages(&cc,
> >                                               &submitted, wbc, io_type);
> >                                       if (!ret)
> > @@ -3026,27 +3028,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >                               }
> >
> >                               if (unlikely(f2fs_cp_error(sbi)))
> > -                                     goto lock_page;
> > +                                     goto lock_folio;
> >
> >                               if (!f2fs_cluster_is_empty(&cc))
> > -                                     goto lock_page;
> > +                                     goto lock_folio;
> >
> >                               if (f2fs_all_cluster_page_ready(&cc,
> > -                                     pages, i, nr_pages, true))
> > -                                     goto lock_page;
> > +                                     &fbatch, i, nr_folios, true))
> > +                                     goto lock_folio;
> >
> >                               ret2 = f2fs_prepare_compress_overwrite(
> >                                                       inode, &pagep,
> > -                                                     page->index, &fsdata);
> > +                                                     folio->index, &fsdata);
> >                               if (ret2 < 0) {
> >                                       ret = ret2;
> >                                       done = 1;
> >                                       break;
> >                               } else if (ret2 &&
> >                                       (!f2fs_compress_write_end(inode,
> > -                                             fsdata, page->index, 1) ||
> > +                                             fsdata, folio->index, 1) ||
> >                                        !f2fs_all_cluster_page_ready(&cc,
> > -                                             pages, i, nr_pages, false))) {
> > +                                             &fbatch, i, nr_folios,
> > +                                             false))) {
> >                                       retry = 1;
> >                                       break;
> >                               }
> > @@ -3059,46 +3062,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >                               break;
> >                       }
> >   #ifdef CONFIG_F2FS_FS_COMPRESSION
> > -lock_page:
> > +lock_folio:
> >   #endif
> > -                     done_index = page->index;
> > +                     done_index = folio->index;
> >   retry_write:
> > -                     lock_page(page);
> > +                     folio_lock(folio);
> >
> > -                     if (unlikely(page->mapping != mapping)) {
> > +                     if (unlikely(folio->mapping != mapping)) {
> >   continue_unlock:
> > -                             unlock_page(page);
> > +                             folio_unlock(folio);
> >                               continue;
> >                       }
> >
> > -                     if (!PageDirty(page)) {
> > +                     if (!folio_test_dirty(folio)) {
> >                               /* someone wrote it for us */
> >                               goto continue_unlock;
> >                       }
> >
> > -                     if (PageWriteback(page)) {
> > +                     if (folio_test_writeback(folio)) {
> >                               if (wbc->sync_mode != WB_SYNC_NONE)
> > -                                     f2fs_wait_on_page_writeback(page,
> > +                                     f2fs_wait_on_page_writeback(
> > +                                                     &folio->page,
> >                                                       DATA, true, true);
> >                               else
> >                                       goto continue_unlock;
> >                       }
> >
> > -                     if (!clear_page_dirty_for_io(page))
> > +                     if (!folio_clear_dirty_for_io(folio))
> >                               goto continue_unlock;
> >
> >   #ifdef CONFIG_F2FS_FS_COMPRESSION
> >                       if (f2fs_compressed_file(inode)) {
> > -                             get_page(page);
> > -                             f2fs_compress_ctx_add_page(&cc, page);
> > +                             folio_get(folio);
> > +                             f2fs_compress_ctx_add_page(&cc, &folio->page);
> >                               continue;
> >                       }
> >   #endif
> > -                     ret = f2fs_write_single_data_page(page, &submitted,
> > -                                     &bio, &last_block, wbc, io_type,
> > -                                     0, true);
> > +                     ret = f2fs_write_single_data_page(&folio->page,
> > +                                     &submitted, &bio, &last_block,
> > +                                     wbc, io_type, 0, true);
> >                       if (ret == AOP_WRITEPAGE_ACTIVATE)
> > -                             unlock_page(page);
> > +                             folio_unlock(folio);
> >   #ifdef CONFIG_F2FS_FS_COMPRESSION
> >   result:
> >   #endif
> > @@ -3122,7 +3126,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >                                       }
> >                                       goto next;
> >                               }
> > -                             done_index = page->index + 1;
> > +                             done_index = folio->index +
> > +                                     folio_nr_pages(folio);
> >                               done = 1;
> >                               break;
> >                       }
> > @@ -3136,7 +3141,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >                       if (need_readd)
> >                               goto readd;
> >               }
> > -             release_pages(pages, nr_pages);
> > +             folio_batch_release(&fbatch);
> >               cond_resched();
> >       }
> >   #ifdef CONFIG_F2FS_FS_COMPRESSION
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > index e6355a5683b7..d7bfb88fa341 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -4226,8 +4226,9 @@ void f2fs_end_read_compressed_page(struct page *page, bool failed,
> >                               block_t blkaddr, bool in_task);
> >   bool f2fs_cluster_is_empty(struct compress_ctx *cc);
> >   bool f2fs_cluster_can_merge_page(struct compress_ctx *cc, pgoff_t index);
> > -bool f2fs_all_cluster_page_ready(struct compress_ctx *cc, struct page **pages,
> > -                             int index, int nr_pages, bool uptodate);
> > +bool f2fs_all_cluster_page_ready(struct compress_ctx *cc,
> > +             struct folio_batch *fbatch, int index, int nr_folios,
> > +             bool uptodate);
> >   bool f2fs_sanity_check_cluster(struct dnode_of_data *dn);
> >   void f2fs_compress_ctx_add_page(struct compress_ctx *cc, struct page *page);
> >   int f2fs_write_multi_pages(struct compress_ctx *cc,



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-11-14 21:38     ` Vishal Moola
@ 2022-11-23  2:26       ` Vishal Moola
  2022-11-23  7:51         ` Vishal Moola
  2022-12-05 20:34         ` Vishal Moola
  0 siblings, 2 replies; 60+ messages in thread
From: Vishal Moola @ 2022-11-23  2:26 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-f2fs-devel

On Mon, Nov 14, 2022 at 1:38 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>
> On Sun, Nov 13, 2022 at 11:02 PM Chao Yu <chao@kernel.org> wrote:
> >
> > On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> > > Converted the function to use a folio_batch instead of pagevec. This is in
> > > preparation for the removal of find_get_pages_range_tag().
> > >
> > > Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> > > of pagevec. This does NOT support large folios. The function currently
> >
> > Vishal,
> >
> > It looks like this patch tries to revert Fengnan's change:
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486
> >
> > How about doing some tests to evaluate its performance effect?
>
> Yeah I'll play around with it to see how much of a difference it makes.

I did some testing. Looks like reverting Fengnan's change allows for
occasional, but significant, spikes in write latency. I'll work on a variation
of the patch that maintains the use of F2FS_ONSTACK_PAGES and send
that in the next version of the patch series. Thanks for pointing that out!
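
Roughly what I have in mind, purely as a sketch of the idea rather than the
actual next version: keep the big on-stack page array, but fill it from folio
batches, e.g.:

        struct page *pages[F2FS_ONSTACK_PAGES];
        struct folio_batch fbatch;
        unsigned int nr_pages = 0, nr_folios, i;

        folio_batch_init(&fbatch);
again:
        nr_folios = filemap_get_folios_tag(mapping, &index, end, tag, &fbatch);
        for (i = 0; i < nr_folios; i++) {
                struct folio *folio = fbatch.folios[i];

                folio_get(folio);       /* keep a reference past the batch release */
                pages[nr_pages++] = &folio->page;
                if (nr_pages == F2FS_ONSTACK_PAGES) {
                        /* resume after this folio on the next pass */
                        index = folio->index + folio_nr_pages(folio);
                        break;
                }
        }
        folio_batch_release(&fbatch);
        if (nr_folios && nr_pages < F2FS_ONSTACK_PAGES)
                goto again;
        /* ... then write back pages[0..nr_pages) exactly as before ... */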

How do the remaining f2fs patches in the series look to you?
Patch 16/23 f2fs_sync_meta_pages() in particular seems like it may
be prone to problems. If there are any changes that need to be made to
it I can include those in the next version as well.



^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-11-23  2:26       ` Vishal Moola
@ 2022-11-23  7:51         ` Vishal Moola
  2022-12-05 20:34         ` Vishal Moola
  1 sibling, 0 replies; 60+ messages in thread
From: Vishal Moola @ 2022-11-23  7:51 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-f2fs-devel

On Tue, Nov 22, 2022 at 6:26 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>
> On Mon, Nov 14, 2022 at 1:38 PM Vishal Moola <vishal.moola@gmail.com> wrote:
> >
> > On Sun, Nov 13, 2022 at 11:02 PM Chao Yu <chao@kernel.org> wrote:
> > >
> > > On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> > > > Converted the function to use a folio_batch instead of pagevec. This is in
> > > > preparation for the removal of find_get_pages_range_tag().
> > > >
> > > > Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> > > > of pagevec. This does NOT support large folios. The function currently
> > >
> > > Vishal,
> > >
> > > It looks like this patch tries to revert Fengnan's change:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486
> > >
> > > How about doing some tests to evaluate its performance effect?
> >
> > Yeah I'll play around with it to see how much of a difference it makes.
>
> I did some testing. Looks like reverting Fengnan's change allows for
> occasional, but significant, spikes in write latency. I'll work on a variation
> of the patch that maintains the use of F2FS_ONSTACK_PAGES and send
> that in the next version of the patch series. Thanks for pointing that out!

Here are some performance numbers for reference. I'm thinking we may
want to go with the new version, but I'll let you be the judge of that.
I ran some fio random write tests with a 64k block size on a system with 8 CPUs.

1 job with 1 io-depth:
Baseline:
  slat (usec): min=8, max=849, avg=16.47, stdev=12.33
  clat (nsec): min=253, max=751838, avg=346.51, stdev=2452.10
  lat (usec): min=9, max=854, avg=17.00, stdev=12.74
  lat (nsec)   : 500=97.09%, 750=1.73%, 1000=0.57%
  lat (usec)   : 2=0.41%, 4=0.09%, 10=0.06%, 20=0.04%, 50=0.01%
  lat (usec)   : 100=0.01%, 1000=0.01%

This patch:
  slat (usec): min=9, max=3690, avg=16.61, stdev=17.36
  clat (nsec): min=28, max=380434, avg=336.59, stdev=1571.23
  lat (usec): min=10, max=3699, avg=17.13, stdev=17.51
  lat (nsec)   : 50=0.01%, 500=97.95%, 750=1.42%, 1000=0.33%
  lat (usec)   : 2=0.19%, 4=0.05%, 10=0.03%, 20=0.03%, 50=0.01%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%

Folios w/ F2FS_ONSTACK_PAGES (next version):
  slat (usec): min=12, max=13623, avg=19.48, stdev=48.94
  clat (nsec): min=265, max=386917, avg=380.97, stdev=1679.85
  lat (usec): min=12, max=13635, avg=20.06, stdev=49.27
  lat (nsec)   : 500=93.55%, 750=4.62%, 1000=0.92%
  lat (usec)   : 2=0.65%, 4=0.09%, 10=0.10%, 20=0.06%, 50=0.01%
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%

1 job with 16 io-depth:
Baseline:
  slat (usec): min=8, max=3907, avg=16.89, stdev=23.39
  clat (usec): min=12, max=15160k, avg=11115.61, stdev=265051.86
  lat (usec): min=137, max=15160k, avg=11132.68, stdev=265051.75
  lat (usec)   : 20=0.01%, 250=57.66%, 500=39.56%, 750=1.96%, 1000=0.22%
  lat (msec)   : 2=0.16%, 4=0.06%, 10=0.01%, 2000=0.29%, >=2000=0.08%

This patch:
  slat (usec): min=9, max=1230, avg=17.15, stdev=12.95
  clat (usec): min=4, max=39471k, avg=14825.22, stdev=588237.30
  lat (usec): min=80, max=39471k, avg=14842.55, stdev=588237.27
  lat (usec)   : 10=0.01%, 250=38.78%, 500=59.53%, 750=1.12%, 1000=0.16%
  lat (msec)   : 2=0.04%, 2000=0.34%, >=2000=0.02%

Folios w/ F2FS_ONSTACK_PAGES (next version):
  slat (usec): min=9, max=1188, avg=18.74, stdev=14.12
  clat (usec): min=5, max=15278k, avg=8936.75, stdev=214230.09
  lat (usec): min=90, max=15278k, avg=8955.67, stdev=214230.10
  lat (usec)   : 10=0.01%, 250=9.68%, 500=86.49%, 750=2.74%, 1000=0.54%
  lat (msec)   : 2=0.18%, 2000=0.32%, >=2000=0.04%


> How do the remaining f2fs patches in the series look to you?
> Patch 16/23 f2fs_sync_meta_pages() in particular seems like it may
> be prone to problems. If there are any changes that need to be made to
> it I can include those in the next version as well.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-11-14  7:02   ` Chao Yu
  2022-11-14 21:38     ` Vishal Moola
@ 2022-11-29 19:14     ` Matthew Wilcox
  2022-11-30 12:48       ` [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs Yangtao Li via Linux-f2fs-devel
  2022-11-30 12:51       ` [f2fs-dev] [PATCH]f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag() Yangtao Li via Linux-f2fs-devel
  1 sibling, 2 replies; 60+ messages in thread
From: Matthew Wilcox @ 2022-11-29 19:14 UTC (permalink / raw)
  To: Chao Yu
  Cc: linux-kernel, linux-f2fs-devel, Vishal Moola (Oracle),
	linux-mm, linux-fsdevel

On Mon, Nov 14, 2022 at 03:02:34PM +0800, Chao Yu wrote:
> On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> > Converted the function to use a folio_batch instead of pagevec. This is in
> > preparation for the removal of find_get_pages_range_tag().
> > 
> > Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> > of pagevec. This does NOT support large folios. The function currently
> 
> Vishal,
> 
> It looks this patch tries to revert Fengnan's change:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486
> 
> How about doing some tests to evaluate its performance effect?
> 
> +Cc Fengnan Chang

Thanks for reviewing this.  I think the real solution to this is
that f2fs should be using large folios.  That way, the page cache
will keep track of dirtiness on a per-folio basis, and if your folios
are at least as large as your cluster size, you won't need to do the
f2fs_prepare_compress_overwrite() dance.  And you'll get at least fifteen
dirty folios per call instead of fifteen dirty pages, so your costs will
be much lower.

Is anyone interested in doing the work to convert f2fs to support
large folios?  I can help, or you can look at the work done for XFS,
AFS and a few other filesystems.
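
The opt-in itself is a one-liner at inode set-up time; the real work is
auditing every place that assumes a folio is a single page.  A minimal
sketch (the helper name and call site are hypothetical, not existing f2fs
code):

	static void f2fs_enable_large_folios(struct inode *inode)
	{
		/* let the page cache allocate folios larger than
		 * PAGE_SIZE for this mapping */
		mapping_set_large_folios(inode->i_mapping);
	}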


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2022-11-29 19:14     ` [f2fs-dev] [PATCH v3 14/23] " Matthew Wilcox
@ 2022-11-30 12:48       ` Yangtao Li via Linux-f2fs-devel
  2022-11-30 15:18         ` Matthew Wilcox
  2022-11-30 12:51       ` [f2fs-dev] [PATCH]f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag() Yangtao Li via Linux-f2fs-devel
  1 sibling, 1 reply; 60+ messages in thread
From: Yangtao Li via Linux-f2fs-devel @ 2022-11-30 12:48 UTC (permalink / raw)
  To: jaegeuk, chao, willy
  Cc: linux-kernel, linux-f2fs-devel, vishal.moola, linux-mm, linux-fsdevel

Hi,

> Thanks for reviewing this.  I think the real solution to this is
> that f2fs should be using large folios.  That way, the page cache
> will keep track of dirtiness on a per-folio basis, and if your folios
> are at least as large as your cluster size, you won't need to do the
> f2fs_prepare_compress_overwrite() dance.  And you'll get at least fifteen
> dirty folios per call instead of fifteen dirty pages, so your costs will
> be much lower.
>
> Is anyone interested in doing the work to convert f2fs to support
> large folios?  I can help, or you can look at the work done for XFS,
> AFS and a few other filesystems.

Seems like an interesting job. Not sure if I can be of any help.
What currently needs to be done to support large folios?

Are there any roadmaps or reference documents?

Thx,
Yangtao


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH]f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-11-29 19:14     ` [f2fs-dev] [PATCH v3 14/23] " Matthew Wilcox
  2022-11-30 12:48       ` [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs Yangtao Li via Linux-f2fs-devel
@ 2022-11-30 12:51       ` Yangtao Li via Linux-f2fs-devel
  1 sibling, 0 replies; 60+ messages in thread
From: Yangtao Li via Linux-f2fs-devel @ 2022-11-30 12:51 UTC (permalink / raw)
  To: willy
  Cc: linux-kernel, linux-f2fs-devel, vishal.moola, linux-mm, linux-fsdevel

Hi,

> Thanks for reviewing this.  I think the real solution to this is
> that f2fs should be using large folios.  That way, the page cache
> will keep track of dirtiness on a per-folio basis, and if your folios
> are at least as large as your cluster size, you won't need to do the
> f2fs_prepare_compress_overwrite() dance.  And you'll get at least fifteen
> dirty folios per call instead of fifteen dirty pages, so your costs will
> be much lower.
>
> Is anyone interested in doing the work to convert f2fs to support
> large folios?  I can help, or you can look at the work done for XFS,
> AFS and a few other filesystems.

Seems like an interesting job. Not sure if I can be of any help.
What currently needs to be done to support large folios?

Are there any roadmaps or reference documents?

Thx,
Yangtao


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2022-11-30 12:48       ` [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs Yangtao Li via Linux-f2fs-devel
@ 2022-11-30 15:18         ` Matthew Wilcox
  2022-12-07 20:51           ` Luis Chamberlain
  0 siblings, 1 reply; 60+ messages in thread
From: Matthew Wilcox @ 2022-11-30 15:18 UTC (permalink / raw)
  To: Yangtao Li
  Cc: linux-kernel, linux-f2fs-devel, vishal.moola, linux-mm,
	linux-fsdevel, jaegeuk

On Wed, Nov 30, 2022 at 08:48:04PM +0800, Yangtao Li wrote:
> Hi,
> 
> > Thanks for reviewing this.  I think the real solution to this is
> > that f2fs should be using large folios.  That way, the page cache
> > will keep track of dirtiness on a per-folio basis, and if your folios
> > are at least as large as your cluster size, you won't need to do the
> > f2fs_prepare_compress_overwrite() dance.  And you'll get at least fifteen
> > dirty folios per call instead of fifteen dirty pages, so your costs will
> > be much lower.
> >
> > Is anyone interested in doing the work to convert f2fs to support
> > large folios?  I can help, or you can look at the work done for XFS,
> > AFS and a few other filesystems.
> 
> Seems like an interesting job. Not sure if I can be of any help.
> What needs to be done currently to support large folio?
> 
> Are there any roadmaps and reference documents.

From a filesystem point of view, you need to ensure that you handle folios
larger than PAGE_SIZE correctly.  The easiest way is to spread the use
of folios throughout the filesystem.  For example, today the first thing
we do in f2fs_read_data_folio() is convert the folio back into a page.
That works because f2fs hasn't told the kernel that it supports large
folios, so the VFS won't create large folios for it.

It's a lot of subtle things.  Here's an obvious one:
                        zero_user_segment(page, 0, PAGE_SIZE);
There's a folio equivalent that will zero an entire folio.

But then there is code which assumes the number of blocks per page (maybe
not in f2fs?) and so on.  Every filesystem will have its own challenges.

One way to approach this is to just enable large folios (see commit
6795801366da or 8549a26308f9) and see what breaks when you run xfstests
over it.  Probably quite a lot!
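
To make the zeroing example concrete, the folio-aware form would be roughly
the following (a sketch; the surrounding call site is assumed, not quoted
from f2fs):

	/* page-based: only correct while folios are a single page */
	zero_user_segment(page, 0, PAGE_SIZE);

	/* folio-based: stays correct once large folios are enabled */
	folio_zero_segment(folio, 0, folio_size(folio));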



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-11-23  2:26       ` Vishal Moola
  2022-11-23  7:51         ` Vishal Moola
@ 2022-12-05 20:34         ` Vishal Moola
  2022-12-12 14:41           ` Chao Yu
  1 sibling, 1 reply; 60+ messages in thread
From: Vishal Moola @ 2022-12-05 20:34 UTC (permalink / raw)
  To: Chao Yu; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-f2fs-devel

On Tue, Nov 22, 2022 at 6:26 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>
> On Mon, Nov 14, 2022 at 1:38 PM Vishal Moola <vishal.moola@gmail.com> wrote:
> >
> > On Sun, Nov 13, 2022 at 11:02 PM Chao Yu <chao@kernel.org> wrote:
> > >
> > > On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
> > > > Converted the function to use a folio_batch instead of pagevec. This is in
> > > > preparation for the removal of find_get_pages_range_tag().
> > > >
> > > > Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> > > > of pagevec. This does NOT support large folios. The function currently
> > >
> > > Vishal,
> > >
> > > It looks this patch tries to revert Fengnan's change:
> > >
> > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486
> > >
> > > How about doing some tests to evaluate its performance effect?
> >
> > Yeah I'll play around with it to see how much of a difference it makes.
>
> I did some testing. Looks like reverting Fengnan's change allows for
> occasional, but significant, spikes in write latency. I'll work on a variation
> of the patch that maintains the use of F2FS_ONSTACK_PAGES and send
> that in the next version of the patch series. Thanks for pointing that out!

Following Matthew's comment, I'm thinking we should go with this patch
as is. The numbers between both variations did not have substantial
differences with regard to latency.

While the new variant would maintain the use of F2FS_ONSTACK_PAGES,
the code becomes messier and would end up limiting the number of
folios written back once large folio support is added. This means it would
have to be converted down to this version later anyways.

Does leaving this patch as is sound good to you?

For reference, here's what the version continuing to use a page
array of size F2FS_ONSTACK_PAGES would change:

+               nr_pages = 0;
+again:
+               nr_folios = filemap_get_folios_tag(mapping, &index, end,
+                               tag, &fbatch);
+               if (nr_folios == 0) {
+                       if (nr_pages)
+                               goto write;
+                               goto write;
                        break;
+               }

+               for (i = 0; i < nr_folios; i++) {
+                       struct folio* folio = fbatch.folios[i];
+
+                       idx = 0;
+                       p = folio_nr_pages(folio);
+add_more:
+                       pages[nr_pages] = folio_page(folio,idx);
+                       folio_ref_inc(folio);
+                       if (++nr_pages == F2FS_ONSTACK_PAGES) {
+                               index = folio->index + idx + 1;
+                               folio_batch_release(&fbatch);
+                               goto write;
+                       }
+                       if (++idx < p)
+                               goto add_more;
+               }
+               folio_batch_release(&fbatch);
+               goto again;
+write:

> How do the remaining f2fs patches in the series look to you?
> Patch 16/23 f2fs_sync_meta_pages() in particular seems like it may
> be prone to problems. If there are any changes that need to be made to
> it I can include those in the next version as well.

Thanks for reviewing the patches so far. I wanted to follow up and ask
for review of the last couple of patches.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2022-11-30 15:18         ` Matthew Wilcox
@ 2022-12-07 20:51           ` Luis Chamberlain
  2024-01-25 20:47             ` Matthew Wilcox
  0 siblings, 1 reply; 60+ messages in thread
From: Luis Chamberlain @ 2022-12-07 20:51 UTC (permalink / raw)
  To: Matthew Wilcox, Pankaj Raghav, Jaegeuk Kim
  Cc: Yangtao Li, linux-kernel, linux-f2fs-devel, vishal.moola,
	linux-mm, Adam Manzanares, linux-fsdevel, Javier González

On Wed, Nov 30, 2022 at 03:18:41PM +0000, Matthew Wilcox wrote:
> On Wed, Nov 30, 2022 at 08:48:04PM +0800, Yangtao Li wrote:
> > Hi,
> > 
> > > Thanks for reviewing this.  I think the real solution to this is
> > > that f2fs should be using large folios.  That way, the page cache
> > > will keep track of dirtiness on a per-folio basis, and if your folios
> > > are at least as large as your cluster size, you won't need to do the
> > > f2fs_prepare_compress_overwrite() dance.  And you'll get at least fifteen
> > > dirty folios per call instead of fifteen dirty pages, so your costs will
> > > be much lower.
> > >
> > > Is anyone interested in doing the work to convert f2fs to support
> > > large folios?  I can help, or you can look at the work done for XFS,
> > > AFS and a few other filesystems.
> > 
> > Seems like an interesting job. Not sure if I can be of any help.
> > What needs to be done currently to support large folio?
> > 
> > Are there any roadmaps and reference documents.
> 
> From a filesystem point of view, you need to ensure that you handle folios
> larger than PAGE_SIZE correctly.  The easiest way is to spread the use
> of folios throughout the filesystem.  For example, today the first thing
> we do in f2fs_read_data_folio() is convert the folio back into a page.
> That works because f2fs hasn't told the kernel that it supports large
> folios, so the VFS won't create large folios for it.
> 
> It's a lot of subtle things.  Here's an obvious one:
>                         zero_user_segment(page, 0, PAGE_SIZE);
> There's a folio equivalent that will zero an entire folio.
> 
> But then there is code which assumes the number of blocks per page (maybe
> not in f2fs?) and so on.  Every filesystem will have its own challenges.
> 
> One way to approach this is to just enable large folios (see commit
> 6795801366da or 8549a26308f9) and see what breaks when you run xfstests
> over it.  Probably quite a lot!

Pankaj and I are very interested in helping on this front, so we'll
start to organize and talk every week about this to see what is missing.
The first order of business, however, will be testing, so we'll have to
establish a public baseline to ensure we don't regress. For this we intend
to use kdevops [0], so that'll be done first.

If folks have patches they want to test in consideration for folio /
iomap enhancements feel free to Cc us :)

After we establish a baseline we can move forward with taking on tasks
which will help with this conversion.

[0] https://github.com/linux-kdevops/kdevops

  Luis


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-05 20:34         ` Vishal Moola
@ 2022-12-12 14:41           ` Chao Yu
  2022-12-12 19:13             ` [f2fs-dev] [RFC PATCH] " Vishal Moola (Oracle)
  0 siblings, 1 reply; 60+ messages in thread
From: Chao Yu @ 2022-12-12 14:41 UTC (permalink / raw)
  To: Vishal Moola; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-f2fs-devel

Hi Vishal,

Sorry for the delayed reply.

On 2022/12/6 4:34, Vishal Moola wrote:
> On Tue, Nov 22, 2022 at 6:26 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>>
>> On Mon, Nov 14, 2022 at 1:38 PM Vishal Moola <vishal.moola@gmail.com> wrote:
>>>
>>> On Sun, Nov 13, 2022 at 11:02 PM Chao Yu <chao@kernel.org> wrote:
>>>>
>>>> On 2022/10/18 4:24, Vishal Moola (Oracle) wrote:
>>>>> Converted the function to use a folio_batch instead of pagevec. This is in
>>>>> preparation for the removal of find_get_pages_range_tag().
>>>>>
>>>>> Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
>>>>> of pagevec. This does NOT support large folios. The function currently
>>>>
>>>> Vishal,
>>>>
>>>> It looks this patch tries to revert Fengnan's change:
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=01fc4b9a6ed8eacb64e5609bab7ac963e1c7e486
>>>>
>>>> How about doing some tests to evaluate its performance effect?
>>>
>>> Yeah I'll play around with it to see how much of a difference it makes.
>>
>> I did some testing. Looks like reverting Fengnan's change allows for
>> occasional, but significant, spikes in write latency. I'll work on a variation
>> of the patch that maintains the use of F2FS_ONSTACK_PAGES and send
>> that in the next version of the patch series. Thanks for pointing that out!
> 
> Following Matthew's comment, I'm thinking we should go with this patch
> as is. The numbers between both variations did not have substantial
> differences with regard to latency.
> 
> While the new variant would maintain the use of F2FS_ONSTACK_PAGES,
> the code becomes messier and would end up limiting the number of
> folios written back once large folio support is added. This means it would
> have to be converted down to this version later anyways.
> 
> Does leaving this patch as is sound good to you?
> 
> For reference, here's what the version continuing to use a page
> array of size F2FS_ONSTACK_PAGES would change:
> 
> +               nr_pages = 0;
> +again:
> +               nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +                               tag, &fbatch);
> +               if (nr_folios == 0) {
> +                       if (nr_pages)
> +                               goto write;
> +                               goto write;

Duplicated code.

>                          break;
> +               }
> 
> +               for (i = 0; i < nr_folios; i++) {
> +                       struct folio* folio = fbatch.folios[i];
> +
> +                       idx = 0;
> +                       p = folio_nr_pages(folio);
> +add_more:
> +                       pages[nr_pages] = folio_page(folio,idx);
> +                       folio_ref_inc(folio);
> +                       if (++nr_pages == F2FS_ONSTACK_PAGES) {
> +                               index = folio->index + idx + 1;
> +                               folio_batch_release(&fbatch);
> +                               goto write;
> +                       }
> +                       if (++idx < p)
> +                               goto add_more;
> +               }
> +               folio_batch_release(&fbatch);
> +               goto again;
> +write:

Looks fine to me, can you please send a formal patch?

Thanks,

> 
>> How do the remaining f2fs patches in the series look to you?
>> Patch 16/23 f2fs_sync_meta_pages() in particular seems like it may
>> be prone to problems. If there are any changes that need to be made to
>> it I can include those in the next version as well.
> 
> Thanks for reviewing the patches so far. I wanted to follow up on asking
> for review of the last couple of patches.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* [f2fs-dev] [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-12 14:41           ` Chao Yu
@ 2022-12-12 19:13             ` Vishal Moola (Oracle)
  2022-12-15  1:48               ` Chao Yu
  2022-12-15 19:02               ` Jaegeuk Kim
  0 siblings, 2 replies; 60+ messages in thread
From: Vishal Moola (Oracle) @ 2022-12-12 19:13 UTC (permalink / raw)
  To: chao
  Cc: linux-kernel, linux-f2fs-devel, Vishal Moola (Oracle),
	linux-mm, linux-fsdevel

Converted the function to use a folio_batch instead of pagevec. This is in
preparation for the removal of find_get_pages_range_tag().

Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
of pagevec. This does NOT support large folios. The function currently
only utilizes folios of size 1 so this shouldn't cause any issues right
now.

This version of the patch limits the number of pages fetched to
F2FS_ONSTACK_PAGES. If that limit is reached, the start index is updated
manually, since filemap_get_folios_tag() advances the index past the last
folio it found, which is not necessarily the last page that was used.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---

Let me know if you prefer this version and I'll include it in v5
of the patch series when I rebase it after the merge window.

---
 fs/f2fs/data.c | 86 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 59 insertions(+), 27 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index a71e818cd67b..1703e353f0e0 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2939,6 +2939,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	int ret = 0;
 	int done = 0, retry = 0;
 	struct page *pages[F2FS_ONSTACK_PAGES];
+	struct folio_batch fbatch;
 	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
 	struct bio *bio = NULL;
 	sector_t last_block;
@@ -2959,6 +2960,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 		.private = NULL,
 	};
 #endif
+	int nr_folios, p, idx;
 	int nr_pages;
 	pgoff_t index;
 	pgoff_t end;		/* Inclusive */
@@ -2969,6 +2971,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 	int submitted = 0;
 	int i;
 
+	folio_batch_init(&fbatch);
+
 	if (get_dirty_pages(mapping->host) <=
 				SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
 		set_inode_flag(mapping->host, FI_HOT_DATA);
@@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 		tag_pages_for_writeback(mapping, index, end);
 	done_index = index;
 	while (!done && !retry && (index <= end)) {
-		nr_pages = find_get_pages_range_tag(mapping, &index, end,
-				tag, F2FS_ONSTACK_PAGES, pages);
-		if (nr_pages == 0)
+		nr_pages = 0;
+again:
+		nr_folios = filemap_get_folios_tag(mapping, &index, end,
+				tag, &fbatch);
+		if (nr_folios == 0) {
+			if (nr_pages)
+				goto write;
 			break;
+		}
 
+		for (i = 0; i < nr_folios; i++) {
+			struct folio* folio = fbatch.folios[i];
+
+			idx = 0;
+			p = folio_nr_pages(folio);
+add_more:
+			pages[nr_pages] = folio_page(folio,idx);
+			folio_ref_inc(folio);
+			if (++nr_pages == F2FS_ONSTACK_PAGES) {
+				index = folio->index + idx + 1;
+				folio_batch_release(&fbatch);
+				goto write;
+			}
+			if (++idx < p)
+				goto add_more;
+		}
+		folio_batch_release(&fbatch);
+		goto again;
+write:
 		for (i = 0; i < nr_pages; i++) {
 			struct page *page = pages[i];
+			struct folio *folio = page_folio(page);
 			bool need_readd;
 readd:
 			need_readd = false;
@@ -3017,7 +3046,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 				}
 
 				if (!f2fs_cluster_can_merge_page(&cc,
-								page->index)) {
+								folio->index)) {
 					ret = f2fs_write_multi_pages(&cc,
 						&submitted, wbc, io_type);
 					if (!ret)
@@ -3026,27 +3055,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 				}
 
 				if (unlikely(f2fs_cp_error(sbi)))
-					goto lock_page;
+					goto lock_folio;
 
 				if (!f2fs_cluster_is_empty(&cc))
-					goto lock_page;
+					goto lock_folio;
 
 				if (f2fs_all_cluster_page_ready(&cc,
 					pages, i, nr_pages, true))
-					goto lock_page;
+					goto lock_folio;
 
 				ret2 = f2fs_prepare_compress_overwrite(
 							inode, &pagep,
-							page->index, &fsdata);
+							folio->index, &fsdata);
 				if (ret2 < 0) {
 					ret = ret2;
 					done = 1;
 					break;
 				} else if (ret2 &&
 					(!f2fs_compress_write_end(inode,
-						fsdata, page->index, 1) ||
+						fsdata, folio->index, 1) ||
 					 !f2fs_all_cluster_page_ready(&cc,
-						pages, i, nr_pages, false))) {
+						pages, i, nr_pages,
+						false))) {
 					retry = 1;
 					break;
 				}
@@ -3059,46 +3089,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 				break;
 			}
 #ifdef CONFIG_F2FS_FS_COMPRESSION
-lock_page:
+lock_folio:
 #endif
-			done_index = page->index;
+			done_index = folio->index;
 retry_write:
-			lock_page(page);
+			folio_lock(folio);
 
-			if (unlikely(page->mapping != mapping)) {
+			if (unlikely(folio->mapping != mapping)) {
 continue_unlock:
-				unlock_page(page);
+				folio_unlock(folio);
 				continue;
 			}
 
-			if (!PageDirty(page)) {
+			if (!folio_test_dirty(folio)) {
 				/* someone wrote it for us */
 				goto continue_unlock;
 			}
 
-			if (PageWriteback(page)) {
+			if (folio_test_writeback(folio)) {
 				if (wbc->sync_mode != WB_SYNC_NONE)
-					f2fs_wait_on_page_writeback(page,
+					f2fs_wait_on_page_writeback(
+							&folio->page,
 							DATA, true, true);
 				else
 					goto continue_unlock;
 			}
 
-			if (!clear_page_dirty_for_io(page))
+			if (!folio_clear_dirty_for_io(folio))
 				goto continue_unlock;
 
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 			if (f2fs_compressed_file(inode)) {
-				get_page(page);
-				f2fs_compress_ctx_add_page(&cc, page);
+				folio_get(folio);
+				f2fs_compress_ctx_add_page(&cc, &folio->page);
 				continue;
 			}
 #endif
-			ret = f2fs_write_single_data_page(page, &submitted,
-					&bio, &last_block, wbc, io_type,
-					0, true);
+			ret = f2fs_write_single_data_page(&folio->page,
+					&submitted, &bio, &last_block,
+					wbc, io_type, 0, true);
 			if (ret == AOP_WRITEPAGE_ACTIVATE)
-				unlock_page(page);
+				folio_unlock(folio);
 #ifdef CONFIG_F2FS_FS_COMPRESSION
 result:
 #endif
@@ -3122,7 +3153,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 					}
 					goto next;
 				}
-				done_index = page->index + 1;
+				done_index = folio->index +
+					folio_nr_pages(folio);
 				done = 1;
 				break;
 			}
@@ -3136,7 +3168,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
 			if (need_readd)
 				goto readd;
 		}
-		release_pages(pages, nr_pages);
+		release_pages(pages,nr_pages);
 		cond_resched();
 	}
 #ifdef CONFIG_F2FS_FS_COMPRESSION
-- 
2.38.1



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-12 19:13             ` [f2fs-dev] [RFC PATCH] " Vishal Moola (Oracle)
@ 2022-12-15  1:48               ` Chao Yu
  2022-12-15 18:45                 ` Matthew Wilcox
  2022-12-15 19:02               ` Jaegeuk Kim
  1 sibling, 1 reply; 60+ messages in thread
From: Chao Yu @ 2022-12-15  1:48 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-fsdevel, linux-mm, linux-kernel, linux-f2fs-devel

On 2022/12/13 3:13, Vishal Moola (Oracle) wrote:
> Converted the function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
> 
> Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> of pagevec. This does NOT support large folios. The function currently
> only utilizes folios of size 1 so this shouldn't cause any issues right
> now.
> 
> This version of the patch limits the number of pages fetched to
> F2FS_ONSTACK_PAGES. If that ever happens, update the start index here
> since filemap_get_folios_tag() updates the index to be after the last
> found folio, not necessarily the last used page.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
> 
> Let me know if you prefer this version and I'll include it in v5
> of the patch series when I rebase it after the merge window.
> 
> ---
>   fs/f2fs/data.c | 86 ++++++++++++++++++++++++++++++++++----------------
>   1 file changed, 59 insertions(+), 27 deletions(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index a71e818cd67b..1703e353f0e0 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2939,6 +2939,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   	int ret = 0;
>   	int done = 0, retry = 0;
>   	struct page *pages[F2FS_ONSTACK_PAGES];
> +	struct folio_batch fbatch;
>   	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
>   	struct bio *bio = NULL;
>   	sector_t last_block;
> @@ -2959,6 +2960,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   		.private = NULL,
>   	};
>   #endif
> +	int nr_folios, p, idx;
>   	int nr_pages;
>   	pgoff_t index;
>   	pgoff_t end;		/* Inclusive */
> @@ -2969,6 +2971,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   	int submitted = 0;
>   	int i;
>   
> +	folio_batch_init(&fbatch);
> +
>   	if (get_dirty_pages(mapping->host) <=
>   				SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
>   		set_inode_flag(mapping->host, FI_HOT_DATA);
> @@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   		tag_pages_for_writeback(mapping, index, end);
>   	done_index = index;
>   	while (!done && !retry && (index <= end)) {
> -		nr_pages = find_get_pages_range_tag(mapping, &index, end,
> -				tag, F2FS_ONSTACK_PAGES, pages);
> -		if (nr_pages == 0)
> +		nr_pages = 0;
> +again:
> +		nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +				tag, &fbatch);
> +		if (nr_folios == 0) {
> +			if (nr_pages)
> +				goto write;
>   			break;
> +		}
>   
> +		for (i = 0; i < nr_folios; i++) {
> +			struct folio* folio = fbatch.folios[i];
> +
> +			idx = 0;
> +			p = folio_nr_pages(folio);
> +add_more:
> +			pages[nr_pages] = folio_page(folio,idx);
> +			folio_ref_inc(folio);

It looks like if CONFIG_LRU_GEN is not set, folio_ref_inc() does nothing. For those
folios recorded in the pages array, we need to call folio_get() here to add one more
reference on each of them?

> +			if (++nr_pages == F2FS_ONSTACK_PAGES) {
> +				index = folio->index + idx + 1;
> +				folio_batch_release(&fbatch);

Otherwise, after folio_batch_release(), accessing the pages array may cause
a use-after-free issue? Or am I missing something?

> +				goto write;
> +			}
> +			if (++idx < p)
> +				goto add_more;
> +		}
> +		folio_batch_release(&fbatch);
> +		goto again;
> +write:
>   		for (i = 0; i < nr_pages; i++) {
>   			struct page *page = pages[i];
> +			struct folio *folio = page_folio(page);
>   			bool need_readd;
>   readd:
>   			need_readd = false;
> @@ -3017,7 +3046,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   				}
>   
>   				if (!f2fs_cluster_can_merge_page(&cc,
> -								page->index)) {
> +								folio->index)) {
>   					ret = f2fs_write_multi_pages(&cc,
>   						&submitted, wbc, io_type);
>   					if (!ret)
> @@ -3026,27 +3055,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   				}
>   
>   				if (unlikely(f2fs_cp_error(sbi)))
> -					goto lock_page;
> +					goto lock_folio;
>   
>   				if (!f2fs_cluster_is_empty(&cc))
> -					goto lock_page;
> +					goto lock_folio;
>   
>   				if (f2fs_all_cluster_page_ready(&cc,
>   					pages, i, nr_pages, true))
> -					goto lock_page;
> +					goto lock_folio;
>   
>   				ret2 = f2fs_prepare_compress_overwrite(
>   							inode, &pagep,
> -							page->index, &fsdata);
> +							folio->index, &fsdata);
>   				if (ret2 < 0) {
>   					ret = ret2;
>   					done = 1;
>   					break;
>   				} else if (ret2 &&
>   					(!f2fs_compress_write_end(inode,
> -						fsdata, page->index, 1) ||
> +						fsdata, folio->index, 1) ||
>   					 !f2fs_all_cluster_page_ready(&cc,
> -						pages, i, nr_pages, false))) {
> +						pages, i, nr_pages,
> +						false))) {
>   					retry = 1;
>   					break;
>   				}
> @@ -3059,46 +3089,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   				break;
>   			}
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
> -lock_page:
> +lock_folio:
>   #endif
> -			done_index = page->index;
> +			done_index = folio->index;
>   retry_write:
> -			lock_page(page);
> +			folio_lock(folio);
>   
> -			if (unlikely(page->mapping != mapping)) {
> +			if (unlikely(folio->mapping != mapping)) {
>   continue_unlock:
> -				unlock_page(page);
> +				folio_unlock(folio);
>   				continue;
>   			}
>   
> -			if (!PageDirty(page)) {
> +			if (!folio_test_dirty(folio)) {
>   				/* someone wrote it for us */
>   				goto continue_unlock;
>   			}
>   
> -			if (PageWriteback(page)) {
> +			if (folio_test_writeback(folio)) {
>   				if (wbc->sync_mode != WB_SYNC_NONE)
> -					f2fs_wait_on_page_writeback(page,
> +					f2fs_wait_on_page_writeback(
> +							&folio->page,
>   							DATA, true, true);
>   				else
>   					goto continue_unlock;
>   			}
>   
> -			if (!clear_page_dirty_for_io(page))
> +			if (!folio_clear_dirty_for_io(folio))
>   				goto continue_unlock;
>   
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>   			if (f2fs_compressed_file(inode)) {
> -				get_page(page);
> -				f2fs_compress_ctx_add_page(&cc, page);
> +				folio_get(folio);
> +				f2fs_compress_ctx_add_page(&cc, &folio->page);
>   				continue;
>   			}
>   #endif
> -			ret = f2fs_write_single_data_page(page, &submitted,
> -					&bio, &last_block, wbc, io_type,
> -					0, true);
> +			ret = f2fs_write_single_data_page(&folio->page,
> +					&submitted, &bio, &last_block,
> +					wbc, io_type, 0, true);
>   			if (ret == AOP_WRITEPAGE_ACTIVATE)
> -				unlock_page(page);
> +				folio_unlock(folio);
>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>   result:
>   #endif
> @@ -3122,7 +3153,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   					}
>   					goto next;
>   				}
> -				done_index = page->index + 1;
> +				done_index = folio->index +
> +					folio_nr_pages(folio);
>   				done = 1;
>   				break;
>   			}
> @@ -3136,7 +3168,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>   			if (need_readd)
>   				goto readd;
>   		}
> -		release_pages(pages, nr_pages);
> +		release_pages(pages,nr_pages);

No need to change?

Thanks,

>   		cond_resched();
>   	}
>   #ifdef CONFIG_F2FS_FS_COMPRESSION


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-15  1:48               ` Chao Yu
@ 2022-12-15 18:45                 ` Matthew Wilcox
  2022-12-21 17:17                   ` Vishal Moola
  0 siblings, 1 reply; 60+ messages in thread
From: Matthew Wilcox @ 2022-12-15 18:45 UTC (permalink / raw)
  To: Chao Yu
  Cc: linux-kernel, linux-f2fs-devel, Vishal Moola (Oracle),
	linux-mm, linux-fsdevel

On Thu, Dec 15, 2022 at 09:48:41AM +0800, Chao Yu wrote:
> On 2022/12/13 3:13, Vishal Moola (Oracle) wrote:
> > +add_more:
> > +			pages[nr_pages] = folio_page(folio,idx);
> > +			folio_ref_inc(folio);
> 
> It looks if CONFIG_LRU_GEN is not set, folio_ref_inc() does nothing. For those
> folios recorded in pages array, we need to call folio_get() here to add one more
> reference on each of them?

static inline void folio_get(struct folio *folio)
{
        VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
        folio_ref_inc(folio);
}

That said, folio_ref_inc() is very much MM-internal and filesystems
should be using folio_get(), so please make that modification in the
next revision, Vishal.
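
In other words, the hunk above would become something like (sketch only):

	pages[nr_pages] = folio_page(folio, idx);
	folio_get(folio);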



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-12 19:13             ` [f2fs-dev] [RFC PATCH] " Vishal Moola (Oracle)
  2022-12-15  1:48               ` Chao Yu
@ 2022-12-15 19:02               ` Jaegeuk Kim
  2023-01-03 20:53                 ` Matthew Wilcox
  1 sibling, 1 reply; 60+ messages in thread
From: Jaegeuk Kim @ 2022-12-15 19:02 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-kernel, linux-f2fs-devel, linux-mm, linux-fsdevel

On 12/12, Vishal Moola (Oracle) wrote:
> Converted the function to use a folio_batch instead of pagevec. This is in
> preparation for the removal of find_get_pages_range_tag().
> 
> Also modified f2fs_all_cluster_page_ready to take in a folio_batch instead
> of pagevec. This does NOT support large folios. The function currently
> only utilizes folios of size 1 so this shouldn't cause any issues right
> now.
> 
> This version of the patch limits the number of pages fetched to
> F2FS_ONSTACK_PAGES. If that ever happens, update the start index here
> since filemap_get_folios_tag() updates the index to be after the last
> found folio, not necessarily the last used page.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
> 
> Let me know if you prefer this version and I'll include it in v5
> of the patch series when I rebase it after the merge window.
> 
> ---
>  fs/f2fs/data.c | 86 ++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 59 insertions(+), 27 deletions(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index a71e818cd67b..1703e353f0e0 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2939,6 +2939,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  	int ret = 0;
>  	int done = 0, retry = 0;
>  	struct page *pages[F2FS_ONSTACK_PAGES];
> +	struct folio_batch fbatch;
>  	struct f2fs_sb_info *sbi = F2FS_M_SB(mapping);
>  	struct bio *bio = NULL;
>  	sector_t last_block;
> @@ -2959,6 +2960,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  		.private = NULL,
>  	};
>  #endif
> +	int nr_folios, p, idx;
>  	int nr_pages;
>  	pgoff_t index;
>  	pgoff_t end;		/* Inclusive */
> @@ -2969,6 +2971,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  	int submitted = 0;
>  	int i;
>  
> +	folio_batch_init(&fbatch);
> +
>  	if (get_dirty_pages(mapping->host) <=
>  				SM_I(F2FS_M_SB(mapping))->min_hot_blocks)
>  		set_inode_flag(mapping->host, FI_HOT_DATA);
> @@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  		tag_pages_for_writeback(mapping, index, end);
>  	done_index = index;
>  	while (!done && !retry && (index <= end)) {
> -		nr_pages = find_get_pages_range_tag(mapping, &index, end,
> -				tag, F2FS_ONSTACK_PAGES, pages);
> -		if (nr_pages == 0)
> +		nr_pages = 0;
> +again:
> +		nr_folios = filemap_get_folios_tag(mapping, &index, end,
> +				tag, &fbatch);

Can't folio handle this internally with F2FS_ONSTACK_PAGES and pages?

> +		if (nr_folios == 0) {
> +			if (nr_pages)
> +				goto write;
>  			break;
> +		}
>  
> +		for (i = 0; i < nr_folios; i++) {
> +			struct folio* folio = fbatch.folios[i];
> +
> +			idx = 0;
> +			p = folio_nr_pages(folio);
> +add_more:
> +			pages[nr_pages] = folio_page(folio,idx);
> +			folio_ref_inc(folio);
> +			if (++nr_pages == F2FS_ONSTACK_PAGES) {
> +				index = folio->index + idx + 1;
> +				folio_batch_release(&fbatch);
> +				goto write;
> +			}
> +			if (++idx < p)
> +				goto add_more;
> +		}
> +		folio_batch_release(&fbatch);
> +		goto again;
> +write:
>  		for (i = 0; i < nr_pages; i++) {
>  			struct page *page = pages[i];
> +			struct folio *folio = page_folio(page);
>  			bool need_readd;
>  readd:
>  			need_readd = false;
> @@ -3017,7 +3046,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  				}
>  
>  				if (!f2fs_cluster_can_merge_page(&cc,
> -								page->index)) {
> +								folio->index)) {
>  					ret = f2fs_write_multi_pages(&cc,
>  						&submitted, wbc, io_type);
>  					if (!ret)
> @@ -3026,27 +3055,28 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  				}
>  
>  				if (unlikely(f2fs_cp_error(sbi)))
> -					goto lock_page;
> +					goto lock_folio;
>  
>  				if (!f2fs_cluster_is_empty(&cc))
> -					goto lock_page;
> +					goto lock_folio;
>  
>  				if (f2fs_all_cluster_page_ready(&cc,
>  					pages, i, nr_pages, true))
> -					goto lock_page;
> +					goto lock_folio;
>  
>  				ret2 = f2fs_prepare_compress_overwrite(
>  							inode, &pagep,
> -							page->index, &fsdata);
> +							folio->index, &fsdata);
>  				if (ret2 < 0) {
>  					ret = ret2;
>  					done = 1;
>  					break;
>  				} else if (ret2 &&
>  					(!f2fs_compress_write_end(inode,
> -						fsdata, page->index, 1) ||
> +						fsdata, folio->index, 1) ||
>  					 !f2fs_all_cluster_page_ready(&cc,
> -						pages, i, nr_pages, false))) {
> +						pages, i, nr_pages,
> +						false))) {
>  					retry = 1;
>  					break;
>  				}
> @@ -3059,46 +3089,47 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  				break;
>  			}
>  #ifdef CONFIG_F2FS_FS_COMPRESSION
> -lock_page:
> +lock_folio:
>  #endif
> -			done_index = page->index;
> +			done_index = folio->index;
>  retry_write:
> -			lock_page(page);
> +			folio_lock(folio);
>  
> -			if (unlikely(page->mapping != mapping)) {
> +			if (unlikely(folio->mapping != mapping)) {
>  continue_unlock:
> -				unlock_page(page);
> +				folio_unlock(folio);
>  				continue;
>  			}
>  
> -			if (!PageDirty(page)) {
> +			if (!folio_test_dirty(folio)) {
>  				/* someone wrote it for us */
>  				goto continue_unlock;
>  			}
>  
> -			if (PageWriteback(page)) {
> +			if (folio_test_writeback(folio)) {
>  				if (wbc->sync_mode != WB_SYNC_NONE)
> -					f2fs_wait_on_page_writeback(page,
> +					f2fs_wait_on_page_writeback(
> +							&folio->page,
>  							DATA, true, true);
>  				else
>  					goto continue_unlock;
>  			}
>  
> -			if (!clear_page_dirty_for_io(page))
> +			if (!folio_clear_dirty_for_io(folio))
>  				goto continue_unlock;
>  
>  #ifdef CONFIG_F2FS_FS_COMPRESSION
>  			if (f2fs_compressed_file(inode)) {
> -				get_page(page);
> -				f2fs_compress_ctx_add_page(&cc, page);
> +				folio_get(folio);
> +				f2fs_compress_ctx_add_page(&cc, &folio->page);
>  				continue;
>  			}
>  #endif
> -			ret = f2fs_write_single_data_page(page, &submitted,
> -					&bio, &last_block, wbc, io_type,
> -					0, true);
> +			ret = f2fs_write_single_data_page(&folio->page,
> +					&submitted, &bio, &last_block,
> +					wbc, io_type, 0, true);
>  			if (ret == AOP_WRITEPAGE_ACTIVATE)
> -				unlock_page(page);
> +				folio_unlock(folio);
>  #ifdef CONFIG_F2FS_FS_COMPRESSION
>  result:
>  #endif
> @@ -3122,7 +3153,8 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  					}
>  					goto next;
>  				}
> -				done_index = page->index + 1;
> +				done_index = folio->index +
> +					folio_nr_pages(folio);
>  				done = 1;
>  				break;
>  			}
> @@ -3136,7 +3168,7 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
>  			if (need_readd)
>  				goto readd;
>  		}
> -		release_pages(pages, nr_pages);
> +		release_pages(pages,nr_pages);
>  		cond_resched();
>  	}
>  #ifdef CONFIG_F2FS_FS_COMPRESSION
> -- 
> 2.38.1


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-15 18:45                 ` Matthew Wilcox
@ 2022-12-21 17:17                   ` Vishal Moola
  2022-12-23  8:07                     ` Christoph Hellwig
  0 siblings, 1 reply; 60+ messages in thread
From: Vishal Moola @ 2022-12-21 17:17 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-kernel, linux-f2fs-devel, linux-mm, linux-fsdevel

On Thu, Dec 15, 2022 at 10:45 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Dec 15, 2022 at 09:48:41AM +0800, Chao Yu wrote:
> > On 2022/12/13 3:13, Vishal Moola (Oracle) wrote:
> > > +add_more:
> > > +                   pages[nr_pages] = folio_page(folio,idx);
> > > +                   folio_ref_inc(folio);
> >
> > It looks if CONFIG_LRU_GEN is not set, folio_ref_inc() does nothing. For those
> > folios recorded in pages array, we need to call folio_get() here to add one more
> > reference on each of them?
>
> static inline void folio_get(struct folio *folio)
> {
>         VM_BUG_ON_FOLIO(folio_ref_zero_or_close_to_overflow(folio), folio);
>         folio_ref_inc(folio);
> }
>
> That said, folio_ref_inc() is very much MM-internal and filesystems
> should be using folio_get(), so please make that modification in the
> next revision, Vishal.

Ok, I'll go through and fix all of those in the next version.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-21 17:17                   ` Vishal Moola
@ 2022-12-23  8:07                     ` Christoph Hellwig
  0 siblings, 0 replies; 60+ messages in thread
From: Christoph Hellwig @ 2022-12-23  8:07 UTC (permalink / raw)
  To: Vishal Moola
  Cc: linux-kernel, Matthew Wilcox, linux-f2fs-devel, linux-mm, linux-fsdevel

On Wed, Dec 21, 2022 at 09:17:30AM -0800, Vishal Moola wrote:
> > That said, folio_ref_inc() is very much MM-internal and filesystems
> > should be using folio_get(), so please make that modification in the
> > next revision, Vishal.
> 
> Ok, I'll go through and fix all of those in the next version.

Btw, something a lot more productive in this area would be to figure out
how we could convert all these copy and paste versions of
write_cache_pages to use common code.  This might need changes to the
common code, but the amount of duplicate and poorly maintained versions
of this loop is a bit alarming:

 - btree_write_cache_pages
 - extent_write_cache_pages
 - f2fs_write_cache_pages
 - gfs2_write_cache_jdata
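
Roughly, each of those loops would collapse into a writepage_t callback plus
a call to the common helper.  A sketch only (the callback name is made up,
the f2fs-specific body is elided, and writepage_t still takes a struct page
at this point):

	static int f2fs_writepage_cb(struct page *page,
				     struct writeback_control *wbc,
				     void *data)
	{
		/* per-page f2fs work (cluster merging, compression,
		 * submitting the bio) would live here */
		return 0;
	}

	/* ...and the open-coded loop becomes: */
	ret = write_cache_pages(mapping, wbc, f2fs_writepage_cb, NULL);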


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [RFC PATCH] f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag()
  2022-12-15 19:02               ` Jaegeuk Kim
@ 2023-01-03 20:53                 ` Matthew Wilcox
  0 siblings, 0 replies; 60+ messages in thread
From: Matthew Wilcox @ 2023-01-03 20:53 UTC (permalink / raw)
  To: Jaegeuk Kim
  Cc: linux-kernel, linux-f2fs-devel, Vishal Moola (Oracle),
	linux-mm, linux-fsdevel

On Thu, Dec 15, 2022 at 11:02:24AM -0800, Jaegeuk Kim wrote:
> On 12/12, Vishal Moola (Oracle) wrote:
> > @@ -2994,13 +2998,38 @@ static int f2fs_write_cache_pages(struct address_space *mapping,
> >  		tag_pages_for_writeback(mapping, index, end);
> >  	done_index = index;
> >  	while (!done && !retry && (index <= end)) {
> > -		nr_pages = find_get_pages_range_tag(mapping, &index, end,
> > -				tag, F2FS_ONSTACK_PAGES, pages);
> > -		if (nr_pages == 0)
> > +		nr_pages = 0;
> > +again:
> > +		nr_folios = filemap_get_folios_tag(mapping, &index, end,
> > +				tag, &fbatch);
> 
> Can't folio handle this internally with F2FS_ONSTACK_PAGES and pages?

I really want to discourage filesystems from doing this kind of thing.
The folio_batch is the natural size for doing batches of work, and
having the consistency across all these APIs of passing in a folio_batch
is quite valuable.  I understand f2fs wants to get more memory in a
single batch, but the right way to do that is to use larger folios.
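
For reference, the idiomatic folio_batch consumer loop looks roughly like
this (a generic sketch, with the mapping assumed to be in scope; it is not
taken from the f2fs patch):

	struct folio_batch fbatch;
	pgoff_t index = 0;
	unsigned int i;

	folio_batch_init(&fbatch);
	while (filemap_get_folios_tag(mapping, &index, (pgoff_t)-1,
				      PAGECACHE_TAG_DIRTY, &fbatch)) {
		for (i = 0; i < folio_batch_count(&fbatch); i++) {
			struct folio *folio = fbatch.folios[i];

			/* each entry may be a large folio, so work in
			 * units of folio_size(folio), not PAGE_SIZE */
		}
		folio_batch_release(&fbatch);
		cond_resched();
	}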



_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2022-12-07 20:51           ` Luis Chamberlain
@ 2024-01-25 20:47             ` Matthew Wilcox
  2024-01-25 20:54               ` Luis Chamberlain
  0 siblings, 1 reply; 60+ messages in thread
From: Matthew Wilcox @ 2024-01-25 20:47 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Pankaj Raghav, Yangtao Li, linux-kernel, linux-f2fs-devel,
	vishal.moola, linux-mm, Adam Manzanares, Javier González,
	linux-fsdevel, Jaegeuk Kim

On Wed, Dec 07, 2022 at 12:51:13PM -0800, Luis Chamberlain wrote:
> On Wed, Nov 30, 2022 at 03:18:41PM +0000, Matthew Wilcox wrote:
> > From a filesystem point of view, you need to ensure that you handle folios
> > larger than PAGE_SIZE correctly.  The easiest way is to spread the use
> > of folios throughout the filesystem.  For example, today the first thing
> > we do in f2fs_read_data_folio() is convert the folio back into a page.
> > That works because f2fs hasn't told the kernel that it supports large
> > folios, so the VFS won't create large folios for it.
> > 
> > It's a lot of subtle things.  Here's an obvious one:
> >                         zero_user_segment(page, 0, PAGE_SIZE);
> > There's a folio equivalent that will zero an entire folio.
> > 
> > But then there is code which assumes the number of blocks per page (maybe
> > not in f2fs?) and so on.  Every filesystem will have its own challenges.
> > 
> > One way to approach this is to just enable large folios (see commit
> > 6795801366da or 8549a26308f9) and see what breaks when you run xfstests
> > over it.  Probably quite a lot!
> 
> Me and Pankaj are very interested in helping on this front. And so we'll
> start to organize and talk every week about this to see what is missing.
> First order of business however will be testing so we'll have to
> establish a public baseline to ensure we don't regress. For this we intend
> on using kdevops so that'll be done first.
> 
> If folks have patches they want to test in consideration for folio /
> iomap enhancements feel free to Cc us :)
> 
> After we establish a baseline we can move forward with taking on tasks
> which will help with this conversion.

So ... it's been a year.  How is this project coming along?  There
weren't a lot of commits to f2fs in 2023 that were folio related.


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2024-01-25 20:47             ` Matthew Wilcox
@ 2024-01-25 20:54               ` Luis Chamberlain
  2024-01-26 21:01                 ` Matthew Wilcox
  0 siblings, 1 reply; 60+ messages in thread
From: Luis Chamberlain @ 2024-01-25 20:54 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Pankaj Raghav, Yangtao Li, linux-kernel, linux-f2fs-devel,
	vishal.moola, linux-mm, Adam Manzanares, Javier González,
	linux-fsdevel, Jaegeuk Kim

On Thu, Jan 25, 2024 at 08:47:39PM +0000, Matthew Wilcox wrote:
> On Wed, Dec 07, 2022 at 12:51:13PM -0800, Luis Chamberlain wrote:
> > On Wed, Nov 30, 2022 at 03:18:41PM +0000, Matthew Wilcox wrote:
> > > From a filesystem point of view, you need to ensure that you handle folios
> > > larger than PAGE_SIZE correctly.  The easiest way is to spread the use
> > > of folios throughout the filesystem.  For example, today the first thing
> > > we do in f2fs_read_data_folio() is convert the folio back into a page.
> > > That works because f2fs hasn't told the kernel that it supports large
> > > folios, so the VFS won't create large folios for it.
> > > 
> > > It's a lot of subtle things.  Here's an obvious one:
> > >                         zero_user_segment(page, 0, PAGE_SIZE);
> > > There's a folio equivalent that will zero an entire folio.
> > > 
> > > But then there is code which assumes the number of blocks per page (maybe
> > > not in f2fs?) and so on.  Every filesystem will have its own challenges.
> > > 
> > > One way to approach this is to just enable large folios (see commit
> > > 6795801366da or 8549a26308f9) and see what breaks when you run xfstests
> > > over it.  Probably quite a lot!
> > 
> > Me and Pankaj are very interested in helping on this front. And so we'll
> > start to organize and talk every week about this to see what is missing.
> > First order of business however will be testing so we'll have to
> > establish a public baseline to ensure we don't regress. For this we intend
> > on using kdevops so that'll be done first.
> > 
> > If folks have patches they want to test in consideration for folio /
> > iomap enhancements feel free to Cc us :)
> > 
> > After we establish a baseline we can move forward with taking on tasks
> > which will help with this conversion.
> 
> So ... it's been a year.  How is this project coming along?  There
> weren't a lot of commits to f2fs in 2023 that were folio related.

The review at LSFMM revealed that iomap-based filesystems were the
priority, so that is where the effort has gone. Once we tackle that and
get XFS support we can revisit which filesystem to help out with next.
Testing has been a *huge* part of our endeavor, and naturally getting the
XFS patches up to what is required has just taken a bit more time. But you
can expect patches for that within a month or so.

  Luis


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2024-01-25 20:54               ` Luis Chamberlain
@ 2024-01-26 21:01                 ` Matthew Wilcox
  2024-01-26 21:32                   ` Luis Chamberlain
  0 siblings, 1 reply; 60+ messages in thread
From: Matthew Wilcox @ 2024-01-26 21:01 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Pankaj Raghav, Yangtao Li, linux-kernel, linux-f2fs-devel,
	vishal.moola, linux-mm, Adam Manzanares, Javier González,
	linux-fsdevel, Jaegeuk Kim

On Thu, Jan 25, 2024 at 12:54:47PM -0800, Luis Chamberlain wrote:
> On Thu, Jan 25, 2024 at 08:47:39PM +0000, Matthew Wilcox wrote:
> > On Wed, Dec 07, 2022 at 12:51:13PM -0800, Luis Chamberlain wrote:
> > > Me and Pankaj are very interested in helping on this front. And so we'll
> > > start to organize and talk every week about this to see what is missing.
> > > First order of business however will be testing so we'll have to
> > > establish a public baseline to ensure we don't regress. For this we intend
> > > on using kdevops so that'll be done first.
> > > 
> > > If folks have patches they want to test in consideration for folio /
> > > iomap enhancements feel free to Cc us :)
> > > 
> > > After we establish a baseline we can move forward with taking on tasks
> > > which will help with this conversion.
> > 
> > So ... it's been a year.  How is this project coming along?  There
> > weren't a lot of commits to f2fs in 2023 that were folio related.
> 
> The review at LSFMM revealed that iomap-based filesystems were the
> priority, so that is where the effort has gone. Once we tackle that and
> get the XFS support in place, we can revisit which filesystem to help
> out with next. Testing has been a *huge* part of our endeavor, and
> getting the XFS patches up to the required standard has simply taken a
> bit more time. But you can expect patches for that within a month or so.

Is anyone working on the iomap conversion for f2fs?


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2024-01-26 21:01                 ` Matthew Wilcox
@ 2024-01-26 21:32                   ` Luis Chamberlain
  2024-01-27  7:05                     ` Eric Biggers
  0 siblings, 1 reply; 60+ messages in thread
From: Luis Chamberlain @ 2024-01-26 21:32 UTC (permalink / raw)
  To: Matthew Wilcox, Eric Biggers
  Cc: Pankaj Raghav, Yangtao Li, linux-kernel, linux-f2fs-devel,
	vishal.moola, linux-mm, Adam Manzanares, Javier González,
	linux-fsdevel, Jaegeuk Kim

On Fri, Jan 26, 2024 at 09:01:06PM +0000, Matthew Wilcox wrote:
> On Thu, Jan 25, 2024 at 12:54:47PM -0800, Luis Chamberlain wrote:
> > On Thu, Jan 25, 2024 at 08:47:39PM +0000, Matthew Wilcox wrote:
> > > On Wed, Dec 07, 2022 at 12:51:13PM -0800, Luis Chamberlain wrote:
> > > > Me and Pankaj are very interested in helping on this front. And so we'll
> > > > start to organize and talk every week about this to see what is missing.
> > > > First order of business however will be testing so we'll have to
> > > > establish a public baseline to ensure we don't regress. For this we intend
> > > > on using kdevops so that'll be done first.
> > > > 
> > > > If folks have patches they want to test in consideration for folio /
> > > > iomap enhancements feel free to Cc us :)
> > > > 
> > > > After we establish a baseline we can move forward with taking on tasks
> > > > which will help with this conversion.
> > > 
> > > So ... it's been a year.  How is this project coming along?  There
> > > weren't a lot of commits to f2fs in 2023 that were folio related.
> > 
> > The review at LSFMM revealed that iomap-based filesystems were the
> > priority, so that is where the effort has gone. Once we tackle that and
> > get the XFS support in place, we can revisit which filesystem to help
> > out with next. Testing has been a *huge* part of our endeavor, and
> > getting the XFS patches up to the required standard has simply taken a
> > bit more time. But you can expect patches for that within a month or so.
> 
> Is anyone working on the iomap conversion for f2fs?

Direct I/O has already been converted by Eric in commit a1e09b03e6f5
("f2fs: use iomap for direct I/O"); it's not clear to me whether anyone is
working on buffered I/O. Beyond that, f2fs_commit_super() seems to be the
last buffer-head user, and it's not clear yet what the replacement could be.

Jaegeuk, Eric, have you guys considered this?

  Luis
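
To make the buffered-I/O gap concrete, here is a rough sketch, not a tested
proposal, of how the f2fs_iomap_ops added by that commit could in principle
be reused for buffered reads through the generic iomap helpers. The function
names are invented, and whether those ops are adequate for f2fs's compressed
and encrypted inodes has not been checked:

#include <linux/iomap.h>
#include <linux/pagemap.h>
#include "f2fs.h"	/* assumed to declare f2fs_iomap_ops */

/* Sketch: route buffered reads through iomap using the mapping ops that
 * the direct I/O path already provides.
 */
static int f2fs_example_read_folio(struct file *file, struct folio *folio)
{
	return iomap_read_folio(folio, &f2fs_iomap_ops);
}

static void f2fs_example_readahead(struct readahead_control *rac)
{
	iomap_readahead(rac, &f2fs_iomap_ops);
}

/* These would slot into f2fs_aops as ->read_folio and ->readahead, but only
 * once the compressed-inode special cases have somewhere else to live.
 */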


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs
  2024-01-26 21:32                   ` Luis Chamberlain
@ 2024-01-27  7:05                     ` Eric Biggers
  0 siblings, 0 replies; 60+ messages in thread
From: Eric Biggers @ 2024-01-27  7:05 UTC (permalink / raw)
  To: Luis Chamberlain
  Cc: Pankaj Raghav, Yangtao Li, linux-kernel, Matthew Wilcox,
	linux-f2fs-devel, vishal.moola, linux-mm, Adam Manzanares,
	Jaegeuk Kim, linux-fsdevel, Javier González

On Fri, Jan 26, 2024 at 01:32:05PM -0800, Luis Chamberlain wrote:
> On Fri, Jan 26, 2024 at 09:01:06PM +0000, Matthew Wilcox wrote:
> > On Thu, Jan 25, 2024 at 12:54:47PM -0800, Luis Chamberlain wrote:
> > > On Thu, Jan 25, 2024 at 08:47:39PM +0000, Matthew Wilcox wrote:
> > > > On Wed, Dec 07, 2022 at 12:51:13PM -0800, Luis Chamberlain wrote:
> > > > > Me and Pankaj are very interested in helping on this front. And so we'll
> > > > > start to organize and talk every week about this to see what is missing.
> > > > > First order of business however will be testing so we'll have to
> > > > > establish a public baseline to ensure we don't regress. For this we intend
> > > > > on using kdevops so that'll be done first.
> > > > > 
> > > > > If folks have patches they want to test in consideration for folio /
> > > > > iomap enhancements feel free to Cc us :)
> > > > > 
> > > > > After we establish a baseline we can move forward with taking on tasks
> > > > > which will help with this conversion.
> > > > 
> > > > So ... it's been a year.  How is this project coming along?  There
> > > > weren't a lot of commits to f2fs in 2023 that were folio related.
> > > 
> > > The review at LSFMM revealed that iomap-based filesystems were the
> > > priority, so that is where the effort has gone. Once we tackle that and
> > > get the XFS support in place, we can revisit which filesystem to help
> > > out with next. Testing has been a *huge* part of our endeavor, and
> > > getting the XFS patches up to the required standard has simply taken a
> > > bit more time. But you can expect patches for that within a month or so.
> > 
> > Is anyone working on the iomap conversion for f2fs?
> 
> Direct I/O has already been converted by Eric in commit a1e09b03e6f5
> ("f2fs: use iomap for direct I/O"); it's not clear to me whether anyone is
> working on buffered I/O. Beyond that, f2fs_commit_super() seems to be the
> last buffer-head user, and it's not clear yet what the replacement could be.
> 
> Jaegeuk, Eric, have you guys considered this?
> 

Sure, I've *considered* that, along with other requested filesystem
modernization projects such as converting f2fs to use the new mount API and
finishing ext4's conversion to iomap.  But I haven't had time to work on these
projects, nor to get very involved in f2fs beyond what's needed to maintain the
fscrypt and fsverity support.  I'm not anywhere close to a full-time filesystem
developer.  I did implement the f2fs iomap direct I/O support two years ago
because it made the fscrypt direct I/O support easier.  Note that these types of
changes are fairly disruptive, and there were bugs that resulted from my
patches, despite my best efforts.  It's necessary for someone to get deeply
involved in these types of changes and follow them all the way through.

- Eric


_______________________________________________
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel

^ permalink raw reply	[flat|nested] 60+ messages in thread

end of thread, other threads:[~2024-01-27  7:05 UTC | newest]

Thread overview: 60+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-10-17 20:24 [f2fs-dev] [PATCH v3 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
2022-10-24 19:36   ` Vishal Moola
2022-10-24 19:38   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 02/23] filemap: Added filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-24 19:42   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-24 20:06   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 04/23] page-writeback: Convert write_cache_pages() " Vishal Moola (Oracle)
2022-10-24 20:12   ` Matthew Wilcox
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 05/23] afs: Convert afs_writepages_region() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 06/23] btrfs: Convert btree_write_cache_pages() to use filemap_get_folio_tag() Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 07/23] btrfs: Convert extent_write_cache_pages() to use filemap_get_folios_tag() Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 08/23] ceph: Convert ceph_writepages_start() " Vishal Moola (Oracle)
2022-10-28 17:20   ` Jeff Layton
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 09/23] cifs: Convert wdata_alloc_and_fillpages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 10/23] ext4: Convert mpage_prepare_extent_to_map() " Vishal Moola (Oracle)
2022-10-24 19:26   ` Vishal Moola
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 11/23] f2fs: Convert f2fs_fsync_node_pages() " Vishal Moola (Oracle)
2022-10-24 19:31   ` Vishal Moola
2022-11-10 18:51     ` Vishal Moola
2022-10-29  4:46   ` Chao Yu
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 12/23] f2fs: Convert f2fs_flush_inline_data() " Vishal Moola (Oracle)
2022-10-29  4:47   ` Chao Yu
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 13/23] f2fs: Convert f2fs_sync_node_pages() " Vishal Moola (Oracle)
2022-10-29  4:47   ` Chao Yu
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 14/23] f2fs: Convert f2fs_write_cache_pages() " Vishal Moola (Oracle)
2022-11-14  7:02   ` Chao Yu
2022-11-14 21:38     ` Vishal Moola
2022-11-23  2:26       ` Vishal Moola
2022-11-23  7:51         ` Vishal Moola
2022-12-05 20:34         ` Vishal Moola
2022-12-12 14:41           ` Chao Yu
2022-12-12 19:13             ` [f2fs-dev] [RFC PATCH] " Vishal Moola (Oracle)
2022-12-15  1:48               ` Chao Yu
2022-12-15 18:45                 ` Matthew Wilcox
2022-12-21 17:17                   ` Vishal Moola
2022-12-23  8:07                     ` Christoph Hellwig
2022-12-15 19:02               ` Jaegeuk Kim
2023-01-03 20:53                 ` Matthew Wilcox
2022-11-29 19:14     ` [f2fs-dev] [PATCH v3 14/23] " Matthew Wilcox
2022-11-30 12:48       ` [f2fs-dev] [PATCH] f2fs: Support enhanced hot/cold data separation for f2fs Yangtao Li via Linux-f2fs-devel
2022-11-30 15:18         ` Matthew Wilcox
2022-12-07 20:51           ` Luis Chamberlain
2024-01-25 20:47             ` Matthew Wilcox
2024-01-25 20:54               ` Luis Chamberlain
2024-01-26 21:01                 ` Matthew Wilcox
2024-01-26 21:32                   ` Luis Chamberlain
2024-01-27  7:05                     ` Eric Biggers
2022-11-30 12:51       ` [f2fs-dev] [PATCH]f2fs: Convert f2fs_write_cache_pages() to use filemap_get_folios_tag() Yangtao Li via Linux-f2fs-devel
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 15/23] f2fs: Convert last_fsync_dnode() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 16/23] f2fs: Convert f2fs_sync_meta_pages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 17/23] gfs2: Convert gfs2_write_cache_jdata() " Vishal Moola (Oracle)
2022-10-24 19:23   ` Vishal Moola
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 18/23] nilfs2: Convert nilfs_lookup_dirty_data_buffers() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 19/23] nilfs2: Convert nilfs_lookup_dirty_node_buffers() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 20/23] nilfs2: Convert nilfs_btree_lookup_dirty_buffers() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 21/23] nilfs2: Convert nilfs_copy_dirty_pages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 22/23] nilfs2: Convert nilfs_clear_dirty_pages() " Vishal Moola (Oracle)
2022-10-17 20:24 ` [f2fs-dev] [PATCH v3 23/23] filemap: Remove find_get_pages_range_tag() Vishal Moola (Oracle)

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).