From: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
To: linux-fsdevel@vger.kernel.org
Cc: linux-afs@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-btrfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-ext4@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, cluster-devel@redhat.com,
	linux-nilfs@vger.kernel.org, linux-mm@kvack.org,
	"Vishal Moola (Oracle)" <vishal.moola@gmail.com>,
	Chao Yu <chao@kernel.org>
Subject: [PATCH v5 16/23] f2fs: Convert f2fs_sync_meta_pages() to use filemap_get_folios_tag()
Date: Wed,  4 Jan 2023 13:14:41 -0800	[thread overview]
Message-ID: <20230104211448.4804-17-vishal.moola@gmail.com> (raw)
In-Reply-To: <20230104211448.4804-1-vishal.moola@gmail.com>

Convert the function to use folios throughout. This is in preparation for the
removal of find_get_pages_range_tag(). This change removes 5 calls to
compound_head().

Initially the function checked whether the previous page index was truly the
previous page, i.e. exactly one index behind the current page. To convert to
folios and keep this check, the comparison becomes
folio->index != prev + folio_nr_pages(previous folio), since we don't know
how many pages a given folio contains.

At index i == 0 the check is guaranteed to succeed, so to work around the
indexing bounds we can simply skip the check for that specific index. This
makes the initial assignment of prev trivial, so I removed that as well.
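
For clarity, here is a minimal before/after sketch of that contiguity check;
it simply restates the hunk in the diff below:

	/* Before: pages in the batch are always one index apart. */
	if (prev == ULONG_MAX)
		prev = page->index - 1;
	if (nr_to_write != LONG_MAX && page->index != prev + 1) {
		pagevec_release(&pvec);
		goto stop;
	}

	/* After: the previous folio may span several page indices, and
	 * the always-true i == 0 case is skipped instead of priming prev.
	 */
	if (nr_to_write != LONG_MAX && i != 0 &&
	    folio->index != prev + folio_nr_pages(fbatch.folios[i - 1])) {
		folio_batch_release(&fbatch);
		goto stop;
	}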

Also modified a comment in commit_checkpoint for consistency.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Acked-by: Chao Yu <chao@kernel.org>
---
 fs/f2fs/checkpoint.c | 49 +++++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 23 deletions(-)

diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 56f7d0d6a8b2..5a5515d83a1b 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -395,59 +395,62 @@ long f2fs_sync_meta_pages(struct f2fs_sb_info *sbi, enum page_type type,
 {
 	struct address_space *mapping = META_MAPPING(sbi);
 	pgoff_t index = 0, prev = ULONG_MAX;
-	struct pagevec pvec;
+	struct folio_batch fbatch;
 	long nwritten = 0;
-	int nr_pages;
+	int nr_folios;
 	struct writeback_control wbc = {
 		.for_reclaim = 0,
 	};
 	struct blk_plug plug;
 
-	pagevec_init(&pvec);
+	folio_batch_init(&fbatch);
 
 	blk_start_plug(&plug);
 
-	while ((nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
-				PAGECACHE_TAG_DIRTY))) {
+	while ((nr_folios = filemap_get_folios_tag(mapping, &index,
+					(pgoff_t)-1,
+					PAGECACHE_TAG_DIRTY, &fbatch))) {
 		int i;
 
-		for (i = 0; i < nr_pages; i++) {
-			struct page *page = pvec.pages[i];
+		for (i = 0; i < nr_folios; i++) {
+			struct folio *folio = fbatch.folios[i];
 
-			if (prev == ULONG_MAX)
-				prev = page->index - 1;
-			if (nr_to_write != LONG_MAX && page->index != prev + 1) {
-				pagevec_release(&pvec);
+			if (nr_to_write != LONG_MAX && i != 0 &&
+					folio->index != prev +
+					folio_nr_pages(fbatch.folios[i-1])) {
+				folio_batch_release(&fbatch);
 				goto stop;
 			}
 
-			lock_page(page);
+			folio_lock(folio);
 
-			if (unlikely(page->mapping != mapping)) {
+			if (unlikely(folio->mapping != mapping)) {
 continue_unlock:
-				unlock_page(page);
+				folio_unlock(folio);
 				continue;
 			}
-			if (!PageDirty(page)) {
+			if (!folio_test_dirty(folio)) {
 				/* someone wrote it for us */
 				goto continue_unlock;
 			}
 
-			f2fs_wait_on_page_writeback(page, META, true, true);
+			f2fs_wait_on_page_writeback(&folio->page, META,
+					true, true);
 
-			if (!clear_page_dirty_for_io(page))
+			if (!folio_clear_dirty_for_io(folio))
 				goto continue_unlock;
 
-			if (__f2fs_write_meta_page(page, &wbc, io_type)) {
-				unlock_page(page);
+			if (__f2fs_write_meta_page(&folio->page, &wbc,
+						io_type)) {
+				folio_unlock(folio);
 				break;
 			}
-			nwritten++;
-			prev = page->index;
+			nwritten += folio_nr_pages(folio);
+			prev = folio->index;
 			if (unlikely(nwritten >= nr_to_write))
 				break;
 		}
-		pagevec_release(&pvec);
+		folio_batch_release(&fbatch);
 		cond_resched();
 	}
 stop:
@@ -1403,7 +1406,7 @@ static void commit_checkpoint(struct f2fs_sb_info *sbi,
 	};
 
 	/*
-	 * pagevec_lookup_tag and lock_page again will take
+	 * filemap_get_folios_tag and lock_page again will take
 	 * some extra time. Therefore, f2fs_update_meta_pages and
 	 * f2fs_sync_meta_pages are combined in this function.
 	 */
-- 
2.38.1


Thread overview: 121+ messages
2023-01-04 21:14 [PATCH v5 00/23] Convert to filemap_get_folios_tag() Vishal Moola (Oracle)
2023-01-04 21:14 ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14 ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 01/23] pagemap: Add filemap_grab_folio() Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 02/23] filemap: Added filemap_get_folios_tag() Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 03/23] filemap: Convert __filemap_fdatawait_range() to use filemap_get_folios_tag() Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 04/23] page-writeback: Convert write_cache_pages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 05/23] afs: Convert afs_writepages_region() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 06/23] btrfs: Convert btree_write_cache_pages() to use filemap_get_folio_tag() Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 07/23] btrfs: Convert extent_write_cache_pages() to use filemap_get_folios_tag() Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 08/23] ceph: Convert ceph_writepages_start() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 09/23] cifs: Convert wdata_alloc_and_fillpages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-12 17:19   ` Vishal Moola
2023-01-12 17:19     ` Vishal Moola
2023-01-12 17:19     ` [Cluster-devel] " Vishal Moola
2023-01-12 17:19     ` [f2fs-dev] " Vishal Moola
2023-01-13  3:03     ` Tom Talpey
2023-01-13  3:03       ` [Cluster-devel] " Tom Talpey
2023-01-13  3:03       ` [f2fs-dev] " Tom Talpey
2023-01-12 19:23   ` Paulo Alcantara
2023-01-12 19:23     ` Paulo Alcantara via Linux-f2fs-devel
2023-01-12 19:23     ` [Cluster-devel] " Paulo Alcantara
2023-01-12 19:23     ` [f2fs-dev] " Paulo Alcantara via Linux-f2fs-devel
2023-01-04 21:14 ` [PATCH v5 10/23] ext4: Convert mpage_prepare_extent_to_map() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-12 17:16   ` Vishal Moola
2023-01-12 17:16     ` Vishal Moola
2023-01-12 17:16     ` [Cluster-devel] " Vishal Moola
2023-01-12 17:16     ` [f2fs-dev] " Vishal Moola
2023-01-04 21:14 ` [PATCH v5 11/23] f2fs: Convert f2fs_fsync_node_pages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 12/23] f2fs: Convert f2fs_flush_inline_data() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 13/23] f2fs: Convert f2fs_sync_node_pages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 14/23] f2fs: Convert f2fs_write_cache_pages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-12 10:17   ` Chao Yu
2023-01-12 10:17     ` [Cluster-devel] " Chao Yu
2023-01-12 10:17     ` Chao Yu
2023-01-04 21:14 ` [PATCH v5 15/23] f2fs: Convert last_fsync_dnode() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` Vishal Moola (Oracle) [this message]
2023-01-04 21:14   ` [PATCH v5 16/23] f2fs: Convert f2fs_sync_meta_pages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [f2fs-dev] [PATCH v5 17/23] gfs2: Convert gfs2_write_cache_jdata() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-06  7:57   ` [Cluster-devel] " Andreas Gruenbacher
2023-01-06  7:57     ` Andreas Gruenbacher
2023-01-06  7:57     ` [Cluster-devel] " Andreas Gruenbacher
2023-01-06  7:57     ` [f2fs-dev] " Andreas Gruenbacher
2023-01-04 21:14 ` [PATCH v5 18/23] nilfs2: Convert nilfs_lookup_dirty_data_buffers() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 19/23] nilfs2: Convert nilfs_lookup_dirty_node_buffers() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 20/23] nilfs2: Convert nilfs_btree_lookup_dirty_buffers() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 21/23] nilfs2: Convert nilfs_copy_dirty_pages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 22/23] nilfs2: Convert nilfs_clear_dirty_pages() " Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-01-04 21:14 ` [PATCH v5 23/23] filemap: Remove find_get_pages_range_tag() Vishal Moola (Oracle)
2023-01-04 21:14   ` Vishal Moola (Oracle)
2023-01-04 21:14   ` [Cluster-devel] " Vishal Moola
2023-01-04 21:14   ` [f2fs-dev] " Vishal Moola (Oracle)
2023-02-28  1:01 ` [f2fs-dev] [PATCH v5 00/23] Convert to filemap_get_folios_tag() patchwork-bot+f2fs
2023-02-28  1:01   ` patchwork-bot+f2fs
2023-02-28  1:01   ` [Cluster-devel] [f2fs-dev] " patchwork-bot+f2fs
2023-02-28  1:01   ` patchwork-bot+f2fs
