* return an ERR_PTR from __filemap_get_folio
@ 2023-01-18  9:43 Christoph Hellwig
  2023-01-18  9:43 ` [PATCH 1/9] mm: don't look at xarray value entries in split_huge_pages_in_file Christoph Hellwig
                   ` (8 more replies)
  0 siblings, 9 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

Hi all,

__filemap_get_folio and its wrappers can return NULL for three different
conditions, which in some cases requires the caller to reverse engineer
the decision making.  This is fixed by returning an ERR_PTR instead of
NULL and thus transporting the reason for the failure.  But to make
that work we first need to ensure that no xa_value special case is
returned, and thus remove the FGP_ENTRY flag.  It turns out that flag
is barely used and can usually be dealt with in a better way.

Note that the shmem patches in here are non-trivial and need some
careful review and testing.
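
To make the caller-visible change concrete, here is a minimal sketch
(hypothetical code, not part of the series; example_write_begin_old/new
are made-up names standing in for callers like iomap_write_begin) of how
the errno currently has to be reverse engineered from the FGP flags, and
how it can simply be propagated once __filemap_get_folio returns an
ERR_PTR:

#include <linux/pagemap.h>

/* Before: a NULL return has to be mapped back to an errno by guessing. */
static int example_write_begin_old(struct address_space *mapping, pgoff_t index,
                bool nowait, struct folio **foliop)
{
        int fgp = FGP_LOCK | FGP_CREAT | (nowait ? FGP_NOWAIT : 0);
        struct folio *folio;

        folio = __filemap_get_folio(mapping, index, fgp,
                        mapping_gfp_mask(mapping));
        if (!folio)
                return nowait ? -EAGAIN : -ENOMEM;      /* reverse engineered */
        *foliop = folio;
        return 0;
}

/* After patch 9: the reason for the failure travels in the ERR_PTR. */
static int example_write_begin_new(struct address_space *mapping, pgoff_t index,
                bool nowait, struct folio **foliop)
{
        int fgp = FGP_LOCK | FGP_CREAT | (nowait ? FGP_NOWAIT : 0);
        struct folio *folio;

        folio = __filemap_get_folio(mapping, index, fgp,
                        mapping_gfp_mask(mapping));
        if (IS_ERR(folio))
                return PTR_ERR(folio);  /* -ENOMEM or -EAGAIN here */
        *foliop = folio;
        return 0;
}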

Diffstat
 fs/afs/dir.c             |   10 +++----
 fs/afs/dir_edit.c        |    2 -
 fs/afs/write.c           |    4 +-
 fs/btrfs/disk-io.c       |    2 -
 fs/btrfs/extent_io.c     |    2 +
 fs/ext4/inode.c          |    2 -
 fs/ext4/move_extent.c    |    8 ++---
 fs/gfs2/lops.c           |    2 +
 fs/hugetlbfs/inode.c     |    2 -
 fs/iomap/buffered-io.c   |    6 ++--
 fs/netfs/buffered_read.c |    4 +-
 fs/nilfs2/page.c         |    6 ++--
 include/linux/pagemap.h  |    4 +-
 include/linux/shmem_fs.h |    1 
 mm/filemap.c             |   27 ++++++++-----------
 mm/folio-compat.c        |    4 +-
 mm/huge_memory.c         |    5 +--
 mm/memcontrol.c          |    2 -
 mm/mincore.c             |    2 -
 mm/shmem.c               |   64 +++++++++++++++++++----------------------------
 mm/swap_state.c          |   17 ++++++------
 mm/swapfile.c            |    4 +-
 mm/truncate.c            |   15 +++++------
 23 files changed, 93 insertions(+), 102 deletions(-)


* [PATCH 1/9] mm: don't look at xarray value entries in split_huge_pages_in_file
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18 13:47   ` Matthew Wilcox
  2023-01-18  9:43 ` [PATCH 2/9] mm: make mapping_get_entry available outside of filemap.c Christoph Hellwig
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

split_huge_pages_in_file never wants to do anything with the special
value entries.  Switch to using filemap_get_folio to not even see them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/huge_memory.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1d6977dc6b31ba..a2830019aaa017 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3100,11 +3100,10 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 	mapping = candidate->f_mapping;
 
 	for (index = off_start; index < off_end; index += nr_pages) {
-		struct folio *folio = __filemap_get_folio(mapping, index,
-						FGP_ENTRY, 0);
+		struct folio *folio = filemap_get_folio(mapping, index);
 
 		nr_pages = 1;
-		if (xa_is_value(folio) || !folio)
+		if (!folio)
 			continue;
 
 		if (!folio_test_large(folio))
-- 
2.39.0



* [PATCH 2/9] mm: make mapping_get_entry available outside of filemap.c
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
  2023-01-18  9:43 ` [PATCH 1/9] mm: don't look at xarray value entries in split_huge_pages_in_file Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18 13:49   ` Matthew Wilcox
  2023-01-18  9:43 ` [PATCH 3/9] mm: use filemap_get_entry in filemap_get_incore_folio Christoph Hellwig
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

mapping_get_entry is useful for page cache API users that need to know
about xa_value internals.  Rename it to filemap_get_entry and make it
available in pagemap.h.
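
For context, a rough sketch of how such a user consumes the helper once
it is exported (hypothetical caller, not part of this patch;
example_get_present_folio is a made-up name):

#include <linux/pagemap.h>
#include <linux/xarray.h>

/*
 * Return the folio at @index, or NULL if the slot is empty or holds a
 * shadow/swap/DAX value entry.  Illustration only.
 */
static struct folio *example_get_present_folio(struct address_space *mapping,
                pgoff_t index)
{
        void *entry = filemap_get_entry(mapping, index);

        if (!entry || xa_is_value(entry)) {
                /*
                 * Value entries are not folios.  A caller that cares about
                 * them (like filemap_get_incore_folio in the next patch)
                 * would decode them with radix_to_swp_entry() instead of
                 * ignoring them.
                 */
                return NULL;
        }
        /* A real folio is returned with an elevated refcount. */
        return entry;
}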

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/pagemap.h | 1 +
 mm/filemap.c            | 6 +++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 9f108168377195..24dedf6b12be49 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -507,6 +507,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 #define FGP_ENTRY		0x00000080
 #define FGP_STABLE		0x00000100
 
+void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
 struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		int fgp_flags, gfp_t gfp);
 struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
diff --git a/mm/filemap.c b/mm/filemap.c
index c915ded191f03f..ed0583f9e27512 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1834,7 +1834,7 @@ EXPORT_SYMBOL(page_cache_prev_miss);
  */
 
 /*
- * mapping_get_entry - Get a page cache entry.
+ * filemap_get_entry - Get a page cache entry.
  * @mapping: the address_space to search
  * @index: The page cache index.
  *
@@ -1845,7 +1845,7 @@ EXPORT_SYMBOL(page_cache_prev_miss);
  *
  * Return: The folio, swap or shadow entry, %NULL if nothing is found.
  */
-static void *mapping_get_entry(struct address_space *mapping, pgoff_t index)
+void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
 {
 	XA_STATE(xas, &mapping->i_pages, index);
 	struct folio *folio;
@@ -1915,7 +1915,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 	struct folio *folio;
 
 repeat:
-	folio = mapping_get_entry(mapping, index);
+	folio = filemap_get_entry(mapping, index);
 	if (xa_is_value(folio)) {
 		if (fgp_flags & FGP_ENTRY)
 			return folio;
-- 
2.39.0



* [PATCH 3/9] mm: use filemap_get_entry in filemap_get_incore_folio
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
  2023-01-18  9:43 ` [PATCH 1/9] mm: don't look at xarray value entries in split_huge_pages_in_file Christoph Hellwig
  2023-01-18  9:43 ` [PATCH 2/9] mm: make mapping_get_entry available outside of filemap.c Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18 13:49   ` Matthew Wilcox
  2023-01-18  9:43 ` [PATCH 4/9] shmem: remove shmem_get_partial_folio Christoph Hellwig
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

filemap_get_incore_folio wants to look at the details of xa_is_value
entries, but doesn't need any of the other logic in filemap_get_folio.
Switch it to use the lower-level filemap_get_entry interface.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/swap_state.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index cb9aaa00951d99..c39ea34bc4fc10 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -380,7 +380,7 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 {
 	swp_entry_t swp;
 	struct swap_info_struct *si;
-	struct folio *folio = __filemap_get_folio(mapping, index, FGP_ENTRY, 0);
+	struct folio *folio = filemap_get_entry(mapping, index);
 
 	if (!xa_is_value(folio))
 		goto out;
-- 
2.39.0



* [PATCH 4/9] shmem: remove shmem_get_partial_folio
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
                   ` (2 preceding siblings ...)
  2023-01-18  9:43 ` [PATCH 3/9] mm: use filemap_get_entry in filemap_get_incore_folio Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18 13:57   ` Brian Foster
  2023-01-18  9:43 ` [PATCH 5/9] shmem: open code the page cache lookup in shmem_get_folio_gfp Christoph Hellwig
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

Add a new SGP_FIND mode for shmem_get_folio that works like SGP_READ,
but does not check i_size.  Use that instead of the open coded page
cache lookup in shmem_get_partial_folio, and remove that helper.  Note
that this is a behavior change in that it now reads in swap cache
entries for offsets outside i_size, possibly causing a little bit of
extra work.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/shmem_fs.h |  1 +
 mm/shmem.c               | 46 ++++++++++++----------------------------
 2 files changed, 15 insertions(+), 32 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d09d54be4ffd99..7ba160ac066e5e 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -105,6 +105,7 @@ enum sgp_type {
 	SGP_CACHE,	/* don't exceed i_size, may allocate page */
 	SGP_WRITE,	/* may exceed i_size, may allocate !Uptodate page */
 	SGP_FALLOC,	/* like SGP_WRITE, but make existing page Uptodate */
+	SGP_FIND,	/* like SGP_READ, but also read outside i_size */
 };
 
 int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
diff --git a/mm/shmem.c b/mm/shmem.c
index 9e1015cbad29f9..e9500fea43a8dc 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -877,27 +877,6 @@ void shmem_unlock_mapping(struct address_space *mapping)
 	}
 }
 
-static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
-{
-	struct folio *folio;
-
-	/*
-	 * At first avoid shmem_get_folio(,,,SGP_READ): that fails
-	 * beyond i_size, and reports fallocated pages as holes.
-	 */
-	folio = __filemap_get_folio(inode->i_mapping, index,
-					FGP_ENTRY | FGP_LOCK, 0);
-	if (!xa_is_value(folio))
-		return folio;
-	/*
-	 * But read a page back from swap if any of it is within i_size
-	 * (although in some cases this is just a waste of time).
-	 */
-	folio = NULL;
-	shmem_get_folio(inode, index, &folio, SGP_READ);
-	return folio;
-}
-
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
@@ -957,7 +936,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		goto whole_folios;
 
 	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
-	folio = shmem_get_partial_folio(inode, lstart >> PAGE_SHIFT);
+	folio = NULL;
+	shmem_get_folio(inode, lstart >> PAGE_SHIFT, &folio, SGP_FIND);
 	if (folio) {
 		same_folio = lend < folio_pos(folio) + folio_size(folio);
 		folio_mark_dirty(folio);
@@ -971,14 +951,16 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		folio = NULL;
 	}
 
-	if (!same_folio)
-		folio = shmem_get_partial_folio(inode, lend >> PAGE_SHIFT);
-	if (folio) {
-		folio_mark_dirty(folio);
-		if (!truncate_inode_partial_folio(folio, lstart, lend))
-			end = folio->index;
-		folio_unlock(folio);
-		folio_put(folio);
+	if (!same_folio) {
+		folio = NULL;
+		shmem_get_folio(inode, lend >> PAGE_SHIFT, &folio, SGP_FIND);
+		if (folio) {
+			folio_mark_dirty(folio);
+			if (!truncate_inode_partial_folio(folio, lstart, lend))
+				end = folio->index;
+			folio_unlock(folio);
+			folio_put(folio);
+		}
 	}
 
 whole_folios:
@@ -1900,7 +1882,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		if (folio_test_uptodate(folio))
 			goto out;
 		/* fallocated folio */
-		if (sgp != SGP_READ)
+		if (sgp != SGP_READ && sgp != SGP_FIND)
 			goto clear;
 		folio_unlock(folio);
 		folio_put(folio);
@@ -1911,7 +1893,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	 * SGP_NOALLOC: fail on hole, with NULL folio, letting caller fail.
 	 */
 	*foliop = NULL;
-	if (sgp == SGP_READ)
+	if (sgp == SGP_READ || sgp == SGP_FIND)
 		return 0;
 	if (sgp == SGP_NOALLOC)
 		return -ENOENT;
-- 
2.39.0



* [PATCH 5/9] shmem: open code the page cache lookup in shmem_get_folio_gfp
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
                   ` (3 preceding siblings ...)
  2023-01-18  9:43 ` [PATCH 4/9] shmem: remove shmem_get_partial_folio Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18  9:43 ` [PATCH 6/9] mm: remove FGP_ENTRY Christoph Hellwig
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

Use the very low level filemap_get_entry helper to look up the
entry in the xarray, and then:

 - don't bother locking the folio if only doing a userfault notification
 - open code locking the page and checking for truncation in a related
   code block

This will allow us to eventually remove the FGP_ENTRY flag.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/shmem.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index e9500fea43a8dc..769107f376562f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1856,12 +1856,10 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	sbinfo = SHMEM_SB(inode->i_sb);
 	charge_mm = vma ? vma->vm_mm : NULL;
 
-	folio = __filemap_get_folio(mapping, index, FGP_ENTRY | FGP_LOCK, 0);
+	folio = filemap_get_entry(mapping, index);
 	if (folio && vma && userfaultfd_minor(vma)) {
-		if (!xa_is_value(folio)) {
-			folio_unlock(folio);
+		if (!xa_is_value(folio))
 			folio_put(folio);
-		}
 		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
 		return 0;
 	}
@@ -1877,6 +1875,14 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	}
 
 	if (folio) {
+		folio_lock(folio);
+
+		/* Has the page been truncated? */
+		if (unlikely(folio->mapping != mapping)) {
+			folio_unlock(folio);
+			folio_put(folio);
+			goto repeat;
+		}
 		if (sgp == SGP_WRITE)
 			folio_mark_accessed(folio);
 		if (folio_test_uptodate(folio))
-- 
2.39.0



* [PATCH 6/9] mm: remove FGP_ENTRY
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
                   ` (4 preceding siblings ...)
  2023-01-18  9:43 ` [PATCH 5/9] shmem: open code the page cache lookup in shmem_get_folio_gfp Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18  9:43 ` [PATCH 7/9] gfs2: handle a NULL folio in gfs2_jhead_process_page Christoph Hellwig
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

FGP_ENTRY is unused now, so remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/pagemap.h | 3 +--
 mm/filemap.c            | 7 +------
 mm/folio-compat.c       | 4 ++--
 3 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 24dedf6b12be49..e2208ee36966ea 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -504,8 +504,7 @@ pgoff_t page_cache_prev_miss(struct address_space *mapping,
 #define FGP_NOFS		0x00000010
 #define FGP_NOWAIT		0x00000020
 #define FGP_FOR_MMAP		0x00000040
-#define FGP_ENTRY		0x00000080
-#define FGP_STABLE		0x00000100
+#define FGP_STABLE		0x00000080
 
 void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
 struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
diff --git a/mm/filemap.c b/mm/filemap.c
index ed0583f9e27512..35baadd130795c 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1889,8 +1889,6 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
  *
  * * %FGP_ACCESSED - The folio will be marked accessed.
  * * %FGP_LOCK - The folio is returned locked.
- * * %FGP_ENTRY - If there is a shadow / swap / DAX entry, return it
- *   instead of allocating a new folio to replace it.
  * * %FGP_CREAT - If no page is present then a new page is allocated using
  *   @gfp and added to the page cache and the VM's LRU list.
  *   The page is returned locked and with an increased refcount.
@@ -1916,11 +1914,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 
 repeat:
 	folio = filemap_get_entry(mapping, index);
-	if (xa_is_value(folio)) {
-		if (fgp_flags & FGP_ENTRY)
-			return folio;
+	if (xa_is_value(folio))
 		folio = NULL;
-	}
 	if (!folio)
 		goto no_page;
 
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index 18c48b55792635..f3841b4977b68e 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -97,8 +97,8 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
 	struct folio *folio;
 
 	folio = __filemap_get_folio(mapping, index, fgp_flags, gfp);
-	if (!folio || xa_is_value(folio))
-		return &folio->page;
+	if (!folio)
+		return NULL;
 	return folio_file_page(folio, index);
 }
 EXPORT_SYMBOL(pagecache_get_page);
-- 
2.39.0



* [PATCH 7/9] gfs2: handle a NULL folio in gfs2_jhead_process_page
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
                   ` (5 preceding siblings ...)
  2023-01-18  9:43 ` [PATCH 6/9] mm: remove FGP_ENTRY Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18 16:00   ` Matthew Wilcox
  2023-01-18  9:43 ` [PATCH 8/9] btrfs: handle a NULL folio in extent_range_redirty_for_io Christoph Hellwig
  2023-01-18  9:43 ` [PATCH 9/9] mm: return an ERR_PTR from __filemap_get_folio Christoph Hellwig
  8 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

filemap_get_folio can return NULL, so exit early for that case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/gfs2/lops.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index 1902413d5d123e..51d4b610127cdb 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -472,6 +472,8 @@ static void gfs2_jhead_process_page(struct gfs2_jdesc *jd, unsigned long index,
 	struct folio *folio;
 
 	folio = filemap_get_folio(jd->jd_inode->i_mapping, index);
+	if (!folio)
+		return;
 
 	folio_wait_locked(folio);
 	if (folio_test_error(folio))
-- 
2.39.0



* [PATCH 8/9] btrfs: handle a NULL folio in extent_range_redirty_for_io
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
                   ` (6 preceding siblings ...)
  2023-01-18  9:43 ` [PATCH 7/9] gfs2: handle a NULL folio in gfs2_jhead_process_page Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18 16:08   ` Matthew Wilcox
  2023-01-18  9:43 ` [PATCH 9/9] mm: return an ERR_PTR from __filemap_get_folio Christoph Hellwig
  8 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

filemap_get_folio can return NULL, so skip those cases.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/btrfs/extent_io.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index d55e4531ffd212..a54d2cf74ba020 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -230,6 +230,8 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
 
 	while (index <= end_index) {
 		folio = filemap_get_folio(mapping, index);
+		if (!folio)
+			continue;
 		filemap_dirty_folio(mapping, folio);
 		folio_account_redirty(folio);
 		index += folio_nr_pages(folio);
-- 
2.39.0



* [PATCH 9/9] mm: return an ERR_PTR from __filemap_get_folio
  2023-01-18  9:43 return an ERR_PTR from __filemap_get_folio Christoph Hellwig
                   ` (7 preceding siblings ...)
  2023-01-18  9:43 ` [PATCH 8/9] btrfs: handle a NULL folio in extent_range_redirty_for_io Christoph Hellwig
@ 2023-01-18  9:43 ` Christoph Hellwig
  2023-01-18 12:39   ` Ryusuke Konishi
  8 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18  9:43 UTC (permalink / raw)
  To: Andrew Morton, Matthew Wilcox, Hugh Dickins
  Cc: linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

Instead of returning NULL for all errors, distinguish between:

 - no entry found and not asked to allocate one (-ENOENT)
 - failed to allocate memory (-ENOMEM)
 - would block (-EAGAIN)

so that callers don't have to guess the error based on the passed
in flags.

Also pass the error through the direct callers: filemap_get_folio,
filemap_lock_folio, filemap_grab_folio and filemap_get_incore_folio.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/afs/dir.c             | 10 +++++-----
 fs/afs/dir_edit.c        |  2 +-
 fs/afs/write.c           |  4 ++--
 fs/btrfs/disk-io.c       |  2 +-
 fs/btrfs/extent_io.c     |  2 +-
 fs/ext4/inode.c          |  2 +-
 fs/ext4/move_extent.c    |  8 ++++----
 fs/gfs2/lops.c           |  2 +-
 fs/hugetlbfs/inode.c     |  2 +-
 fs/iomap/buffered-io.c   |  6 +++---
 fs/netfs/buffered_read.c |  4 ++--
 fs/nilfs2/page.c         |  6 +++---
 mm/filemap.c             | 14 ++++++++------
 mm/folio-compat.c        |  2 +-
 mm/huge_memory.c         |  2 +-
 mm/memcontrol.c          |  2 +-
 mm/mincore.c             |  2 +-
 mm/shmem.c               |  4 ++--
 mm/swap_state.c          | 15 ++++++++-------
 mm/swapfile.c            |  4 ++--
 mm/truncate.c            | 15 ++++++++-------
 21 files changed, 57 insertions(+), 53 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index b7c1f8c84b38aa..41d0b4203870be 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -319,16 +319,16 @@ static struct afs_read *afs_read_dir(struct afs_vnode *dvnode, struct key *key)
 		struct folio *folio;
 
 		folio = filemap_get_folio(mapping, i);
-		if (!folio) {
+		if (IS_ERR(folio)) {
 			if (test_and_clear_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
 				afs_stat_v(dvnode, n_inval);
-
-			ret = -ENOMEM;
 			folio = __filemap_get_folio(mapping,
 						    i, FGP_LOCK | FGP_CREAT,
 						    mapping->gfp_mask);
-			if (!folio)
+			if (IS_ERR(folio)) {
+				ret = PTR_ERR(folio);
 				goto error;
+			}
 			folio_attach_private(folio, (void *)1);
 			folio_unlock(folio);
 		}
@@ -524,7 +524,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
 		 */
 		folio = __filemap_get_folio(dir->i_mapping, ctx->pos / PAGE_SIZE,
 					    FGP_ACCESSED, 0);
-		if (!folio) {
+		if (IS_ERR(folio)) {
 			ret = afs_bad(dvnode, afs_file_error_dir_missing_page);
 			break;
 		}
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index 0ab7752d1b758e..f0eddccbdd9541 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -115,7 +115,7 @@ static struct folio *afs_dir_get_folio(struct afs_vnode *vnode, pgoff_t index)
 	folio = __filemap_get_folio(mapping, index,
 				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
 				    mapping->gfp_mask);
-	if (!folio)
+	if (IS_ERR(folio))
 		clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	else if (folio && !folio_test_private(folio))
 		folio_attach_private(folio, (void *)1);
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 2d3b08b7406ca7..cf1eb0d122c275 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -232,7 +232,7 @@ static void afs_kill_pages(struct address_space *mapping,
 		_debug("kill %lx (to %lx)", index, last);
 
 		folio = filemap_get_folio(mapping, index);
-		if (!folio) {
+		if (IS_ERR(folio)) {
 			next = index + 1;
 			continue;
 		}
@@ -270,7 +270,7 @@ static void afs_redirty_pages(struct writeback_control *wbc,
 		_debug("redirty %llx @%llx", len, start);
 
 		folio = filemap_get_folio(mapping, index);
-		if (!folio) {
+		if (IS_ERR(folio)) {
 			next = index + 1;
 			continue;
 		}
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 7d5da43a89ee7f..f1035e0bcf8c6a 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -4034,7 +4034,7 @@ static int wait_dev_supers(struct btrfs_device *device, int max_mirrors)
 
 		folio = filemap_get_folio(device->bdev->bd_inode->i_mapping,
 				     bytenr >> PAGE_SHIFT);
-		if (!folio) {
+		if (IS_ERR(folio)) {
 			errors++;
 			if (i == 0)
 				primary_failed = true;
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index a54d2cf74ba020..faaab9fae66d66 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -230,7 +230,7 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
 
 	while (index <= end_index) {
 		folio = filemap_get_folio(mapping, index);
-		if (!folio)
+		if (IS_ERR(folio))
 			continue;
 		filemap_dirty_folio(mapping, folio);
 		folio_account_redirty(folio);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index fb6cd994e59afa..ee8f82c7acf9ff 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -5391,7 +5391,7 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode)
 	while (1) {
 		struct folio *folio = filemap_lock_folio(inode->i_mapping,
 				      inode->i_size >> PAGE_SHIFT);
-		if (!folio)
+		if (IS_ERR(folio))
 			return;
 		ret = __ext4_journalled_invalidate_folio(folio, offset,
 						folio_size(folio) - offset);
diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
index 2de9829aed63bf..7bf6d069199cbb 100644
--- a/fs/ext4/move_extent.c
+++ b/fs/ext4/move_extent.c
@@ -141,18 +141,18 @@ mext_folio_double_lock(struct inode *inode1, struct inode *inode2,
 	flags = memalloc_nofs_save();
 	folio[0] = __filemap_get_folio(mapping[0], index1, fgp_flags,
 			mapping_gfp_mask(mapping[0]));
-	if (!folio[0]) {
+	if (IS_ERR(folio[0])) {
 		memalloc_nofs_restore(flags);
-		return -ENOMEM;
+		return PTR_ERR(folio[0]);
 	}
 
 	folio[1] = __filemap_get_folio(mapping[1], index2, fgp_flags,
 			mapping_gfp_mask(mapping[1]));
 	memalloc_nofs_restore(flags);
-	if (!folio[1]) {
+	if (IS_ERR(folio[1])) {
 		folio_unlock(folio[0]);
 		folio_put(folio[0]);
-		return -ENOMEM;
+		return PTR_ERR(folio[1]);
 	}
 	/*
 	 * __filemap_get_folio() may not wait on folio's writeback if
diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
index 51d4b610127cdb..9e8a00cee8afc1 100644
--- a/fs/gfs2/lops.c
+++ b/fs/gfs2/lops.c
@@ -472,7 +472,7 @@ static void gfs2_jhead_process_page(struct gfs2_jdesc *jd, unsigned long index,
 	struct folio *folio;
 
 	folio = filemap_get_folio(jd->jd_inode->i_mapping, index);
-	if (!folio)
+	if (IS_ERR(folio))
 		return;
 
 	folio_wait_locked(folio);
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 48f1a8ad22431e..19dac1fbcd3705 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -697,7 +697,7 @@ static void hugetlbfs_zero_partial_page(struct hstate *h,
 	struct folio *folio;
 
 	folio = filemap_lock_folio(mapping, idx);
-	if (!folio)
+	if (IS_ERR(folio))
 		return;
 
 	start = start & ~huge_page_mask(h);
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 356193e44cf07f..ab5a5a5a3e0283 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -614,8 +614,8 @@ static int iomap_write_begin(struct iomap_iter *iter, loff_t pos,
 
 	folio = __filemap_get_folio(iter->inode->i_mapping, pos >> PAGE_SHIFT,
 			fgp, mapping_gfp_mask(iter->inode->i_mapping));
-	if (!folio) {
-		status = (iter->flags & IOMAP_NOWAIT) ? -EAGAIN : -ENOMEM;
+	if (IS_ERR(folio)) {
+		status = PTR_ERR(folio);
 		goto out_no_page;
 	}
 
@@ -882,7 +882,7 @@ static int iomap_write_delalloc_scan(struct inode *inode,
 		/* grab locked page */
 		folio = filemap_lock_folio(inode->i_mapping,
 				start_byte >> PAGE_SHIFT);
-		if (!folio) {
+		if (IS_ERR(folio)) {
 			start_byte = ALIGN_DOWN(start_byte, PAGE_SIZE) +
 					PAGE_SIZE;
 			continue;
diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index 7679a68e819307..209726a9cfdb9c 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -350,8 +350,8 @@ int netfs_write_begin(struct netfs_inode *ctx,
 retry:
 	folio = __filemap_get_folio(mapping, index, fgp_flags,
 				    mapping_gfp_mask(mapping));
-	if (!folio)
-		return -ENOMEM;
+	if (IS_ERR(folio))
+		return PTR_ERR(folio);
 
 	if (ctx->ops->check_write_begin) {
 		/* Allow the netfs (eg. ceph) to flush conflicts. */
diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
index 41ccd43cd9797f..5cf30827f244c4 100644
--- a/fs/nilfs2/page.c
+++ b/fs/nilfs2/page.c
@@ -259,10 +259,10 @@ int nilfs_copy_dirty_pages(struct address_space *dmap,
 			NILFS_PAGE_BUG(&folio->page, "inconsistent dirty state");
 
 		dfolio = filemap_grab_folio(dmap, folio->index);
-		if (unlikely(!dfolio)) {
+		if (unlikely(IS_ERR(dfolio))) {
 			/* No empty page is added to the page cache */
-			err = -ENOMEM;
 			folio_unlock(folio);
+			err = PTR_ERR(dfolio);
 			break;
 		}
 		if (unlikely(!folio_buffers(folio)))
@@ -311,7 +311,7 @@ void nilfs_copy_back_pages(struct address_space *dmap,
 
 		folio_lock(folio);
 		dfolio = filemap_lock_folio(dmap, index);
-		if (dfolio) {
+		if (!IS_ERR(dfolio)) {
 			/* overwrite existing folio in the destination cache */
 			WARN_ON(folio_test_dirty(dfolio));
 			nilfs_copy_page(&dfolio->page, &folio->page, 0);
diff --git a/mm/filemap.c b/mm/filemap.c
index 35baadd130795c..4037a132f7adcc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1905,7 +1905,7 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
  *
  * If there is a page cache page, it is returned with an increased refcount.
  *
- * Return: The found folio or %NULL otherwise.
+ * Return: The found folio or an ERR_PTR() otherwise.
  */
 struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		int fgp_flags, gfp_t gfp)
@@ -1923,7 +1923,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 		if (fgp_flags & FGP_NOWAIT) {
 			if (!folio_trylock(folio)) {
 				folio_put(folio);
-				return NULL;
+				return ERR_PTR(-EAGAIN);
 			}
 		} else {
 			folio_lock(folio);
@@ -1962,7 +1962,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 
 		folio = filemap_alloc_folio(gfp, 0);
 		if (!folio)
-			return NULL;
+			return ERR_PTR(-ENOMEM);
 
 		if (WARN_ON_ONCE(!(fgp_flags & (FGP_LOCK | FGP_FOR_MMAP))))
 			fgp_flags |= FGP_LOCK;
@@ -1987,6 +1987,8 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
 			folio_unlock(folio);
 	}
 
+	if (!folio)
+		return ERR_PTR(-ENOENT);
 	return folio;
 }
 EXPORT_SYMBOL(__filemap_get_folio);
@@ -3126,7 +3128,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 	 * Do we have something in the page cache already?
 	 */
 	folio = filemap_get_folio(mapping, index);
-	if (likely(folio)) {
+	if (likely(!IS_ERR(folio))) {
 		/*
 		 * We found the page, so try async readahead before waiting for
 		 * the lock.
@@ -3155,7 +3157,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		folio = __filemap_get_folio(mapping, index,
 					  FGP_CREAT|FGP_FOR_MMAP,
 					  vmf->gfp_mask);
-		if (!folio) {
+		if (IS_ERR(folio)) {
 			if (fpin)
 				goto out_retry;
 			filemap_invalidate_unlock_shared(mapping);
@@ -3506,7 +3508,7 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
 		filler = mapping->a_ops->read_folio;
 repeat:
 	folio = filemap_get_folio(mapping, index);
-	if (!folio) {
+	if (IS_ERR(folio)) {
 		folio = filemap_alloc_folio(gfp, 0);
 		if (!folio)
 			return ERR_PTR(-ENOMEM);
diff --git a/mm/folio-compat.c b/mm/folio-compat.c
index f3841b4977b68e..4cd173336d8589 100644
--- a/mm/folio-compat.c
+++ b/mm/folio-compat.c
@@ -97,7 +97,7 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
 	struct folio *folio;
 
 	folio = __filemap_get_folio(mapping, index, fgp_flags, gfp);
-	if (!folio)
+	if (IS_ERR(folio))
 		return NULL;
 	return folio_file_page(folio, index);
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a2830019aaa017..b0c9170632e37c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3103,7 +3103,7 @@ static int split_huge_pages_in_file(const char *file_path, pgoff_t off_start,
 		struct folio *folio = filemap_get_folio(mapping, index);
 
 		nr_pages = 1;
-		if (!folio)
+		if (IS_ERR(folio))
 			continue;
 
 		if (!folio_test_large(folio))
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 893427aded0191..8bcea91099d218 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5693,7 +5693,7 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
 	/* shmem/tmpfs may report page out on swap: account for that too. */
 	index = linear_page_index(vma, addr);
 	folio = filemap_get_incore_folio(vma->vm_file->f_mapping, index);
-	if (!folio)
+	if (IS_ERR(folio))
 		return NULL;
 	return folio_file_page(folio, index);
 }
diff --git a/mm/mincore.c b/mm/mincore.c
index a085a2aeabd8e6..386c1aed1a8aef 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -61,7 +61,7 @@ static unsigned char mincore_page(struct address_space *mapping, pgoff_t index)
 	 * tmpfs's .fault). So swapped out tmpfs mappings are tested here.
 	 */
 	folio = filemap_get_incore_folio(mapping, index);
-	if (folio) {
+	if (!IS_ERR(folio)) {
 		present = folio_test_uptodate(folio);
 		folio_put(folio);
 	}
diff --git a/mm/shmem.c b/mm/shmem.c
index 769107f376562f..676318f95f7b40 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -603,7 +603,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 
 		index = (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT;
 		folio = filemap_get_folio(inode->i_mapping, index);
-		if (!folio)
+		if (IS_ERR(folio))
 			goto drop;
 
 		/* No huge page at the end of the file: nothing to split */
@@ -3187,7 +3187,7 @@ static const char *shmem_get_link(struct dentry *dentry,
 
 	if (!dentry) {
 		folio = filemap_get_folio(inode->i_mapping, 0);
-		if (!folio)
+		if (IS_ERR(folio))
 			return ERR_PTR(-ECHILD);
 		if (PageHWPoison(folio_page(folio, 0)) ||
 		    !folio_test_uptodate(folio)) {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index c39ea34bc4fc10..e853d3eecf55bb 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -330,7 +330,7 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
 	struct folio *folio;
 
 	folio = filemap_get_folio(swap_address_space(entry), swp_offset(entry));
-	if (folio) {
+	if (!IS_ERR(folio)) {
 		bool vma_ra = swap_use_vma_readahead();
 		bool readahead;
 
@@ -360,6 +360,8 @@ struct folio *swap_cache_get_folio(swp_entry_t entry,
 			if (!vma || !vma_ra)
 				atomic_inc(&swapin_readahead_hits);
 		}
+	} else {
+		folio = NULL;
 	}
 
 	return folio;
@@ -383,22 +385,21 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 	struct folio *folio = filemap_get_entry(mapping, index);
 
 	if (!xa_is_value(folio))
-		goto out;
+		return folio;
 	if (!shmem_mapping(mapping))
-		return NULL;
+		return ERR_PTR(-ENOENT);
 
 	swp = radix_to_swp_entry(folio);
 	/* There might be swapin error entries in shmem mapping. */
 	if (non_swap_entry(swp))
-		return NULL;
+		return ERR_PTR(-ENOENT);
 	/* Prevent swapoff from happening to us */
 	si = get_swap_device(swp);
 	if (!si)
-		return NULL;
+		return ERR_PTR(-ENOENT);
 	index = swp_offset(swp);
 	folio = filemap_get_folio(swap_address_space(swp), index);
 	put_swap_device(si);
-out:
 	return folio;
 }
 
@@ -425,7 +426,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 		folio = filemap_get_folio(swap_address_space(entry),
 						swp_offset(entry));
 		put_swap_device(si);
-		if (folio)
+		if (!IS_ERR(folio))
 			return folio_file_page(folio, swp_offset(entry));
 
 		/*
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a5729273480e07..a128b61b6b8c91 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -136,7 +136,7 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si,
 	int ret = 0;
 
 	folio = filemap_get_folio(swap_address_space(entry), offset);
-	if (!folio)
+	if (IS_ERR(folio))
 		return 0;
 	/*
 	 * When this function is called from scan_swap_map_slots() and it's
@@ -2096,7 +2096,7 @@ static int try_to_unuse(unsigned int type)
 
 		entry = swp_entry(type, i);
 		folio = filemap_get_folio(swap_address_space(entry), i);
-		if (!folio)
+		if (IS_ERR(folio))
 			continue;
 
 		/*
diff --git a/mm/truncate.c b/mm/truncate.c
index 7b4ea4c4a46b20..86de31ed4d3238 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -375,7 +375,7 @@ void truncate_inode_pages_range(struct address_space *mapping,
 
 	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
 	folio = __filemap_get_folio(mapping, lstart >> PAGE_SHIFT, FGP_LOCK, 0);
-	if (folio) {
+	if (!IS_ERR(folio)) {
 		same_folio = lend < folio_pos(folio) + folio_size(folio);
 		if (!truncate_inode_partial_folio(folio, lstart, lend)) {
 			start = folio->index + folio_nr_pages(folio);
@@ -387,14 +387,15 @@ void truncate_inode_pages_range(struct address_space *mapping,
 		folio = NULL;
 	}
 
-	if (!same_folio)
+	if (!same_folio) {
 		folio = __filemap_get_folio(mapping, lend >> PAGE_SHIFT,
 						FGP_LOCK, 0);
-	if (folio) {
-		if (!truncate_inode_partial_folio(folio, lstart, lend))
-			end = folio->index;
-		folio_unlock(folio);
-		folio_put(folio);
+		if (!IS_ERR(folio)) {
+			if (!truncate_inode_partial_folio(folio, lstart, lend))
+				end = folio->index;
+			folio_unlock(folio);
+			folio_put(folio);
+		}
 	}
 
 	index = start;
-- 
2.39.0



* Re: [PATCH 9/9] mm: return an ERR_PTR from __filemap_get_folio
  2023-01-18  9:43 ` [PATCH 9/9] mm: return an ERR_PTR from __filemap_get_folio Christoph Hellwig
@ 2023-01-18 12:39   ` Ryusuke Konishi
  2023-01-18 16:42     ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Ryusuke Konishi @ 2023-01-18 12:39 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Matthew Wilcox, Hugh Dickins, linux-afs,
	linux-btrfs, linux-ext4, cluster-devel, linux-mm, linux-xfs,
	linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 7:41 PM Christoph Hellwig wrote:
>
> Instead of returning NULL for all errors, distinguish between:
>
>  - no entry found and not asked to allocated (-ENOENT)
>  - failed to allocate memory (-ENOMEM)
>  - would block (-EAGAIN)
>
> so that callers don't have to guess the error based on the passed
> in flags.
>
> Also pass through the error through the direct callers:

> filemap_get_folio, filemap_lock_folio filemap_grab_folio
> and filemap_get_incore_folio.

As for the comments describing the return values of these callers,
don't they also need to be updated so they no longer say that NULL is
returned on errors?
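
For instance, the filemap_get_folio() kerneldoc could say something like
this (just a sketch of the wording, to be adjusted for each wrapper):

/**
 * filemap_get_folio - Find and get a folio.
 * @mapping: The address_space to search.
 * @index: The page index.
 *
 * Looks up the page cache entry at @mapping & @index.  If a folio is
 * present, it is returned with an increased refcount.
 *
 * Return: A folio or ERR_PTR(-ENOENT) if there is no folio in the cache
 * for this index.  Will not return a shadow, swap or DAX entry.
 */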

Regards,
Ryusuke Konishi


* Re: [PATCH 1/9] mm: don't look at xarray value entries in split_huge_pages_in_file
  2023-01-18  9:43 ` [PATCH 1/9] mm: don't look at xarray value entries in split_huge_pages_in_file Christoph Hellwig
@ 2023-01-18 13:47   ` Matthew Wilcox
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Wilcox @ 2023-01-18 13:47 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Hugh Dickins, linux-afs, linux-btrfs, linux-ext4,
	cluster-devel, linux-mm, linux-xfs, linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 10:43:21AM +0100, Christoph Hellwig wrote:
> split_huge_pages_in_file never wants to do anything with the special
> value enties.  Switch to using filemap_get_folio to not even see them.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>


* Re: [PATCH 2/9] mm: make mapping_get_entry available outside of filemap.c
  2023-01-18  9:43 ` [PATCH 2/9] mm: make mapping_get_entry available outside of filemap.c Christoph Hellwig
@ 2023-01-18 13:49   ` Matthew Wilcox
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Wilcox @ 2023-01-18 13:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Hugh Dickins, linux-afs, linux-btrfs, linux-ext4,
	cluster-devel, linux-mm, linux-xfs, linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 10:43:22AM +0100, Christoph Hellwig wrote:
> mapping_get_entry is useful for page cache API users that need to know
> about xa_value internals.  Rename it and make it available in pagemap.h.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>


* Re: [PATCH 3/9] mm: use filemap_get_entry in filemap_get_incore_folio
  2023-01-18  9:43 ` [PATCH 3/9] mm: use filemap_get_entry in filemap_get_incore_folio Christoph Hellwig
@ 2023-01-18 13:49   ` Matthew Wilcox
  0 siblings, 0 replies; 23+ messages in thread
From: Matthew Wilcox @ 2023-01-18 13:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Hugh Dickins, linux-afs, linux-btrfs, linux-ext4,
	cluster-devel, linux-mm, linux-xfs, linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 10:43:23AM +0100, Christoph Hellwig wrote:
> filemap_get_incore_folio wants to look at the details of xa_is_value
> entries, but doesn't need any of the other logic in filemap_get_folio.
> Switch it to use the lower-level filemap_get_entry interface.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>


* Re: [PATCH 4/9] shmem: remove shmem_get_partial_folio
  2023-01-18  9:43 ` [PATCH 4/9] shmem: remove shmem_get_partial_folio Christoph Hellwig
@ 2023-01-18 13:57   ` Brian Foster
  2023-01-18 16:43     ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Brian Foster @ 2023-01-18 13:57 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Matthew Wilcox, Hugh Dickins, linux-afs,
	linux-btrfs, linux-ext4, cluster-devel, linux-mm, linux-xfs,
	linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 10:43:24AM +0100, Christoph Hellwig wrote:
> Add a new SGP_FIND mode for shmem_get_partial_folio that works like
> SGP_READ, but does not check i_size.  Use that instead of open coding
> the page cache lookup in shmem_get_partial_folio.  Note that this is
> a behavior change in that it reads in swap cache entries for offsets
> outside i_size, possibly causing a little bit of extra work.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/shmem_fs.h |  1 +
>  mm/shmem.c               | 46 ++++++++++++----------------------------
>  2 files changed, 15 insertions(+), 32 deletions(-)
> 
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index d09d54be4ffd99..7ba160ac066e5e 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -105,6 +105,7 @@ enum sgp_type {
>  	SGP_CACHE,	/* don't exceed i_size, may allocate page */
>  	SGP_WRITE,	/* may exceed i_size, may allocate !Uptodate page */
>  	SGP_FALLOC,	/* like SGP_WRITE, but make existing page Uptodate */
> +	SGP_FIND,	/* like SGP_READ, but also read outside i_size */
>  };
>  
>  int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 9e1015cbad29f9..e9500fea43a8dc 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -877,27 +877,6 @@ void shmem_unlock_mapping(struct address_space *mapping)
>  	}
>  }
>  
> -static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
> -{
> -	struct folio *folio;
> -
> -	/*
> -	 * At first avoid shmem_get_folio(,,,SGP_READ): that fails
> -	 * beyond i_size, and reports fallocated pages as holes.
> -	 */
> -	folio = __filemap_get_folio(inode->i_mapping, index,
> -					FGP_ENTRY | FGP_LOCK, 0);

This all seems reasonable to me at a glance, FWIW, but I am a little
curious why this wouldn't be split up into two changes.  I.e., switch this
over to filemap_get_entry() to minimally remove the FGP_ENTRY dependency
without a behavior change, then (perhaps after the next patch) introduce
SGP_FIND in a separate patch. That makes it easier to review and
potentially undo if it happens to pose a problem in the future. Hm?

Brian

> -	if (!xa_is_value(folio))
> -		return folio;
> -	/*
> -	 * But read a page back from swap if any of it is within i_size
> -	 * (although in some cases this is just a waste of time).
> -	 */
> -	folio = NULL;
> -	shmem_get_folio(inode, index, &folio, SGP_READ);
> -	return folio;
> -}
> -
>  /*
>   * Remove range of pages and swap entries from page cache, and free them.
>   * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
> @@ -957,7 +936,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
>  		goto whole_folios;
>  
>  	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
> -	folio = shmem_get_partial_folio(inode, lstart >> PAGE_SHIFT);
> +	folio = NULL;
> +	shmem_get_folio(inode, lstart >> PAGE_SHIFT, &folio, SGP_FIND);
>  	if (folio) {
>  		same_folio = lend < folio_pos(folio) + folio_size(folio);
>  		folio_mark_dirty(folio);
> @@ -971,14 +951,16 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
>  		folio = NULL;
>  	}
>  
> -	if (!same_folio)
> -		folio = shmem_get_partial_folio(inode, lend >> PAGE_SHIFT);
> -	if (folio) {
> -		folio_mark_dirty(folio);
> -		if (!truncate_inode_partial_folio(folio, lstart, lend))
> -			end = folio->index;
> -		folio_unlock(folio);
> -		folio_put(folio);
> +	if (!same_folio) {
> +		folio = NULL;
> +		shmem_get_folio(inode, lend >> PAGE_SHIFT, &folio, SGP_FIND);
> +		if (folio) {
> +			folio_mark_dirty(folio);
> +			if (!truncate_inode_partial_folio(folio, lstart, lend))
> +				end = folio->index;
> +			folio_unlock(folio);
> +			folio_put(folio);
> +		}
>  	}
>  
>  whole_folios:
> @@ -1900,7 +1882,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>  		if (folio_test_uptodate(folio))
>  			goto out;
>  		/* fallocated folio */
> -		if (sgp != SGP_READ)
> +		if (sgp != SGP_READ && sgp != SGP_FIND)
>  			goto clear;
>  		folio_unlock(folio);
>  		folio_put(folio);
> @@ -1911,7 +1893,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
>  	 * SGP_NOALLOC: fail on hole, with NULL folio, letting caller fail.
>  	 */
>  	*foliop = NULL;
> -	if (sgp == SGP_READ)
> +	if (sgp == SGP_READ || sgp == SGP_FIND)
>  		return 0;
>  	if (sgp == SGP_NOALLOC)
>  		return -ENOENT;
> -- 
> 2.39.0
> 
> 



* Re: [PATCH 7/9] gfs2: handle a NULL folio in gfs2_jhead_process_page
  2023-01-18  9:43 ` [PATCH 7/9] gfs2: handle a NULL folio in gfs2_jhead_process_page Christoph Hellwig
@ 2023-01-18 16:00   ` Matthew Wilcox
  2023-01-18 16:24     ` [Cluster-devel] " Andreas Gruenbacher
  0 siblings, 1 reply; 23+ messages in thread
From: Matthew Wilcox @ 2023-01-18 16:00 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Hugh Dickins, linux-afs, linux-btrfs, linux-ext4,
	cluster-devel, linux-mm, linux-xfs, linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 10:43:27AM +0100, Christoph Hellwig wrote:
> filemap_get_folio can return NULL, so exit early for that case.

I'm not sure it can return NULL in this specific case.  As I understand
this code, we're scanning the journal looking for the log head.  We've
just submitted the bio to read this page.  I suppose memory pressure
could theoretically push the page out, but if it does, we're doing the
wrong thing by just returning here; we need to retry reading the page.

Assuming we're not willing to do the work to add that case, I think I'd
rather see the crash in folio_wait_locked() than get data corruption
from failing to find the head of the log.

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/gfs2/lops.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
> index 1902413d5d123e..51d4b610127cdb 100644
> --- a/fs/gfs2/lops.c
> +++ b/fs/gfs2/lops.c
> @@ -472,6 +472,8 @@ static void gfs2_jhead_process_page(struct gfs2_jdesc *jd, unsigned long index,
>  	struct folio *folio;
>  
>  	folio = filemap_get_folio(jd->jd_inode->i_mapping, index);
> +	if (!folio)
> +		return;
>  
>  	folio_wait_locked(folio);
>  	if (folio_test_error(folio))
> -- 
> 2.39.0
> 


* Re: [PATCH 8/9] btrfs: handle a NULL folio in extent_range_redirty_for_io
  2023-01-18  9:43 ` [PATCH 8/9] btrfs: handle a NULL folio in extent_range_redirty_for_io Christoph Hellwig
@ 2023-01-18 16:08   ` Matthew Wilcox
  2023-01-18 16:42     ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Matthew Wilcox @ 2023-01-18 16:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Hugh Dickins, linux-afs, linux-btrfs, linux-ext4,
	cluster-devel, linux-mm, linux-xfs, linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 10:43:28AM +0100, Christoph Hellwig wrote:
> filemap_get_folio can return NULL, skip those cases.

Hmm, I'm not sure that's true.  We have one place that calls
extent_range_redirty_for_io(), and it previously calls
extent_range_clear_dirty_for_io() which has an explicit

                BUG_ON(!page); /* Pages should be in the extent_io_tree */

so I'm going to say this one can't happen either.  I haven't delved far
enough into btrfs to figure out why it can't happen.

> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/btrfs/extent_io.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index d55e4531ffd212..a54d2cf74ba020 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -230,6 +230,8 @@ void extent_range_redirty_for_io(struct inode *inode, u64 start, u64 end)
>  
>  	while (index <= end_index) {
>  		folio = filemap_get_folio(mapping, index);
> +		if (!folio)
> +			continue;
>  		filemap_dirty_folio(mapping, folio);
>  		folio_account_redirty(folio);
>  		index += folio_nr_pages(folio);
> -- 
> 2.39.0
> 


* Re: [Cluster-devel] [PATCH 7/9] gfs2: handle a NULL folio in gfs2_jhead_process_page
  2023-01-18 16:00   ` Matthew Wilcox
@ 2023-01-18 16:24     ` Andreas Gruenbacher
  2023-01-18 16:42       ` Christoph Hellwig
  0 siblings, 1 reply; 23+ messages in thread
From: Andreas Gruenbacher @ 2023-01-18 16:24 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Christoph Hellwig, linux-xfs, linux-nilfs, Hugh Dickins,
	cluster-devel, linux-mm, linux-fsdevel, Andrew Morton,
	linux-ext4, linux-afs, linux-btrfs

[Christoph's email ended up in my spam folder; I hope that was a
one-time-only occurrence.]

On Wed, Jan 18, 2023 at 5:00 PM Matthew Wilcox <willy@infradead.org> wrote:
> On Wed, Jan 18, 2023 at 10:43:27AM +0100, Christoph Hellwig wrote:
> > filemap_get_folio can return NULL, so exit early for that case.
>
> I'm not sure it can return NULL in this specific case.  As I understand
> this code, we're scanning the journal looking for the log head.  We've
> just submitted the bio to read this page.  I suppose memory pressure
> could theoretically push the page out, but if it does, we're doing the
> wrong thing by just returning here; we need to retry reading the page.
>
> Assuming we're not willing to do the work to add that case, I think I'd
> rather see the crash in folio_wait_locked() than get data corruption
> from failing to find the head of the log.
>
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  fs/gfs2/lops.c | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
> > index 1902413d5d123e..51d4b610127cdb 100644
> > --- a/fs/gfs2/lops.c
> > +++ b/fs/gfs2/lops.c
> > @@ -472,6 +472,8 @@ static void gfs2_jhead_process_page(struct gfs2_jdesc *jd, unsigned long index,
> >       struct folio *folio;
> >
> >       folio = filemap_get_folio(jd->jd_inode->i_mapping, index);
> > +     if (!folio)
> > +             return;

We're actually still holding a reference to the folio from the
find_or_create_page() in gfs2_find_jhead() here, so we know that
memory pressure can't push the page out and filemap_get_folio() won't
return NULL.

> >
> >       folio_wait_locked(folio);
> >       if (folio_test_error(folio))
> > --
> > 2.39.0
> >
>

Thanks,
Andreas



* Re: [PATCH 9/9] mm: return an ERR_PTR from __filemap_get_folio
  2023-01-18 12:39   ` Ryusuke Konishi
@ 2023-01-18 16:42     ` Christoph Hellwig
  0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18 16:42 UTC (permalink / raw)
  To: Ryusuke Konishi
  Cc: Christoph Hellwig, Andrew Morton, Matthew Wilcox, Hugh Dickins,
	linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 09:39:08PM +0900, Ryusuke Konishi wrote:
> > Also pass through the error through the direct callers:
> 
> > filemap_get_folio, filemap_lock_folio filemap_grab_folio
> > and filemap_get_incore_folio.
> 
> As for the comments describing the return values of these callers,
> isn't it necessary to rewrite the value from NULL in case of errors ?

Yes, thanks for catching this.


* Re: [Cluster-devel] [PATCH 7/9] gfs2: handle a NULL folio in gfs2_jhead_process_page
  2023-01-18 16:24     ` [Cluster-devel] " Andreas Gruenbacher
@ 2023-01-18 16:42       ` Christoph Hellwig
  0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18 16:42 UTC (permalink / raw)
  To: Andreas Gruenbacher
  Cc: Matthew Wilcox, Christoph Hellwig, linux-xfs, linux-nilfs,
	Hugh Dickins, cluster-devel, linux-mm, linux-fsdevel,
	Andrew Morton, linux-ext4, linux-afs, linux-btrfs

On Wed, Jan 18, 2023 at 05:24:32PM +0100, Andreas Gruenbacher wrote:
> We're actually still holding a reference to the folio from the
> find_or_create_page() in gfs2_find_jhead() here, so we know that
> memory pressure can't push the page out and filemap_get_folio() won't
> return NULL.

Ok, I'll drop this patch.


* Re: [PATCH 8/9] btrfs: handle a NULL folio in extent_range_redirty_for_io
  2023-01-18 16:08   ` Matthew Wilcox
@ 2023-01-18 16:42     ` Christoph Hellwig
  0 siblings, 0 replies; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18 16:42 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Christoph Hellwig, Andrew Morton, Hugh Dickins, linux-afs,
	linux-btrfs, linux-ext4, cluster-devel, linux-mm, linux-xfs,
	linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 04:08:57PM +0000, Matthew Wilcox wrote:
> On Wed, Jan 18, 2023 at 10:43:28AM +0100, Christoph Hellwig wrote:
> > filemap_get_folio can return NULL, skip those cases.
> 
> Hmm, I'm not sure that's true.  We have one place that calls
> extent_range_redirty_for_io(), and it previously calls
> extent_range_clear_dirty_for_io() which has an explicit
> 
>                 BUG_ON(!page); /* Pages should be in the extent_io_tree */
> 
> so I'm going to say this one can't happen either.  I haven't delved far
> enough into btrfs to figure out why it can't happen.

I'll drop this patch for now.



* Re: [PATCH 4/9] shmem: remove shmem_get_partial_folio
  2023-01-18 13:57   ` Brian Foster
@ 2023-01-18 16:43     ` Christoph Hellwig
  2023-01-18 16:50       ` Brian Foster
  0 siblings, 1 reply; 23+ messages in thread
From: Christoph Hellwig @ 2023-01-18 16:43 UTC (permalink / raw)
  To: Brian Foster
  Cc: Christoph Hellwig, Andrew Morton, Matthew Wilcox, Hugh Dickins,
	linux-afs, linux-btrfs, linux-ext4, cluster-devel, linux-mm,
	linux-xfs, linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 08:57:05AM -0500, Brian Foster wrote:
> This all seems reasonable to me at a glance, FWIW, but I am a little
> curious why this wouldn't split up into two changes. I.e., switch this
> over to filemap_get_entry() to minimally remove the FGP_ENTRY dependency
> without a behavior change, then (perhaps after the next patch) introduce
> SGP_FIND in a separate patch. That makes it easier to review and
> potentially undo if it happens to pose a problem in the future. Hm?

The minimal change to switch to filemap_get_entry would require adding
the lock, check-mapping and retry loop and thus a fair amount of code.
So I looked for ways to avoid that and came up with this version.  But
if there is a strong preference to first open code the logic and later
consolidate it, I could do that.
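
Roughly, keeping shmem_get_partial_folio but switching it to
filemap_get_entry would mean open coding something like this (untested
sketch, duplicating what shmem_get_folio_gfp already does):

static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
{
        struct folio *folio;

repeat:
        folio = filemap_get_entry(inode->i_mapping, index);
        if (folio && !xa_is_value(folio)) {
                folio_lock(folio);
                /* Raced with truncation while sleeping on the folio lock? */
                if (unlikely(folio->mapping != inode->i_mapping)) {
                        folio_unlock(folio);
                        folio_put(folio);
                        goto repeat;
                }
                return folio;
        }
        /* value entry or nothing there: read back from swap if within i_size */
        folio = NULL;
        shmem_get_folio(inode, index, &folio, SGP_READ);
        return folio;
}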


* Re: [PATCH 4/9] shmem: remove shmem_get_partial_folio
  2023-01-18 16:43     ` Christoph Hellwig
@ 2023-01-18 16:50       ` Brian Foster
  0 siblings, 0 replies; 23+ messages in thread
From: Brian Foster @ 2023-01-18 16:50 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Matthew Wilcox, Hugh Dickins, linux-afs,
	linux-btrfs, linux-ext4, cluster-devel, linux-mm, linux-xfs,
	linux-fsdevel, linux-nilfs

On Wed, Jan 18, 2023 at 05:43:58PM +0100, Christoph Hellwig wrote:
> On Wed, Jan 18, 2023 at 08:57:05AM -0500, Brian Foster wrote:
> > This all seems reasonable to me at a glance, FWIW, but I am a little
> > curious why this wouldn't split up into two changes. I.e., switch this
> > over to filemap_get_entry() to minimally remove the FGP_ENTRY dependency
> > without a behavior change, then (perhaps after the next patch) introduce
> > SGP_FIND in a separate patch. That makes it easier to review and
> > potentially undo if it happens to pose a problem in the future. Hm?
> 
> The minimal change to filemap_get_entry would require to add the
> lock, check mapping and retry loop and thus add a fair amount of
> code.  So I looked for ways to avoid that and came up with this
> version.  But if there is a strong preference to first open code
> the logic and then later consolidate it I could do that.
> 

Ok.  Not a strong preference from me.  I don't think it's worth
complicating things that much just to split it up.

Brian


