From: Christoph Hellwig <hch@lst.de>
To: Chandan Babu R <chandan.babu@oracle.com>,
	"Darrick J. Wong" <djwong@kernel.org>,
	Hugh Dickins <hughd@google.com>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-xfs@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 15/21] xfs: use shmem_get_folio in xfile_load
Date: Fri, 26 Jan 2024 14:28:57 +0100
Message-ID: <20240126132903.2700077-16-hch@lst.de>
In-Reply-To: <20240126132903.2700077-1-hch@lst.de>

Switch xfile_load to use shmem_get_folio instead of
shmem_read_mapping_page_gfp.  This adds support for large folios and
optimizes reads from unallocated space, because shmem_get_folio with
SGP_READ does not allocate a page just to zero it when it finds a
hole.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
 fs/xfs/scrub/xfile.c | 63 ++++++++++++++++++++------------------------
 1 file changed, 28 insertions(+), 35 deletions(-)
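
Not part of the patch, but to illustrate the calling convention used
below: here is a minimal, self-contained sketch of reading one chunk
through shmem_get_folio() with SGP_READ.  It assumes the
shmem_get_folio() signature exported earlier in this series (inode,
index, folio pointer, sgp type) and a mapping that never hands out
highmem folios, as arranged earlier in the series.  The helper name
read_one_chunk is made up for the example and does not exist in the
tree.

  #include <linux/shmem_fs.h>
  #include <linux/mm.h>
  #include <linux/pagemap.h>

  /* Returns the number of bytes copied or zeroed, or a negative errno. */
  static int read_one_chunk(struct inode *inode, void *buf, loff_t pos,
                  size_t count)
  {
          struct folio    *folio = NULL;
          unsigned int    len;
          int             error;

          error = shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio, SGP_READ);
          if (error < 0)
                  return error;

          if (!folio) {
                  /*
                   * Hole: SGP_READ leaves *foliop NULL instead of allocating
                   * a page, so just report zeroes to the caller.
                   */
                  len = min_t(size_t, count, PAGE_SIZE - offset_in_page(pos));
                  memset(buf, 0, len);
                  return len;
          }

          /*
           * Data present: copy out of the (possibly large) folio, which is
           * returned locked and with a reference held.  The real xfile_load
           * also rejects hwpoisoned folios at this point.
           */
          len = min_t(size_t, count,
                      folio_size(folio) - offset_in_folio(folio, pos));
          memcpy(buf, folio_address(folio) + offset_in_folio(folio, pos), len);
          folio_unlock(folio);
          folio_put(folio);
          return len;
  }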

diff --git a/fs/xfs/scrub/xfile.c b/fs/xfs/scrub/xfile.c
index c71c853c9ffdd7..077f9ce6e81409 100644
--- a/fs/xfs/scrub/xfile.c
+++ b/fs/xfs/scrub/xfile.c
@@ -34,13 +34,6 @@
  * xfiles assume that the caller will handle all required concurrency
  * management; standard vfs locks (freezer and inode) are not taken.  Reads
  * and writes are satisfied directly from the page cache.
- *
- * NOTE: The current shmemfs implementation has a quirk that in-kernel reads
- * of a hole cause a page to be mapped into the file.  If you are going to
- * create a sparse xfile, please be careful about reading from uninitialized
- * parts of the file.  These pages are !Uptodate and will eventually be
- * reclaimed if not written, but in the short term this boosts memory
- * consumption.
  */
 
 /*
@@ -118,10 +111,7 @@ xfile_load(
 	loff_t			pos)
 {
 	struct inode		*inode = file_inode(xf->file);
-	struct address_space	*mapping = inode->i_mapping;
-	struct page		*page = NULL;
 	unsigned int		pflags;
-	int			error = 0;
 
 	if (count > MAX_RW_COUNT)
 		return -ENOMEM;
@@ -132,43 +122,46 @@ xfile_load(
 
 	pflags = memalloc_nofs_save();
 	while (count > 0) {
+		struct folio	*folio;
 		unsigned int	len;
+		unsigned int	offset;
 
-		len = min_t(ssize_t, count, PAGE_SIZE - offset_in_page(pos));
-
-		/*
-		 * In-kernel reads of a shmem file cause it to allocate a page
-		 * if the mapping shows a hole.  Therefore, if we hit ENOMEM
-		 * we can continue by zeroing the caller's buffer.
-		 */
-		page = shmem_read_mapping_page_gfp(mapping, pos >> PAGE_SHIFT,
-				__GFP_NOWARN);
-		if (IS_ERR(page)) {
-			error = PTR_ERR(page);
-			if (error != -ENOMEM) {
-				error = -ENOMEM;
+		if (shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio,
+				SGP_READ) < 0)
+			break;
+		if (!folio) {
+			/*
+			 * No data stored at this offset, just zero the output
+			 * buffer until the next page boundary.
+			 */
+			len = min_t(ssize_t, count,
+				PAGE_SIZE - offset_in_page(pos));
+			memset(buf, 0, len);
+		} else {
+			if (folio_test_hwpoison(folio) ||
+			    (folio_test_large(folio) &&
+			     folio_test_has_hwpoisoned(folio))) {
+				folio_unlock(folio);
+				folio_put(folio);
 				break;
 			}
 
-			memset(buf, 0, len);
-			goto advance;
-		}
-
-		/*
-		 * xfile pages must never be mapped into userspace, so
-		 * we skip the dcache flush.
-		 */
-		memcpy(buf, page_address(page) + offset_in_page(pos), len);
-		put_page(page);
+			offset = offset_in_folio(folio, pos);
+			len = min_t(ssize_t, count, folio_size(folio) - offset);
+			memcpy(buf, folio_address(folio) + offset, len);
 
-advance:
+			folio_unlock(folio);
+			folio_put(folio);
+		}
 		count -= len;
 		pos += len;
 		buf += len;
 	}
 	memalloc_nofs_restore(pflags);
 
-	return error;
+	if (count)
+		return -ENOMEM;
+	return 0;
 }
 
 /*
-- 
2.39.2

