* [PATCH 00/12] Change readahead API
@ 2020-01-25  1:35 Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 01/12] mm: Fix the return type of __do_page_cache_readahead Matthew Wilcox
                   ` (12 more replies)
  0 siblings, 13 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	linux-mm, linux-kernel, linux-btrfs, linux-erofs, linux-ext4,
	linux-f2fs-devel, linux-xfs, cluster-devel, ocfs2-devel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This series adds a readahead address_space operation to eventually
replace the readpages operation.  The key difference is that pages are
added to the page cache as they are allocated (and then looked up by
the filesystem) instead of being passed on a list to the readpages
operation, which then has to add them to the page cache itself.  The
new approach is a net reduction in code for each implementation, is
more efficient than walking a list, and solves the direct-write vs
buffered-read problem reported by Yu Kuai at
https://lore.kernel.org/linux-fsdevel/20200116063601.39201-1-yukuai3@huawei.com/
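
To summarise the interface change (the exact prototypes are introduced
in patch 4 of this series), here is a sketch of the old and new
operations:

        /* Old: the filesystem is handed pages on a list and must add
         * them to the page cache itself before starting I/O. */
        int (*readpages)(struct file *filp, struct address_space *mapping,
                        struct list_head *pages, unsigned nr_pages);

        /* New: the pages are already present and locked in the page
         * cache; the filesystem looks them up with readahead_page()
         * and returns the number of pages it did not start I/O on. */
        unsigned (*readahead)(struct file *filp, struct address_space *mapping,
                        pgoff_t start, unsigned nr_pages);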

Matthew Wilcox (Oracle) (12):
  mm: Fix the return type of __do_page_cache_readahead
  readahead: Ignore return value of ->readpages
  readahead: Put pages in cache earlier
  mm: Add readahead address space operation
  fs: Convert mpage_readpages to mpage_readahead
  btrfs: Convert from readpages to readahead
  erofs: Convert uncompressed files from readpages to readahead
  erofs: Convert compressed files from readpages to readahead
  ext4: Convert from readpages to readahead
  f2fs: Convert from readpages to readahead
  fuse: Convert from readpages to readahead
  iomap: Convert from readpages to readahead

 Documentation/filesystems/locking.rst |  7 ++-
 Documentation/filesystems/vfs.rst     | 11 ++++
 drivers/staging/exfat/exfat_super.c   |  9 ++--
 fs/block_dev.c                        |  9 ++--
 fs/btrfs/extent_io.c                  | 15 ++----
 fs/btrfs/extent_io.h                  |  2 +-
 fs/btrfs/inode.c                      | 18 +++----
 fs/erofs/data.c                       | 34 +++++-------
 fs/erofs/zdata.c                      | 21 +++-----
 fs/ext2/inode.c                       | 12 ++---
 fs/ext4/ext4.h                        |  2 +-
 fs/ext4/inode.c                       | 24 ++++-----
 fs/ext4/readpage.c                    | 20 +++----
 fs/f2fs/data.c                        | 33 +++++-------
 fs/fat/inode.c                        |  8 +--
 fs/fuse/file.c                        | 35 ++++++------
 fs/gfs2/aops.c                        | 20 ++++---
 fs/hpfs/file.c                        |  8 +--
 fs/iomap/buffered-io.c                | 74 ++++++--------------------
 fs/iomap/trace.h                      |  2 +-
 fs/isofs/inode.c                      |  9 ++--
 fs/jfs/inode.c                        |  8 +--
 fs/mpage.c                            | 38 +++++---------
 fs/nilfs2/inode.c                     | 13 ++---
 fs/ocfs2/aops.c                       | 32 +++++------
 fs/omfs/file.c                        |  8 +--
 fs/qnx6/inode.c                       |  8 +--
 fs/reiserfs/inode.c                   | 10 ++--
 fs/udf/inode.c                        |  8 +--
 fs/xfs/xfs_aops.c                     | 10 ++--
 include/linux/fs.h                    |  2 +
 include/linux/iomap.h                 |  2 +-
 include/linux/mpage.h                 |  2 +-
 include/linux/pagemap.h               | 12 +++++
 include/trace/events/erofs.h          |  6 +--
 include/trace/events/f2fs.h           |  6 +--
 mm/internal.h                         |  2 +-
 mm/migrate.c                          |  2 +-
 mm/readahead.c                        | 76 +++++++++++++++++----------
 39 files changed, 289 insertions(+), 329 deletions(-)

-- 
2.24.1



* [PATCH 01/12] mm: Fix the return type of __do_page_cache_readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 02/12] readahead: Ignore return value of ->readpages Matthew Wilcox
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

ra_submit(), which is a wrapper around __do_page_cache_readahead(),
already returns an unsigned long, and the 'nr_to_read' parameter is an
unsigned long, so change __do_page_cache_readahead() to return an
unsigned long as well, even though we are never realistically going to
read ahead more than 2^32 pages.
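
For context, ra_submit() is roughly the following inline wrapper
(paraphrased from mm/internal.h of this era), which is where the
narrower unsigned int return value was being silently widened:

        static inline unsigned long ra_submit(struct file_ra_state *ra,
                        struct address_space *mapping, struct file *filp)
        {
                return __do_page_cache_readahead(mapping, filp,
                                ra->start, ra->size, ra->async_size);
        }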

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h  | 2 +-
 mm/readahead.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 3cf20ab3ca01..41b93c4b3ab7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,7 +49,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-extern unsigned int __do_page_cache_readahead(struct address_space *mapping,
+extern unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size);
 
diff --git a/mm/readahead.c b/mm/readahead.c
index 2fe72cd29b47..6bf73ef33b7e 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -152,7 +152,7 @@ static int read_pages(struct address_space *mapping, struct file *filp,
  *
  * Returns the number of pages requested, or the maximum amount of I/O allowed.
  */
-unsigned int __do_page_cache_readahead(struct address_space *mapping,
+unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		struct file *filp, pgoff_t offset, unsigned long nr_to_read,
 		unsigned long lookahead_size)
 {
@@ -161,7 +161,7 @@ unsigned int __do_page_cache_readahead(struct address_space *mapping,
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	int page_idx;
-	unsigned int nr_pages = 0;
+	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
 
-- 
2.24.1



* [PATCH 02/12] readahead: Ignore return value of ->readpages
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 01/12] mm: Fix the return type of __do_page_cache_readahead Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 03/12] readahead: Put pages in cache earlier Matthew Wilcox
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

read_pages() assigns the return value of ->readpages to a variable and
then ignores it.  Remove the pretence of caring.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/readahead.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 6bf73ef33b7e..fc77d13af556 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -113,17 +113,16 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
 
 EXPORT_SYMBOL(read_cache_pages);
 
-static int read_pages(struct address_space *mapping, struct file *filp,
+static void read_pages(struct address_space *mapping, struct file *filp,
 		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
 {
 	struct blk_plug plug;
 	unsigned page_idx;
-	int ret;
 
 	blk_start_plug(&plug);
 
 	if (mapping->a_ops->readpages) {
-		ret = mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
+		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
 		goto out;
@@ -136,12 +135,9 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 			mapping->a_ops->readpage(filp, page);
 		put_page(page);
 	}
-	ret = 0;
 
 out:
 	blk_finish_plug(&plug);
-
-	return ret;
 }
 
 /*
-- 
2.24.1



* [PATCH 03/12] readahead: Put pages in cache earlier
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 01/12] mm: Fix the return type of __do_page_cache_readahead Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 02/12] readahead: Ignore return value of ->readpages Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25 19:44   ` Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 04/12] mm: Add readahead address space operation Matthew Wilcox
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	linux-mm, linux-kernel, linux-btrfs, linux-erofs, linux-ext4,
	linux-f2fs-devel, linux-xfs, cluster-devel, ocfs2-devel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

At allocation time, add the pages to the page cache unless the
filesystem is using ->readpages, which still expects to be handed the
pages on a list.
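
With the pages already in the page cache, the fallback ->readpage path
in read_pages() can walk the relevant range of the page cache directly
instead of consuming a list; condensed from the diff below:

        struct page *page;
        unsigned long index;

        xa_for_each_range(&mapping->i_pages, index, page, start,
                        start + nr_pages - 1) {
                mapping->a_ops->readpage(filp, page);
                put_page(page);
        }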

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-btrfs@vger.kernel.org
Cc: linux-erofs@lists.ozlabs.org
Cc: linux-ext4@vger.kernel.org
Cc: linux-f2fs-devel@lists.sourceforge.net
Cc: linux-xfs@vger.kernel.org
Cc: cluster-devel@redhat.com
Cc: ocfs2-devel@oss.oracle.com
---
 mm/readahead.c | 51 +++++++++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 19 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index fc77d13af556..5a6676640f20 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -114,10 +114,10 @@ int read_cache_pages(struct address_space *mapping, struct list_head *pages,
 EXPORT_SYMBOL(read_cache_pages);
 
 static void read_pages(struct address_space *mapping, struct file *filp,
-		struct list_head *pages, unsigned int nr_pages, gfp_t gfp)
+		struct list_head *pages, pgoff_t start,
+		unsigned int nr_pages)
 {
 	struct blk_plug plug;
-	unsigned page_idx;
 
 	blk_start_plug(&plug);
 
@@ -125,18 +125,17 @@ static void read_pages(struct address_space *mapping, struct file *filp,
 		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
-		goto out;
-	}
+	} else {
+		struct page *page;
+		unsigned long index;
 
-	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
-		struct page *page = lru_to_page(pages);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, mapping, page->index, gfp))
+		xa_for_each_range(&mapping->i_pages, index, page, start,
+				start + nr_pages - 1) {
 			mapping->a_ops->readpage(filp, page);
-		put_page(page);
+			put_page(page);
+		}
 	}
 
-out:
 	blk_finish_plug(&plug);
 }
 
@@ -157,9 +156,11 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	unsigned long end_index;	/* The last page we want to read */
 	LIST_HEAD(page_pool);
 	int page_idx;
+	pgoff_t page_offset;
 	unsigned long nr_pages = 0;
 	loff_t isize = i_size_read(inode);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
+	bool use_list = mapping->a_ops->readpages;
 
 	if (isize == 0)
 		goto out;
@@ -170,7 +171,7 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	 * Preallocate as many pages as we will need.
 	 */
 	for (page_idx = 0; page_idx < nr_to_read; page_idx++) {
-		pgoff_t page_offset = offset + page_idx;
+		page_offset = offset + page_idx;
 
 		if (page_offset > end_index)
 			break;
@@ -178,13 +179,14 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		page = xa_load(&mapping->i_pages, page_offset);
 		if (page && !xa_is_value(page)) {
 			/*
-			 * Page already present?  Kick off the current batch of
-			 * contiguous pages before continuing with the next
-			 * batch.
+			 * Page already present?  Kick off the current batch
+			 * of contiguous pages before continuing with the
+			 * next batch.
 			 */
 			if (nr_pages)
-				read_pages(mapping, filp, &page_pool, nr_pages,
-						gfp_mask);
+				read_pages(mapping, filp, &page_pool,
+						page_offset - nr_pages,
+						nr_pages);
 			nr_pages = 0;
 			continue;
 		}
@@ -192,8 +194,18 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
 			break;
-		page->index = page_offset;
-		list_add(&page->lru, &page_pool);
+		if (use_list) {
+			page->index = page_offset;
+			list_add(&page->lru, &page_pool);
+		} else if (!add_to_page_cache_lru(page, mapping, page_offset,
+					gfp_mask)) {
+			if (nr_pages)
+				read_pages(mapping, filp, &page_pool,
+						page_offset - nr_pages,
+						nr_pages);
+			nr_pages = 0;
+			continue;
+		}
 		if (page_idx == nr_to_read - lookahead_size)
 			SetPageReadahead(page);
 		nr_pages++;
@@ -205,7 +217,8 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
 	 * will then handle the error.
 	 */
 	if (nr_pages)
-		read_pages(mapping, filp, &page_pool, nr_pages, gfp_mask);
+		read_pages(mapping, filp, &page_pool, page_offset - nr_pages,
+				nr_pages);
 	BUG_ON(!list_empty(&page_pool));
 out:
 	return nr_pages;
-- 
2.24.1



* [PATCH 04/12] mm: Add readahead address space operation
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (2 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 03/12] readahead: Put pages in cache earlier Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  3:57   ` Randy Dunlap
  2020-01-29  0:24   ` Dave Chinner
  2020-01-25  1:35 ` [PATCH 05/12] fs: Convert mpage_readpages to mpage_readahead Matthew Wilcox
                   ` (8 subsequent siblings)
  12 siblings, 2 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	linux-mm, linux-kernel, linux-btrfs, linux-erofs, linux-ext4,
	linux-f2fs-devel, linux-xfs, cluster-devel, ocfs2-devel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

This replaces ->readpages with a saner interface:
 - Return the number of pages not read instead of an ignored error code.
 - Pages are already in the page cache when ->readahead is called.
 - The implementation looks up the pages in the page cache instead of
   having them passed in on a linked list.
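
As an illustration of the new contract (not taken from any filesystem
in this series; myfs_submit_read() is a made-up placeholder), a minimal
->readahead implementation might look like this:

        static unsigned myfs_readahead(struct file *file,
                        struct address_space *mapping, pgoff_t start,
                        unsigned nr_pages)
        {
                unsigned i;

                for (i = 0; i < nr_pages; i++) {
                        struct page *page = readahead_page(mapping,
                                        start + i);

                        /*
                         * Attempt I/O; the I/O completion handler is
                         * responsible for unlocking the page.
                         */
                        if (myfs_submit_read(page) < 0) {
                                /*
                                 * No I/O was attempted on this page or
                                 * the remaining ones; report them so
                                 * the caller unlocks and releases them.
                                 */
                                return nr_pages - i;
                        }

                        /* Drop the reference taken at allocation time. */
                        put_page(page);
                }

                return 0;
        }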

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-btrfs@vger.kernel.org
Cc: linux-erofs@lists.ozlabs.org
Cc: linux-ext4@vger.kernel.org
Cc: linux-f2fs-devel@lists.sourceforge.net
Cc: linux-xfs@vger.kernel.org
Cc: cluster-devel@redhat.com
Cc: ocfs2-devel@oss.oracle.com
---
 Documentation/filesystems/locking.rst |  7 ++++++-
 Documentation/filesystems/vfs.rst     | 11 +++++++++++
 include/linux/fs.h                    |  2 ++
 include/linux/pagemap.h               | 12 ++++++++++++
 mm/readahead.c                        | 13 ++++++++++++-
 5 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/Documentation/filesystems/locking.rst b/Documentation/filesystems/locking.rst
index 5057e4d9dcd1..d8a5dde914b5 100644
--- a/Documentation/filesystems/locking.rst
+++ b/Documentation/filesystems/locking.rst
@@ -239,6 +239,8 @@ prototypes::
 	int (*readpage)(struct file *, struct page *);
 	int (*writepages)(struct address_space *, struct writeback_control *);
 	int (*set_page_dirty)(struct page *page);
+	unsigned (*readahead)(struct file *, struct address_space *,
+				 pgoff_t start, unsigned nr_pages);
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
 	int (*write_begin)(struct file *, struct address_space *mapping,
@@ -271,7 +273,8 @@ writepage:		yes, unlocks (see below)
 readpage:		yes, unlocks
 writepages:
 set_page_dirty		no
-readpages:
+readahead:              yes, unlocks
+readpages:              no
 write_begin:		locks the page		 exclusive
 write_end:		yes, unlocks		 exclusive
 bmap:
@@ -295,6 +298,8 @@ the request handler (/dev/loop).
 ->readpage() unlocks the page, either synchronously or via I/O
 completion.
 
+->readahead() unlocks the page like ->readpage().
+
 ->readpages() populates the pagecache with the passed pages and starts
 I/O against them.  They come unlocked upon I/O completion.
 
diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
index 7d4d09dd5e6d..bb06fb7b120b 100644
--- a/Documentation/filesystems/vfs.rst
+++ b/Documentation/filesystems/vfs.rst
@@ -706,6 +706,8 @@ cache in your filesystem.  The following members are defined:
 		int (*readpage)(struct file *, struct page *);
 		int (*writepages)(struct address_space *, struct writeback_control *);
 		int (*set_page_dirty)(struct page *page);
+		unsigned (*readahead)(struct file *filp, struct address_space *mapping,
+				 pgoff_t start, unsigned nr_pages);
 		int (*readpages)(struct file *filp, struct address_space *mapping,
 				 struct list_head *pages, unsigned nr_pages);
 		int (*write_begin)(struct file *, struct address_space *mapping,
@@ -781,6 +783,15 @@ cache in your filesystem.  The following members are defined:
 	If defined, it should set the PageDirty flag, and the
 	PAGECACHE_TAG_DIRTY tag in the radix tree.
 
+``readahead``
+	called by the VM to read pages associated with the address_space
+	object.  The pages are consecutive in the page cache and are
+        locked.  The implementation should decrement the page refcount after
+        attempting I/O on each page.  Usually the page will be unlocked by
+        the I/O completion handler.  If the function does not attempt I/O on
+        some pages, return the number of pages which were not read so the
+        common code can unlock the pages for you.
+
 ``readpages``
 	called by the VM to read pages associated with the address_space
 	object.  This is essentially just a vector version of readpage.
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 98e0349adb52..a10f3a72e5ac 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -375,6 +375,8 @@ struct address_space_operations {
 	 */
 	int (*readpages)(struct file *filp, struct address_space *mapping,
 			struct list_head *pages, unsigned nr_pages);
+	unsigned (*readahead)(struct file *, struct address_space *,
+			pgoff_t start, unsigned nr_pages);
 
 	int (*write_begin)(struct file *, struct address_space *mapping,
 				loff_t pos, unsigned len, unsigned flags,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 37a4d9e32cd3..2baafd236a82 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -630,6 +630,18 @@ static inline int add_to_page_cache(struct page *page,
 	return error;
 }
 
+/*
+ * Only call this from a ->readahead implementation.
+ */
+static inline
+struct page *readahead_page(struct address_space *mapping, pgoff_t index)
+{
+	struct page *page = xa_load(&mapping->i_pages, index);
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+
+	return page;
+}
+
 static inline unsigned long dir_pages(struct inode *inode)
 {
 	return (unsigned long)(inode->i_size + PAGE_SIZE - 1) >>
diff --git a/mm/readahead.c b/mm/readahead.c
index 5a6676640f20..6d65dae6dad0 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -121,7 +121,18 @@ static void read_pages(struct address_space *mapping, struct file *filp,
 
 	blk_start_plug(&plug);
 
-	if (mapping->a_ops->readpages) {
+	if (mapping->a_ops->readahead) {
+		unsigned left = mapping->a_ops->readahead(filp, mapping,
+				start, nr_pages);
+
+		while (left) {
+			struct page *page = readahead_page(mapping,
+					start + nr_pages - left - 1);
+			unlock_page(page);
+			put_page(page);
+			left--;
+		}
+	} else if (mapping->a_ops->readpages) {
 		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
-- 
2.24.1



* [PATCH 05/12] fs: Convert mpage_readpages to mpage_readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (3 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 04/12] mm: Add readahead address space operation Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 06/12] btrfs: Convert from readpages to readahead Matthew Wilcox
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle),
	linux-mm, linux-kernel, cluster-devel, ocfs2-devel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Convert mpage_readpages() to mpage_readahead() and use it to implement
the new readahead aop in all of its callers (block_dev, exfat, ext2,
fat, gfs2, hpfs, isofs, jfs, nilfs2, ocfs2, omfs, qnx6, reiserfs & udf).
The conversions are all trivial except for GFS2 & OCFS2.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: cluster-devel@redhat.com
Cc: ocfs2-devel@oss.oracle.com
---
 drivers/staging/exfat/exfat_super.c |  9 ++++---
 fs/block_dev.c                      |  9 ++++---
 fs/ext2/inode.c                     | 12 ++++-----
 fs/fat/inode.c                      |  8 +++---
 fs/gfs2/aops.c                      | 20 +++++++--------
 fs/hpfs/file.c                      |  8 +++---
 fs/iomap/buffered-io.c              |  2 +-
 fs/isofs/inode.c                    |  9 ++++---
 fs/jfs/inode.c                      |  8 +++---
 fs/mpage.c                          | 38 ++++++++++-------------------
 fs/nilfs2/inode.c                   | 13 +++++-----
 fs/ocfs2/aops.c                     | 32 +++++++++++-------------
 fs/omfs/file.c                      |  8 +++---
 fs/qnx6/inode.c                     |  8 +++---
 fs/reiserfs/inode.c                 | 10 ++++----
 fs/udf/inode.c                      |  8 +++---
 include/linux/mpage.h               |  2 +-
 mm/migrate.c                        |  2 +-
 18 files changed, 96 insertions(+), 110 deletions(-)

diff --git a/drivers/staging/exfat/exfat_super.c b/drivers/staging/exfat/exfat_super.c
index 9f91853b189b..6718f7969376 100644
--- a/drivers/staging/exfat/exfat_super.c
+++ b/drivers/staging/exfat/exfat_super.c
@@ -3005,10 +3005,11 @@ static int exfat_readpage(struct file *file, struct page *page)
 	return  mpage_readpage(page, exfat_get_block);
 }
 
-static int exfat_readpages(struct file *file, struct address_space *mapping,
-			   struct list_head *pages, unsigned int nr_pages)
+static
+unsigned exfat_readahead(struct file *file, struct address_space *mapping,
+			   pgoff_t start, unsigned int nr_pages)
 {
-	return  mpage_readpages(mapping, pages, nr_pages, exfat_get_block);
+	return  mpage_readahead(mapping, start, nr_pages, exfat_get_block);
 }
 
 static int exfat_writepage(struct page *page, struct writeback_control *wbc)
@@ -3107,7 +3108,7 @@ static sector_t _exfat_bmap(struct address_space *mapping, sector_t block)
 
 static const struct address_space_operations exfat_aops = {
 	.readpage    = exfat_readpage,
-	.readpages   = exfat_readpages,
+	.readahead   = exfat_readahead,
 	.writepage   = exfat_writepage,
 	.writepages  = exfat_writepages,
 	.write_begin = exfat_write_begin,
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 69bf2fb6f7cd..826a5104ff56 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -614,10 +614,11 @@ static int blkdev_readpage(struct file * file, struct page * page)
 	return block_read_full_page(page, blkdev_get_block);
 }
 
-static int blkdev_readpages(struct file *file, struct address_space *mapping,
-			struct list_head *pages, unsigned nr_pages)
+static
+unsigned blkdev_readahead(struct file *file, struct address_space *mapping,
+			pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, blkdev_get_block);
+	return mpage_readahead(mapping, start, nr_pages, blkdev_get_block);
 }
 
 static int blkdev_write_begin(struct file *file, struct address_space *mapping,
@@ -2062,7 +2063,7 @@ static int blkdev_writepages(struct address_space *mapping,
 
 static const struct address_space_operations def_blk_aops = {
 	.readpage	= blkdev_readpage,
-	.readpages	= blkdev_readpages,
+	.readahead	= blkdev_readahead,
 	.writepage	= blkdev_writepage,
 	.write_begin	= blkdev_write_begin,
 	.write_end	= blkdev_write_end,
diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c
index 119667e65890..0440eb9f24de 100644
--- a/fs/ext2/inode.c
+++ b/fs/ext2/inode.c
@@ -877,11 +877,11 @@ static int ext2_readpage(struct file *file, struct page *page)
 	return mpage_readpage(page, ext2_get_block);
 }
 
-static int
-ext2_readpages(struct file *file, struct address_space *mapping,
-		struct list_head *pages, unsigned nr_pages)
+static unsigned
+ext2_readahead(struct file *file, struct address_space *mapping,
+		pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, ext2_get_block);
+	return mpage_readahead(mapping, start, nr_pages, ext2_get_block);
 }
 
 static int
@@ -966,7 +966,7 @@ ext2_dax_writepages(struct address_space *mapping, struct writeback_control *wbc
 
 const struct address_space_operations ext2_aops = {
 	.readpage		= ext2_readpage,
-	.readpages		= ext2_readpages,
+	.readahead		= ext2_readahead,
 	.writepage		= ext2_writepage,
 	.write_begin		= ext2_write_begin,
 	.write_end		= ext2_write_end,
@@ -980,7 +980,7 @@ const struct address_space_operations ext2_aops = {
 
 const struct address_space_operations ext2_nobh_aops = {
 	.readpage		= ext2_readpage,
-	.readpages		= ext2_readpages,
+	.readahead		= ext2_readahead,
 	.writepage		= ext2_nobh_writepage,
 	.write_begin		= ext2_nobh_write_begin,
 	.write_end		= nobh_write_end,
diff --git a/fs/fat/inode.c b/fs/fat/inode.c
index 5f04c5c810fb..42569cdc206a 100644
--- a/fs/fat/inode.c
+++ b/fs/fat/inode.c
@@ -209,10 +209,10 @@ static int fat_readpage(struct file *file, struct page *page)
 	return mpage_readpage(page, fat_get_block);
 }
 
-static int fat_readpages(struct file *file, struct address_space *mapping,
-			 struct list_head *pages, unsigned nr_pages)
+static unsigned fat_readahead(struct file *file, struct address_space *mapping,
+			 pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, fat_get_block);
+	return mpage_readahead(mapping, start, nr_pages, fat_get_block);
 }
 
 static void fat_write_failed(struct address_space *mapping, loff_t to)
@@ -343,7 +343,7 @@ int fat_block_truncate_page(struct inode *inode, loff_t from)
 
 static const struct address_space_operations fat_aops = {
 	.readpage	= fat_readpage,
-	.readpages	= fat_readpages,
+	.readahead	= fat_readahead,
 	.writepage	= fat_writepage,
 	.writepages	= fat_writepages,
 	.write_begin	= fat_write_begin,
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index 9c6df721321a..58d228d13c98 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -579,7 +579,7 @@ int gfs2_internal_read(struct gfs2_inode *ip, char *buf, loff_t *pos,
 }
 
 /**
- * gfs2_readpages - Read a bunch of pages at once
+ * gfs2_readahead - Read a bunch of pages at once
  * @file: The file to read from
  * @mapping: Address space info
  * @pages: List of pages to read
@@ -592,16 +592,15 @@ int gfs2_internal_read(struct gfs2_inode *ip, char *buf, loff_t *pos,
  *    obviously not something we'd want to do on too regular a basis.
  *    Any I/O we ignore at this time will be done via readpage later.
  * 2. We don't handle stuffed files here we let readpage do the honours.
- * 3. mpage_readpages() does most of the heavy lifting in the common case.
+ * 3. mpage_readahead() does most of the heavy lifting in the common case.
  * 4. gfs2_block_map() is relied upon to set BH_Boundary in the right places.
  */
 
-static int gfs2_readpages(struct file *file, struct address_space *mapping,
-			  struct list_head *pages, unsigned nr_pages)
+static unsigned gfs2_readahead(struct file *file, struct address_space *mapping,
+			  pgoff_t start, unsigned nr_pages)
 {
 	struct inode *inode = mapping->host;
 	struct gfs2_inode *ip = GFS2_I(inode);
-	struct gfs2_sbd *sdp = GFS2_SB(inode);
 	struct gfs2_holder gh;
 	int ret;
 
@@ -610,13 +609,12 @@ static int gfs2_readpages(struct file *file, struct address_space *mapping,
 	if (unlikely(ret))
 		goto out_uninit;
 	if (!gfs2_is_stuffed(ip))
-		ret = mpage_readpages(mapping, pages, nr_pages, gfs2_block_map);
+		nr_pages = mpage_readahead(mapping, start, nr_pages,
+				gfs2_block_map);
 	gfs2_glock_dq(&gh);
 out_uninit:
 	gfs2_holder_uninit(&gh);
-	if (unlikely(gfs2_withdrawn(sdp)))
-		ret = -EIO;
-	return ret;
+	return nr_pages;
 }
 
 /**
@@ -830,7 +828,7 @@ static const struct address_space_operations gfs2_aops = {
 	.writepage = gfs2_writepage,
 	.writepages = gfs2_writepages,
 	.readpage = gfs2_readpage,
-	.readpages = gfs2_readpages,
+	.readahead = gfs2_readahead,
 	.bmap = gfs2_bmap,
 	.invalidatepage = gfs2_invalidatepage,
 	.releasepage = gfs2_releasepage,
@@ -844,7 +842,7 @@ static const struct address_space_operations gfs2_jdata_aops = {
 	.writepage = gfs2_jdata_writepage,
 	.writepages = gfs2_jdata_writepages,
 	.readpage = gfs2_readpage,
-	.readpages = gfs2_readpages,
+	.readahead = gfs2_readahead,
 	.set_page_dirty = jdata_set_page_dirty,
 	.bmap = gfs2_bmap,
 	.invalidatepage = gfs2_invalidatepage,
diff --git a/fs/hpfs/file.c b/fs/hpfs/file.c
index b36abf9cb345..a0f7cc0262ae 100644
--- a/fs/hpfs/file.c
+++ b/fs/hpfs/file.c
@@ -125,10 +125,10 @@ static int hpfs_writepage(struct page *page, struct writeback_control *wbc)
 	return block_write_full_page(page, hpfs_get_block, wbc);
 }
 
-static int hpfs_readpages(struct file *file, struct address_space *mapping,
-			  struct list_head *pages, unsigned nr_pages)
+static unsigned hpfs_readahead(struct file *file, struct address_space *mapping,
+			  pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, hpfs_get_block);
+	return mpage_readahead(mapping, start, nr_pages, hpfs_get_block);
 }
 
 static int hpfs_writepages(struct address_space *mapping,
@@ -198,7 +198,7 @@ static int hpfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 const struct address_space_operations hpfs_aops = {
 	.readpage = hpfs_readpage,
 	.writepage = hpfs_writepage,
-	.readpages = hpfs_readpages,
+	.readahead = hpfs_readahead,
 	.writepages = hpfs_writepages,
 	.write_begin = hpfs_write_begin,
 	.write_end = hpfs_write_end,
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 828444e14d09..1e2f3cc4579b 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -359,7 +359,7 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 	}
 
 	/*
-	 * Just like mpage_readpages and block_read_full_page we always
+	 * Just like mpage_readahead and block_read_full_page we always
 	 * return 0 and just mark the page as PageError on errors.  This
 	 * should be cleaned up all through the stack eventually.
 	 */
diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
index 62c0462dc89f..11154cc35b16 100644
--- a/fs/isofs/inode.c
+++ b/fs/isofs/inode.c
@@ -1185,10 +1185,11 @@ static int isofs_readpage(struct file *file, struct page *page)
 	return mpage_readpage(page, isofs_get_block);
 }
 
-static int isofs_readpages(struct file *file, struct address_space *mapping,
-			struct list_head *pages, unsigned nr_pages)
+static
+unsigned isofs_readahead(struct file *file, struct address_space *mapping,
+			pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, isofs_get_block);
+	return mpage_readahead(mapping, start, nr_pages, isofs_get_block);
 }
 
 static sector_t _isofs_bmap(struct address_space *mapping, sector_t block)
@@ -1198,7 +1199,7 @@ static sector_t _isofs_bmap(struct address_space *mapping, sector_t block)
 
 static const struct address_space_operations isofs_aops = {
 	.readpage = isofs_readpage,
-	.readpages = isofs_readpages,
+	.readahead = isofs_readahead,
 	.bmap = _isofs_bmap
 };
 
diff --git a/fs/jfs/inode.c b/fs/jfs/inode.c
index 9486afcdac76..1ed926ac2bb9 100644
--- a/fs/jfs/inode.c
+++ b/fs/jfs/inode.c
@@ -296,10 +296,10 @@ static int jfs_readpage(struct file *file, struct page *page)
 	return mpage_readpage(page, jfs_get_block);
 }
 
-static int jfs_readpages(struct file *file, struct address_space *mapping,
-		struct list_head *pages, unsigned nr_pages)
+static unsigned jfs_readahead(struct file *file, struct address_space *mapping,
+		pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, jfs_get_block);
+	return mpage_readahead(mapping, start, nr_pages, jfs_get_block);
 }
 
 static void jfs_write_failed(struct address_space *mapping, loff_t to)
@@ -358,7 +358,7 @@ static ssize_t jfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 
 const struct address_space_operations jfs_aops = {
 	.readpage	= jfs_readpage,
-	.readpages	= jfs_readpages,
+	.readahead	= jfs_readahead,
 	.writepage	= jfs_writepage,
 	.writepages	= jfs_writepages,
 	.write_begin	= jfs_write_begin,
diff --git a/fs/mpage.c b/fs/mpage.c
index ccba3c4c4479..91a148bcd582 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -91,7 +91,7 @@ mpage_alloc(struct block_device *bdev,
 }
 
 /*
- * support function for mpage_readpages.  The fs supplied get_block might
+ * support function for mpage_readahead.  The fs supplied get_block might
  * return an up to date buffer.  This is used to map that buffer into
  * the page, which allows readpage to avoid triggering a duplicate call
  * to get_block.
@@ -338,13 +338,10 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
 }
 
 /**
- * mpage_readpages - populate an address space with some pages & start reads against them
+ * mpage_readahead - start reads against pages
  * @mapping: the address_space
- * @pages: The address of a list_head which contains the target pages.  These
- *   pages have their ->index populated and are otherwise uninitialised.
- *   The page at @pages->prev has the lowest file offset, and reads should be
- *   issued in @pages->prev to @pages->next order.
- * @nr_pages: The number of pages at *@pages
+ * @start: The number of the first page to read.
+ * @nr_pages: The number of consecutive pages to read.
  * @get_block: The filesystem's block mapper function.
  *
  * This function walks the pages and the blocks within each page, building and
@@ -381,36 +378,27 @@ static struct bio *do_mpage_readpage(struct mpage_readpage_args *args)
  *
  * This all causes the disk requests to be issued in the correct order.
  */
-int
-mpage_readpages(struct address_space *mapping, struct list_head *pages,
-				unsigned nr_pages, get_block_t get_block)
+unsigned mpage_readahead(struct address_space *mapping, pgoff_t start,
+		unsigned nr_pages, get_block_t get_block)
 {
 	struct mpage_readpage_args args = {
 		.get_block = get_block,
 		.is_readahead = true,
 	};
-	unsigned page_idx;
-
-	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
-		struct page *page = lru_to_page(pages);
 
+	while (nr_pages--) {
+		struct page *page = readahead_page(mapping, start++);
 		prefetchw(&page->flags);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, mapping,
-					page->index,
-					readahead_gfp_mask(mapping))) {
-			args.page = page;
-			args.nr_pages = nr_pages - page_idx;
-			args.bio = do_mpage_readpage(&args);
-		}
+		args.page = page;
+		args.nr_pages = nr_pages;
+		args.bio = do_mpage_readpage(&args);
 		put_page(page);
 	}
-	BUG_ON(!list_empty(pages));
 	if (args.bio)
 		mpage_bio_submit(REQ_OP_READ, REQ_RAHEAD, args.bio);
 	return 0;
 }
-EXPORT_SYMBOL(mpage_readpages);
+EXPORT_SYMBOL(mpage_readahead);
 
 /*
  * This isn't called much at all
@@ -563,7 +551,7 @@ static int __mpage_writepage(struct page *page, struct writeback_control *wbc,
 		 * Page has buffers, but they are all unmapped. The page was
 		 * created by pagein or read over a hole which was handled by
 		 * block_read_full_page().  If this address_space is also
-		 * using mpage_readpages then this can rarely happen.
+		 * using mpage_readahead then this can rarely happen.
 		 */
 		goto confused;
 	}
diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
index 671085512e0f..ecf543f35256 100644
--- a/fs/nilfs2/inode.c
+++ b/fs/nilfs2/inode.c
@@ -146,17 +146,18 @@ static int nilfs_readpage(struct file *file, struct page *page)
 }
 
 /**
- * nilfs_readpages() - implement readpages() method of nilfs_aops {}
+ * nilfs_readahead() - implement readahead() method of nilfs_aops {}
  * address_space_operations.
  * @file - file struct of the file to be read
  * @mapping - address_space struct used for reading multiple pages
- * @pages - the pages to be read
+ * @start - the first page to read
  * @nr_pages - number of pages to be read
  */
-static int nilfs_readpages(struct file *file, struct address_space *mapping,
-			   struct list_head *pages, unsigned int nr_pages)
+static
+unsigned nilfs_readahead(struct file *file, struct address_space *mapping,
+			   pgoff_t start, unsigned int nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, nilfs_get_block);
+	return mpage_readahead(mapping, start, nr_pages, nilfs_get_block);
 }
 
 static int nilfs_writepages(struct address_space *mapping,
@@ -308,7 +309,7 @@ const struct address_space_operations nilfs_aops = {
 	.readpage		= nilfs_readpage,
 	.writepages		= nilfs_writepages,
 	.set_page_dirty		= nilfs_set_page_dirty,
-	.readpages		= nilfs_readpages,
+	.readahead		= nilfs_readahead,
 	.write_begin		= nilfs_write_begin,
 	.write_end		= nilfs_write_end,
 	/* .releasepage		= nilfs_releasepage, */
diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 3a67a6518ddf..a9784a6442b7 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -350,14 +350,13 @@ static int ocfs2_readpage(struct file *file, struct page *page)
  * grow out to a tree. If need be, detecting boundary extents could
  * trivially be added in a future version of ocfs2_get_block().
  */
-static int ocfs2_readpages(struct file *filp, struct address_space *mapping,
-			   struct list_head *pages, unsigned nr_pages)
+static
+unsigned ocfs2_readahead(struct file *filp, struct address_space *mapping,
+			   pgoff_t start, unsigned nr_pages)
 {
-	int ret, err = -EIO;
+	int ret;
 	struct inode *inode = mapping->host;
 	struct ocfs2_inode_info *oi = OCFS2_I(inode);
-	loff_t start;
-	struct page *last;
 
 	/*
 	 * Use the nonblocking flag for the dlm code to avoid page
@@ -365,36 +364,33 @@ static int ocfs2_readpages(struct file *filp, struct address_space *mapping,
 	 */
 	ret = ocfs2_inode_lock_full(inode, NULL, 0, OCFS2_LOCK_NONBLOCK);
 	if (ret)
-		return err;
+		return nr_pages;
 
-	if (down_read_trylock(&oi->ip_alloc_sem) == 0) {
-		ocfs2_inode_unlock(inode, 0);
-		return err;
-	}
+	if (down_read_trylock(&oi->ip_alloc_sem) == 0)
+		goto out_unlock;
 
 	/*
 	 * Don't bother with inline-data. There isn't anything
 	 * to read-ahead in that case anyway...
 	 */
 	if (oi->ip_dyn_features & OCFS2_INLINE_DATA_FL)
-		goto out_unlock;
+		goto out_up;
 
 	/*
 	 * Check whether a remote node truncated this file - we just
 	 * drop out in that case as it's not worth handling here.
 	 */
-	last = lru_to_page(pages);
-	start = (loff_t)last->index << PAGE_SHIFT;
 	if (start >= i_size_read(inode))
-		goto out_unlock;
+		goto out_up;
 
-	err = mpage_readpages(mapping, pages, nr_pages, ocfs2_get_block);
+	nr_pages = mpage_readahead(mapping, start, nr_pages, ocfs2_get_block);
 
-out_unlock:
+out_up:
 	up_read(&oi->ip_alloc_sem);
+out_unlock:
 	ocfs2_inode_unlock(inode, 0);
 
-	return err;
+	return nr_pages;
 }
 
 /* Note: Because we don't support holes, our allocation has
@@ -2474,7 +2470,7 @@ static ssize_t ocfs2_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 
 const struct address_space_operations ocfs2_aops = {
 	.readpage		= ocfs2_readpage,
-	.readpages		= ocfs2_readpages,
+	.readahead		= ocfs2_readahead,
 	.writepage		= ocfs2_writepage,
 	.write_begin		= ocfs2_write_begin,
 	.write_end		= ocfs2_write_end,
diff --git a/fs/omfs/file.c b/fs/omfs/file.c
index d640b9388238..e7392f49f619 100644
--- a/fs/omfs/file.c
+++ b/fs/omfs/file.c
@@ -289,10 +289,10 @@ static int omfs_readpage(struct file *file, struct page *page)
 	return block_read_full_page(page, omfs_get_block);
 }
 
-static int omfs_readpages(struct file *file, struct address_space *mapping,
-		struct list_head *pages, unsigned nr_pages)
+static unsigned omfs_readahead(struct file *file, struct address_space *mapping,
+		pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, omfs_get_block);
+	return mpage_readahead(mapping, start, nr_pages, omfs_get_block);
 }
 
 static int omfs_writepage(struct page *page, struct writeback_control *wbc)
@@ -373,7 +373,7 @@ const struct inode_operations omfs_file_inops = {
 
 const struct address_space_operations omfs_aops = {
 	.readpage = omfs_readpage,
-	.readpages = omfs_readpages,
+	.readahead = omfs_readahead,
 	.writepage = omfs_writepage,
 	.writepages = omfs_writepages,
 	.write_begin = omfs_write_begin,
diff --git a/fs/qnx6/inode.c b/fs/qnx6/inode.c
index 345db56c98fd..949e823a1d30 100644
--- a/fs/qnx6/inode.c
+++ b/fs/qnx6/inode.c
@@ -99,10 +99,10 @@ static int qnx6_readpage(struct file *file, struct page *page)
 	return mpage_readpage(page, qnx6_get_block);
 }
 
-static int qnx6_readpages(struct file *file, struct address_space *mapping,
-		   struct list_head *pages, unsigned nr_pages)
+static unsigned qnx6_readahead(struct file *file, struct address_space *mapping,
+		   pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, qnx6_get_block);
+	return mpage_readahead(mapping, start, nr_pages, qnx6_get_block);
 }
 
 /*
@@ -499,7 +499,7 @@ static sector_t qnx6_bmap(struct address_space *mapping, sector_t block)
 }
 static const struct address_space_operations qnx6_aops = {
 	.readpage	= qnx6_readpage,
-	.readpages	= qnx6_readpages,
+	.readahead	= qnx6_readahead,
 	.bmap		= qnx6_bmap
 };
 
diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c
index 6419e6dacc39..0f2666ef23dd 100644
--- a/fs/reiserfs/inode.c
+++ b/fs/reiserfs/inode.c
@@ -1160,11 +1160,11 @@ int reiserfs_get_block(struct inode *inode, sector_t block,
 	return retval;
 }
 
-static int
-reiserfs_readpages(struct file *file, struct address_space *mapping,
-		   struct list_head *pages, unsigned nr_pages)
+static unsigned
+reiserfs_readahead(struct file *file, struct address_space *mapping,
+		   pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, reiserfs_get_block);
+	return mpage_readahead(mapping, start, nr_pages, reiserfs_get_block);
 }
 
 /*
@@ -3434,7 +3434,7 @@ int reiserfs_setattr(struct dentry *dentry, struct iattr *attr)
 const struct address_space_operations reiserfs_address_space_operations = {
 	.writepage = reiserfs_writepage,
 	.readpage = reiserfs_readpage,
-	.readpages = reiserfs_readpages,
+	.readahead = reiserfs_readahead,
 	.releasepage = reiserfs_releasepage,
 	.invalidatepage = reiserfs_invalidatepage,
 	.write_begin = reiserfs_write_begin,
diff --git a/fs/udf/inode.c b/fs/udf/inode.c
index ea80036d7897..32156979fd2f 100644
--- a/fs/udf/inode.c
+++ b/fs/udf/inode.c
@@ -195,10 +195,10 @@ static int udf_readpage(struct file *file, struct page *page)
 	return mpage_readpage(page, udf_get_block);
 }
 
-static int udf_readpages(struct file *file, struct address_space *mapping,
-			struct list_head *pages, unsigned nr_pages)
+static unsigned udf_readahead(struct file *file, struct address_space *mapping,
+			pgoff_t start, unsigned nr_pages)
 {
-	return mpage_readpages(mapping, pages, nr_pages, udf_get_block);
+	return mpage_readahead(mapping, start, nr_pages, udf_get_block);
 }
 
 static int udf_write_begin(struct file *file, struct address_space *mapping,
@@ -234,7 +234,7 @@ static sector_t udf_bmap(struct address_space *mapping, sector_t block)
 
 const struct address_space_operations udf_aops = {
 	.readpage	= udf_readpage,
-	.readpages	= udf_readpages,
+	.readahead	= udf_readahead,
 	.writepage	= udf_writepage,
 	.writepages	= udf_writepages,
 	.write_begin	= udf_write_begin,
diff --git a/include/linux/mpage.h b/include/linux/mpage.h
index 001f1fcf9836..dabf7b5a6a28 100644
--- a/include/linux/mpage.h
+++ b/include/linux/mpage.h
@@ -14,7 +14,7 @@
 
 struct writeback_control;
 
-int mpage_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned mpage_readahead(struct address_space *mapping, pgoff_t start,
 				unsigned nr_pages, get_block_t get_block);
 int mpage_readpage(struct page *page, get_block_t get_block);
 int mpage_writepages(struct address_space *mapping,
diff --git a/mm/migrate.c b/mm/migrate.c
index 86873b6f38a7..819aa60a0eab 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1019,7 +1019,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		 * to the LRU. Later, when the IO completes the pages are
 		 * marked uptodate and unlocked. However, the queueing
 		 * could be merging multiple pages for one bio (e.g.
-		 * mpage_readpages). If an allocation happens for the
+		 * mpage_readahead). If an allocation happens for the
 		 * second or third page, the process can end up locking
 		 * the same page twice and deadlocking. Rather than
 		 * trying to be clever about what pages can be locked,
-- 
2.24.1



* [PATCH 06/12] btrfs: Convert from readpages to readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (4 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 05/12] fs: Convert mpage_readpages to mpage_readahead Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-29  0:46   ` Dave Chinner
  2020-01-25  1:35 ` [PATCH 07/12] erofs: Convert uncompressed files " Matthew Wilcox
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel, linux-btrfs

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in btrfs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-btrfs@vger.kernel.org
---
 fs/btrfs/extent_io.c | 15 ++++-----------
 fs/btrfs/extent_io.h |  2 +-
 fs/btrfs/inode.c     | 18 +++++++++---------
 3 files changed, 14 insertions(+), 21 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2f4802f405a2..b1e2acbec165 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -4283,7 +4283,7 @@ int extent_writepages(struct address_space *mapping,
 	return ret;
 }
 
-int extent_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned extent_readahead(struct address_space *mapping, pgoff_t start,
 		     unsigned nr_pages)
 {
 	struct bio *bio = NULL;
@@ -4294,20 +4294,13 @@ int extent_readpages(struct address_space *mapping, struct list_head *pages,
 	int nr = 0;
 	u64 prev_em_start = (u64)-1;
 
-	while (!list_empty(pages)) {
+	while (nr_pages) {
 		u64 contig_end = 0;
 
-		for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) {
-			struct page *page = lru_to_page(pages);
+		for (nr = 0; nr < ARRAY_SIZE(pagepool) && nr_pages--;) {
+			struct page *page = readahead_page(mapping, start++);
 
 			prefetchw(&page->flags);
-			list_del(&page->lru);
-			if (add_to_page_cache_lru(page, mapping, page->index,
-						readahead_gfp_mask(mapping))) {
-				put_page(page);
-				break;
-			}
-
 			pagepool[nr++] = page;
 			contig_end = page_offset(page) + PAGE_SIZE - 1;
 		}
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index a8551a1f56e2..d0f154766a02 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -200,7 +200,7 @@ int extent_writepages(struct address_space *mapping,
 		      struct writeback_control *wbc);
 int btree_write_cache_pages(struct address_space *mapping,
 			    struct writeback_control *wbc);
-int extent_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned extent_readahead(struct address_space *mapping, pgoff_t start,
 		     unsigned nr_pages);
 int extent_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 		__u64 start, __u64 len);
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index c70baafb2a39..4f223b4f7dff 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -5395,8 +5395,8 @@ static void evict_inode_truncate_pages(struct inode *inode)
 
 	/*
 	 * Keep looping until we have no more ranges in the io tree.
-	 * We can have ongoing bios started by readpages (called from readahead)
-	 * that have their endio callback (extent_io.c:end_bio_extent_readpage)
+	 * We can have ongoing bios started by readahead that have
+	 * their endio callback (extent_io.c:end_bio_extent_readpage)
 	 * still in progress (unlocked the pages in the bio but did not yet
 	 * unlocked the ranges in the io tree). Therefore this means some
 	 * ranges can still be locked and eviction started because before
@@ -7586,11 +7586,11 @@ static int lock_extent_direct(struct inode *inode, u64 lockstart, u64 lockend,
 			 * for it to complete) and then invalidate the pages for
 			 * this range (through invalidate_inode_pages2_range()),
 			 * but that can lead us to a deadlock with a concurrent
-			 * call to readpages() (a buffered read or a defrag call
+			 * call to readahead (a buffered read or a defrag call
 			 * triggered a readahead) on a page lock due to an
 			 * ordered dio extent we created before but did not have
 			 * yet a corresponding bio submitted (whence it can not
-			 * complete), which makes readpages() wait for that
+			 * complete), which makes readahead wait for that
 			 * ordered extent to complete while holding a lock on
 			 * that page.
 			 */
@@ -8829,11 +8829,11 @@ static int btrfs_writepages(struct address_space *mapping,
 	return extent_writepages(mapping, wbc);
 }
 
-static int
-btrfs_readpages(struct file *file, struct address_space *mapping,
-		struct list_head *pages, unsigned nr_pages)
+static unsigned
+btrfs_readahead(struct file *file, struct address_space *mapping,
+		pgoff_t start, unsigned nr_pages)
 {
-	return extent_readpages(mapping, pages, nr_pages);
+	return extent_readahead(mapping, start, nr_pages);
 }
 
 static int __btrfs_releasepage(struct page *page, gfp_t gfp_flags)
@@ -11045,7 +11045,7 @@ static const struct address_space_operations btrfs_aops = {
 	.readpage	= btrfs_readpage,
 	.writepage	= btrfs_writepage,
 	.writepages	= btrfs_writepages,
-	.readpages	= btrfs_readpages,
+	.readahead	= btrfs_readahead,
 	.direct_IO	= btrfs_direct_IO,
 	.invalidatepage = btrfs_invalidatepage,
 	.releasepage	= btrfs_releasepage,
-- 
2.24.1



* [PATCH 07/12] erofs: Convert uncompressed files from readpages to readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (5 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 06/12] btrfs: Convert from readpages to readahead Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  1:53   ` Gao Xiang
  2020-01-29  0:57   ` Dave Chinner
  2020-01-25  1:35 ` [PATCH 08/12] erofs: Convert compressed " Matthew Wilcox
                   ` (5 subsequent siblings)
  12 siblings, 2 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel, linux-erofs

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in erofs.  Fix what I believe to be
a refcounting bug in the error case.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-erofs@lists.ozlabs.org
---
 fs/erofs/data.c              | 34 ++++++++++++++--------------------
 fs/erofs/zdata.c             |  2 +-
 include/trace/events/erofs.h |  6 +++---
 3 files changed, 18 insertions(+), 24 deletions(-)

diff --git a/fs/erofs/data.c b/fs/erofs/data.c
index fc3a8d8064f8..335c1ab05312 100644
--- a/fs/erofs/data.c
+++ b/fs/erofs/data.c
@@ -280,42 +280,36 @@ static int erofs_raw_access_readpage(struct file *file, struct page *page)
 	return 0;
 }
 
-static int erofs_raw_access_readpages(struct file *filp,
+static unsigned erofs_raw_access_readahead(struct file *file,
 				      struct address_space *mapping,
-				      struct list_head *pages,
+				      pgoff_t start,
 				      unsigned int nr_pages)
 {
 	erofs_off_t last_block;
 	struct bio *bio = NULL;
-	gfp_t gfp = readahead_gfp_mask(mapping);
-	struct page *page = list_last_entry(pages, struct page, lru);
 
-	trace_erofs_readpages(mapping->host, page, nr_pages, true);
+	trace_erofs_readpages(mapping->host, start, nr_pages, true);
 
 	for (; nr_pages; --nr_pages) {
-		page = list_entry(pages->prev, struct page, lru);
+		struct page *page = readahead_page(mapping, start++);
 
 		prefetchw(&page->flags);
-		list_del(&page->lru);
 
-		if (!add_to_page_cache_lru(page, mapping, page->index, gfp)) {
-			bio = erofs_read_raw_page(bio, mapping, page,
-						  &last_block, nr_pages, true);
+		bio = erofs_read_raw_page(bio, mapping, page, &last_block,
+				nr_pages, true);
 
-			/* all the page errors are ignored when readahead */
-			if (IS_ERR(bio)) {
-				pr_err("%s, readahead error at page %lu of nid %llu\n",
-				       __func__, page->index,
-				       EROFS_I(mapping->host)->nid);
+		/* all the page errors are ignored when readahead */
+		if (IS_ERR(bio)) {
+			pr_err("%s, readahead error at page %lu of nid %llu\n",
+			       __func__, page->index,
+			       EROFS_I(mapping->host)->nid);
 
-				bio = NULL;
-			}
+			bio = NULL;
+			put_page(page);
 		}
 
-		/* pages could still be locked */
 		put_page(page);
 	}
-	DBG_BUGON(!list_empty(pages));
 
 	/* the rare case (end in gaps) */
 	if (bio)
@@ -358,7 +352,7 @@ static sector_t erofs_bmap(struct address_space *mapping, sector_t block)
 /* for uncompressed (aligned) files and raw access for other files */
 const struct address_space_operations erofs_raw_access_aops = {
 	.readpage = erofs_raw_access_readpage,
-	.readpages = erofs_raw_access_readpages,
+	.readahead = erofs_raw_access_readahead,
 	.bmap = erofs_bmap,
 };
 
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index ca99425a4536..d3dd8cf1fc01 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -1340,7 +1340,7 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
 	struct page *head = NULL;
 	LIST_HEAD(pagepool);
 
-	trace_erofs_readpages(mapping->host, lru_to_page(pages),
+	trace_erofs_readpages(mapping->host, lru_to_page(pages)->index,
 			      nr_pages, false);
 
 	f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT;
diff --git a/include/trace/events/erofs.h b/include/trace/events/erofs.h
index 27f5caa6299a..bf9806fd1306 100644
--- a/include/trace/events/erofs.h
+++ b/include/trace/events/erofs.h
@@ -113,10 +113,10 @@ TRACE_EVENT(erofs_readpage,
 
 TRACE_EVENT(erofs_readpages,
 
-	TP_PROTO(struct inode *inode, struct page *page, unsigned int nrpage,
+	TP_PROTO(struct inode *inode, pgoff_t start, unsigned int nrpage,
 		bool raw),
 
-	TP_ARGS(inode, page, nrpage, raw),
+	TP_ARGS(inode, start, nrpage, raw),
 
 	TP_STRUCT__entry(
 		__field(dev_t,		dev	)
@@ -129,7 +129,7 @@ TRACE_EVENT(erofs_readpages,
 	TP_fast_assign(
 		__entry->dev	= inode->i_sb->s_dev;
 		__entry->nid	= EROFS_I(inode)->nid;
-		__entry->start	= page->index;
+		__entry->start	= start;
 		__entry->nrpage	= nrpage;
 		__entry->raw	= raw;
 	),
-- 
2.24.1



* [PATCH 08/12] erofs: Convert compressed files from readpages to readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (6 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 07/12] erofs: Convert uncompressed files " Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 09/12] ext4: Convert " Matthew Wilcox
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel, linux-erofs

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in erofs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-erofs@lists.ozlabs.org
---
 fs/erofs/zdata.c | 21 +++++++--------------
 1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index d3dd8cf1fc01..20efb87dd957 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -1328,28 +1328,26 @@ static bool should_decompress_synchronously(struct erofs_sb_info *sbi,
 	return nr <= sbi->max_sync_decompress_pages;
 }
 
-static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
-			     struct list_head *pages, unsigned int nr_pages)
+static
+unsigned z_erofs_readahead(struct file *file, struct address_space *mapping,
+			     pgoff_t start, unsigned int nr_pages)
 {
 	struct inode *const inode = mapping->host;
 	struct erofs_sb_info *const sbi = EROFS_I_SB(inode);
 
 	bool sync = should_decompress_synchronously(sbi, nr_pages);
 	struct z_erofs_decompress_frontend f = DECOMPRESS_FRONTEND_INIT(inode);
-	gfp_t gfp = mapping_gfp_constraint(mapping, GFP_KERNEL);
 	struct page *head = NULL;
 	LIST_HEAD(pagepool);
 
-	trace_erofs_readpages(mapping->host, lru_to_page(pages)->index,
-			      nr_pages, false);
+	trace_erofs_readpages(inode, start, nr_pages, false);
 
-	f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT;
+	f.headoffset = (erofs_off_t)start << PAGE_SHIFT;
 
 	for (; nr_pages; --nr_pages) {
-		struct page *page = lru_to_page(pages);
+		struct page *page = readahead_page(mapping, start);
 
 		prefetchw(&page->flags);
-		list_del(&page->lru);
 
 		/*
 		 * A pure asynchronous readahead is indicated if
@@ -1358,11 +1356,6 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
 		 */
 		sync &= !(PageReadahead(page) && !head);
 
-		if (add_to_page_cache_lru(page, mapping, page->index, gfp)) {
-			list_add(&page->lru, &pagepool);
-			continue;
-		}
-
 		set_page_private(page, (unsigned long)head);
 		head = page;
 	}
@@ -1396,6 +1389,6 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
 
 const struct address_space_operations z_erofs_aops = {
 	.readpage = z_erofs_readpage,
-	.readpages = z_erofs_readpages,
+	.readahead = z_erofs_readahead,
 };
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 09/12] ext4: Convert from readpages to readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (7 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 08/12] erofs: Convert compressed " Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 10/12] f2fs: " Matthew Wilcox
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel, linux-ext4

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in ext4.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-ext4@vger.kernel.org
---
 fs/ext4/ext4.h     |  2 +-
 fs/ext4/inode.c    | 24 ++++++++++++------------
 fs/ext4/readpage.c | 20 +++++++-------------
 3 files changed, 20 insertions(+), 26 deletions(-)

diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
index f8578caba40d..a035694f3d9b 100644
--- a/fs/ext4/ext4.h
+++ b/fs/ext4/ext4.h
@@ -3216,7 +3216,7 @@ static inline void ext4_set_de_type(struct super_block *sb,
 
 /* readpages.c */
 extern int ext4_mpage_readpages(struct address_space *mapping,
-				struct list_head *pages, struct page *page,
+				pgoff_t start, struct page *page,
 				unsigned nr_pages, bool is_readahead);
 extern int __init ext4_init_post_read_processing(void);
 extern void ext4_exit_post_read_processing(void);
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 629a25d999f0..4afefc991b01 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3217,7 +3217,7 @@ static sector_t ext4_bmap(struct address_space *mapping, sector_t block)
 static int ext4_readpage(struct file *file, struct page *page)
 {
 	int ret = -EAGAIN;
-	struct inode *inode = page->mapping->host;
+	struct inode *inode = file_inode(file);
 
 	trace_ext4_readpage(page);
 
@@ -3225,23 +3225,23 @@ static int ext4_readpage(struct file *file, struct page *page)
 		ret = ext4_readpage_inline(inode, page);
 
 	if (ret == -EAGAIN)
-		return ext4_mpage_readpages(page->mapping, NULL, page, 1,
-						false);
+		return ext4_mpage_readpages(page->mapping, 0, page, 1, false);
 
 	return ret;
 }
 
-static int
-ext4_readpages(struct file *file, struct address_space *mapping,
-		struct list_head *pages, unsigned nr_pages)
+static unsigned
+ext4_readahead(struct file *file, struct address_space *mapping,
+		pgoff_t start, unsigned nr_pages)
 {
 	struct inode *inode = mapping->host;
 
-	/* If the file has inline data, no need to do readpages. */
+	/* If the file has inline data, no need to do readahead. */
 	if (ext4_has_inline_data(inode))
-		return 0;
+		return nr_pages;
 
-	return ext4_mpage_readpages(mapping, pages, NULL, nr_pages, true);
+	ext4_mpage_readpages(mapping, start, NULL, nr_pages, true);
+	return 0;
 }
 
 static void ext4_invalidatepage(struct page *page, unsigned int offset,
@@ -3565,7 +3565,7 @@ static int ext4_set_page_dirty(struct page *page)
 
 static const struct address_space_operations ext4_aops = {
 	.readpage		= ext4_readpage,
-	.readpages		= ext4_readpages,
+	.readahead		= ext4_readahead,
 	.writepage		= ext4_writepage,
 	.writepages		= ext4_writepages,
 	.write_begin		= ext4_write_begin,
@@ -3582,7 +3582,7 @@ static const struct address_space_operations ext4_aops = {
 
 static const struct address_space_operations ext4_journalled_aops = {
 	.readpage		= ext4_readpage,
-	.readpages		= ext4_readpages,
+	.readahead		= ext4_readahead,
 	.writepage		= ext4_writepage,
 	.writepages		= ext4_writepages,
 	.write_begin		= ext4_write_begin,
@@ -3598,7 +3598,7 @@ static const struct address_space_operations ext4_journalled_aops = {
 
 static const struct address_space_operations ext4_da_aops = {
 	.readpage		= ext4_readpage,
-	.readpages		= ext4_readpages,
+	.readahead		= ext4_readahead,
 	.writepage		= ext4_writepage,
 	.writepages		= ext4_writepages,
 	.write_begin		= ext4_da_write_begin,
diff --git a/fs/ext4/readpage.c b/fs/ext4/readpage.c
index fef7755300c3..aa3f46a237ef 100644
--- a/fs/ext4/readpage.c
+++ b/fs/ext4/readpage.c
@@ -7,8 +7,8 @@
  *
  * This was originally taken from fs/mpage.c
  *
- * The intent is the ext4_mpage_readpages() function here is intended
- * to replace mpage_readpages() in the general case, not just for
+ * The ext4_mpage_readahead() function here is intended to
+ * replace mpage_readahead() in the general case, not just for
  * encrypted files.  It has some limitations (see below), where it
  * will fall back to read_block_full_page(), but these limitations
  * should only be hit when page_size != block_size.
@@ -209,9 +209,8 @@ static inline loff_t ext4_readpage_limit(struct inode *inode)
 	return i_size_read(inode);
 }
 
-int ext4_mpage_readpages(struct address_space *mapping,
-			 struct list_head *pages, struct page *page,
-			 unsigned nr_pages, bool is_readahead)
+int ext4_mpage_readpages(struct address_space *mapping, pgoff_t start,
+		struct page *page, unsigned nr_pages, bool is_readahead)
 {
 	struct bio *bio = NULL;
 	sector_t last_block_in_bio = 0;
@@ -239,14 +238,10 @@ int ext4_mpage_readpages(struct address_space *mapping,
 		int fully_mapped = 1;
 		unsigned first_hole = blocks_per_page;
 
-		if (pages) {
-			page = lru_to_page(pages);
+		if (is_readahead) {
+			page = readahead_page(mapping, start++);
 
 			prefetchw(&page->flags);
-			list_del(&page->lru);
-			if (add_to_page_cache_lru(page, mapping, page->index,
-				  readahead_gfp_mask(mapping)))
-				goto next_page;
 		}
 
 		if (page_has_buffers(page))
@@ -402,10 +397,9 @@ int ext4_mpage_readpages(struct address_space *mapping,
 		else
 			unlock_page(page);
 	next_page:
-		if (pages)
+		if (is_readahead)
 			put_page(page);
 	}
-	BUG_ON(pages && !list_empty(pages));
 	if (bio)
 		submit_bio(bio);
 	return 0;
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 10/12] f2fs: Convert from readpages to readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (8 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 09/12] ext4: Convert " Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-25  1:35 ` [PATCH 11/12] fuse: " Matthew Wilcox
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel, linux-f2fs-devel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in f2fs.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-f2fs-devel@lists.sourceforge.net
---
 fs/f2fs/data.c              | 33 +++++++++++++--------------------
 include/trace/events/f2fs.h |  6 +++---
 2 files changed, 16 insertions(+), 23 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index a034cd0ce021..70bbcdfbd368 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -1882,7 +1882,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
  * from read-ahead.
  */
 static int f2fs_mpage_readpages(struct address_space *mapping,
-			struct list_head *pages, struct page *page,
+			pgoff_t start, struct page *page,
 			unsigned nr_pages, bool is_readahead)
 {
 	struct bio *bio = NULL;
@@ -1901,15 +1901,10 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 	map.m_may_create = false;
 
 	for (; nr_pages; nr_pages--) {
-		if (pages) {
-			page = list_last_entry(pages, struct page, lru);
+		if (is_readahead) {
+			page = readahead_page(mapping, start++);
 
 			prefetchw(&page->flags);
-			list_del(&page->lru);
-			if (add_to_page_cache_lru(page, mapping,
-						  page_index(page),
-						  readahead_gfp_mask(mapping)))
-				goto next_page;
 		}
 
 		ret = f2fs_read_single_page(inode, page, nr_pages, &map, &bio,
@@ -1919,14 +1914,12 @@ static int f2fs_mpage_readpages(struct address_space *mapping,
 			zero_user_segment(page, 0, PAGE_SIZE);
 			unlock_page(page);
 		}
-next_page:
-		if (pages)
+		if (is_readahead)
 			put_page(page);
 	}
-	BUG_ON(pages && !list_empty(pages));
 	if (bio)
 		__submit_bio(F2FS_I_SB(inode), bio, DATA);
-	return pages ? 0 : ret;
+	return ret;
 }
 
 static int f2fs_read_data_page(struct file *file, struct page *page)
@@ -1941,24 +1934,24 @@ static int f2fs_read_data_page(struct file *file, struct page *page)
 		ret = f2fs_read_inline_data(inode, page);
 	if (ret == -EAGAIN)
 		ret = f2fs_mpage_readpages(page_file_mapping(page),
-						NULL, page, 1, false);
+						0, page, 1, false);
 	return ret;
 }
 
-static int f2fs_read_data_pages(struct file *file,
+static unsigned f2fs_readahead(struct file *file,
 			struct address_space *mapping,
-			struct list_head *pages, unsigned nr_pages)
+			pgoff_t start, unsigned nr_pages)
 {
 	struct inode *inode = mapping->host;
-	struct page *page = list_last_entry(pages, struct page, lru);
 
-	trace_f2fs_readpages(inode, page, nr_pages);
+	trace_f2fs_readpages(inode, start, nr_pages);
 
 	/* If the file has inline data, skip readpages */
 	if (f2fs_has_inline_data(inode))
-		return 0;
+		return nr_pages;
 
-	return f2fs_mpage_readpages(mapping, pages, NULL, nr_pages, true);
+	f2fs_mpage_readpages(mapping, start, NULL, nr_pages, true);
+	return 0;
 }
 
 static int encrypt_one_page(struct f2fs_io_info *fio)
@@ -3265,7 +3258,7 @@ static void f2fs_swap_deactivate(struct file *file)
 
 const struct address_space_operations f2fs_dblock_aops = {
 	.readpage	= f2fs_read_data_page,
-	.readpages	= f2fs_read_data_pages,
+	.readahead	= f2fs_readahead,
 	.writepage	= f2fs_write_data_page,
 	.writepages	= f2fs_write_data_pages,
 	.write_begin	= f2fs_write_begin,
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 1796ff99c3e9..35b5bb2188a3 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -1368,9 +1368,9 @@ TRACE_EVENT(f2fs_writepages,
 
 TRACE_EVENT(f2fs_readpages,
 
-	TP_PROTO(struct inode *inode, struct page *page, unsigned int nrpage),
+	TP_PROTO(struct inode *inode, pgoff_t start, unsigned int nrpage),
 
-	TP_ARGS(inode, page, nrpage),
+	TP_ARGS(inode, start, nrpage),
 
 	TP_STRUCT__entry(
 		__field(dev_t,	dev)
@@ -1382,7 +1382,7 @@ TRACE_EVENT(f2fs_readpages,
 	TP_fast_assign(
 		__entry->dev	= inode->i_sb->s_dev;
 		__entry->ino	= inode->i_ino;
-		__entry->start	= page->index;
+		__entry->start	= start;
 		__entry->nrpage	= nrpage;
 	),
 
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 11/12] fuse: Convert from readpages to readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (9 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 10/12] f2fs: " Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-29  1:08   ` Dave Chinner
  2020-01-29 10:50   ` Miklos Szeredi
  2020-01-25  1:35 ` [PATCH 12/12] iomap: " Matthew Wilcox
  2020-02-13  4:38 ` [PATCH 00/12] Change readahead API Andrew Morton
  12 siblings, 2 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in fuse.  Switching away from the
read_cache_pages() helper gets rid of an implicit call to put_page(),
so we can get rid of the get_page() call in fuse_readpages_fill().
We can also get rid of the call to fuse_wait_on_page_writeback() as
this page is newly allocated and so cannot be under writeback.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/fuse/file.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index ce715380143c..b6d0ed7d805b 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -911,16 +911,13 @@ struct fuse_fill_data {
 	unsigned int max_pages;
 };
 
-static int fuse_readpages_fill(void *_data, struct page *page)
+static int fuse_readpages_fill(struct fuse_fill_data *data, struct page *page)
 {
-	struct fuse_fill_data *data = _data;
 	struct fuse_io_args *ia = data->ia;
 	struct fuse_args_pages *ap = &ia->ap;
 	struct inode *inode = data->inode;
 	struct fuse_conn *fc = get_fuse_conn(inode);
 
-	fuse_wait_on_page_writeback(inode, page->index);
-
 	if (ap->num_pages &&
 	    (ap->num_pages == fc->max_pages ||
 	     (ap->num_pages + 1) * PAGE_SIZE > fc->max_read ||
@@ -942,7 +939,6 @@ static int fuse_readpages_fill(void *_data, struct page *page)
 		return -EIO;
 	}
 
-	get_page(page);
 	ap->pages[ap->num_pages] = page;
 	ap->descs[ap->num_pages].length = PAGE_SIZE;
 	ap->num_pages++;
@@ -950,15 +946,13 @@ static int fuse_readpages_fill(void *_data, struct page *page)
 	return 0;
 }
 
-static int fuse_readpages(struct file *file, struct address_space *mapping,
-			  struct list_head *pages, unsigned nr_pages)
+static unsigned fuse_readahead(struct file *file, struct address_space *mapping,
+			  pgoff_t start, unsigned nr_pages)
 {
 	struct inode *inode = mapping->host;
 	struct fuse_conn *fc = get_fuse_conn(inode);
 	struct fuse_fill_data data;
-	int err;
 
-	err = -EIO;
 	if (is_bad_inode(inode))
 		goto out;
 
@@ -968,19 +962,24 @@ static int fuse_readpages(struct file *file, struct address_space *mapping,
 	data.max_pages = min_t(unsigned int, nr_pages, fc->max_pages);
 ;
 	data.ia = fuse_io_alloc(NULL, data.max_pages);
-	err = -ENOMEM;
 	if (!data.ia)
 		goto out;
 
-	err = read_cache_pages(mapping, pages, fuse_readpages_fill, &data);
-	if (!err) {
-		if (data.ia->ap.num_pages)
-			fuse_send_readpages(data.ia, file);
-		else
-			fuse_io_free(data.ia);
+	while (nr_pages--) {
+		struct page *page = readahead_page(mapping, start++);
+		int err = fuse_readpages_fill(&data, page);
+
+		if (!err)
+			continue;
+		nr_pages++;
+		goto out;
 	}
+	if (data.ia->ap.num_pages)
+		fuse_send_readpages(data.ia, file);
+	else
+		fuse_io_free(data.ia);
 out:
-	return err;
+	return nr_pages;
 }
 
 static ssize_t fuse_cache_read_iter(struct kiocb *iocb, struct iov_iter *to)
@@ -3358,10 +3357,10 @@ static const struct file_operations fuse_file_operations = {
 
 static const struct address_space_operations fuse_file_aops  = {
 	.readpage	= fuse_readpage,
+	.readahead	= fuse_readahead,
 	.writepage	= fuse_writepage,
 	.writepages	= fuse_writepages,
 	.launder_page	= fuse_launder_page,
-	.readpages	= fuse_readpages,
 	.set_page_dirty	= __set_page_dirty_nobuffers,
 	.bmap		= fuse_bmap,
 	.direct_IO	= fuse_direct_IO,
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH 12/12] iomap: Convert from readpages to readahead
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (10 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 11/12] fuse: " Matthew Wilcox
@ 2020-01-25  1:35 ` Matthew Wilcox
  2020-01-29  1:38   ` Dave Chinner
  2020-02-13  4:38 ` [PATCH 00/12] Change readahead API Andrew Morton
  12 siblings, 1 reply; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25  1:35 UTC (permalink / raw)
  To: linux-fsdevel; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-kernel, linux-xfs

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use the new readahead operation in XFS and iomap.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: linux-xfs@vger.kernel.org
---
 fs/iomap/buffered-io.c | 72 +++++++++---------------------------------
 fs/iomap/trace.h       |  2 +-
 fs/xfs/xfs_aops.c      | 10 +++---
 include/linux/iomap.h  |  2 +-
 4 files changed, 22 insertions(+), 64 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 1e2f3cc4579b..07aedd359c4b 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -216,7 +216,6 @@ struct iomap_readpage_ctx {
 	bool			cur_page_in_bio;
 	bool			is_readahead;
 	struct bio		*bio;
-	struct list_head	*pages;
 };
 
 static void
@@ -367,36 +366,8 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)
 }
 EXPORT_SYMBOL_GPL(iomap_readpage);
 
-static struct page *
-iomap_next_page(struct inode *inode, struct list_head *pages, loff_t pos,
-		loff_t length, loff_t *done)
-{
-	while (!list_empty(pages)) {
-		struct page *page = lru_to_page(pages);
-
-		if (page_offset(page) >= (u64)pos + length)
-			break;
-
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, inode->i_mapping, page->index,
-				GFP_NOFS))
-			return page;
-
-		/*
-		 * If we already have a page in the page cache at index we are
-		 * done.  Upper layers don't care if it is uptodate after the
-		 * readpages call itself as every page gets checked again once
-		 * actually needed.
-		 */
-		*done += PAGE_SIZE;
-		put_page(page);
-	}
-
-	return NULL;
-}
-
 static loff_t
-iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
+iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 		void *data, struct iomap *iomap, struct iomap *srcmap)
 {
 	struct iomap_readpage_ctx *ctx = data;
@@ -410,10 +381,8 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
 			ctx->cur_page = NULL;
 		}
 		if (!ctx->cur_page) {
-			ctx->cur_page = iomap_next_page(inode, ctx->pages,
-					pos, length, &done);
-			if (!ctx->cur_page)
-				break;
+			ctx->cur_page = readahead_page(inode->i_mapping,
+					pos / PAGE_SIZE);
 			ctx->cur_page_in_bio = false;
 		}
 		ret = iomap_readpage_actor(inode, pos + done, length - done,
@@ -423,48 +392,37 @@ iomap_readpages_actor(struct inode *inode, loff_t pos, loff_t length,
 	return done;
 }
 
-int
-iomap_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned
+iomap_readahead(struct address_space *mapping, pgoff_t start,
 		unsigned nr_pages, const struct iomap_ops *ops)
 {
 	struct iomap_readpage_ctx ctx = {
-		.pages		= pages,
 		.is_readahead	= true,
 	};
-	loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
-	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
-	loff_t length = last - pos + PAGE_SIZE, ret = 0;
+	loff_t pos = start * PAGE_SIZE;
+	loff_t length = nr_pages * PAGE_SIZE;
 
-	trace_iomap_readpages(mapping->host, nr_pages);
+	trace_iomap_readahead(mapping->host, nr_pages);
 
 	while (length > 0) {
-		ret = iomap_apply(mapping->host, pos, length, 0, ops,
-				&ctx, iomap_readpages_actor);
+		loff_t ret = iomap_apply(mapping->host, pos, length, 0, ops,
+				&ctx, iomap_readahead_actor);
 		if (ret <= 0) {
 			WARN_ON_ONCE(ret == 0);
-			goto done;
+			break;
 		}
 		pos += ret;
 		length -= ret;
 	}
-	ret = 0;
-done:
+
 	if (ctx.bio)
 		submit_bio(ctx.bio);
-	if (ctx.cur_page) {
-		if (!ctx.cur_page_in_bio)
-			unlock_page(ctx.cur_page);
+	if (ctx.cur_page && ctx.cur_page_in_bio)
 		put_page(ctx.cur_page);
-	}
 
-	/*
-	 * Check that we didn't lose a page due to the arcance calling
-	 * conventions..
-	 */
-	WARN_ON_ONCE(!ret && !list_empty(ctx.pages));
-	return ret;
+	return length / PAGE_SIZE;
 }
-EXPORT_SYMBOL_GPL(iomap_readpages);
+EXPORT_SYMBOL_GPL(iomap_readahead);
 
 /*
  * iomap_is_partially_uptodate checks whether blocks within a page are
diff --git a/fs/iomap/trace.h b/fs/iomap/trace.h
index 6dc227b8c47e..d6ba705f938a 100644
--- a/fs/iomap/trace.h
+++ b/fs/iomap/trace.h
@@ -39,7 +39,7 @@ DEFINE_EVENT(iomap_readpage_class, name,	\
 	TP_PROTO(struct inode *inode, int nr_pages), \
 	TP_ARGS(inode, nr_pages))
 DEFINE_READPAGE_EVENT(iomap_readpage);
-DEFINE_READPAGE_EVENT(iomap_readpages);
+DEFINE_READPAGE_EVENT(iomap_readahead);
 
 DECLARE_EVENT_CLASS(iomap_page_class,
 	TP_PROTO(struct inode *inode, struct page *page, unsigned long off,
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 3a688eb5c5ae..4d9da34e759b 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -621,14 +621,14 @@ xfs_vm_readpage(
 	return iomap_readpage(page, &xfs_read_iomap_ops);
 }
 
-STATIC int
-xfs_vm_readpages(
+STATIC unsigned
+xfs_vm_readahead(
 	struct file		*unused,
 	struct address_space	*mapping,
-	struct list_head	*pages,
+	pgoff_t			start,
 	unsigned		nr_pages)
 {
-	return iomap_readpages(mapping, pages, nr_pages, &xfs_read_iomap_ops);
+	return iomap_readahead(mapping, start, nr_pages, &xfs_read_iomap_ops);
 }
 
 static int
@@ -644,7 +644,7 @@ xfs_iomap_swapfile_activate(
 
 const struct address_space_operations xfs_address_space_operations = {
 	.readpage		= xfs_vm_readpage,
-	.readpages		= xfs_vm_readpages,
+	.readahead		= xfs_vm_readahead,
 	.writepage		= xfs_vm_writepage,
 	.writepages		= xfs_vm_writepages,
 	.set_page_dirty		= iomap_set_page_dirty,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 8b09463dae0d..81c6067e9b61 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -155,7 +155,7 @@ loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 		const struct iomap_ops *ops);
 int iomap_readpage(struct page *page, const struct iomap_ops *ops);
-int iomap_readpages(struct address_space *mapping, struct list_head *pages,
+unsigned iomap_readahead(struct address_space *, pgoff_t start,
 		unsigned nr_pages, const struct iomap_ops *ops);
 int iomap_set_page_dirty(struct page *page);
 int iomap_is_partially_uptodate(struct page *page, unsigned long from,
-- 
2.24.1


^ permalink raw reply related	[flat|nested] 34+ messages in thread

* Re: [PATCH 07/12] erofs: Convert uncompressed files from readpages to readahead
  2020-01-25  1:35 ` [PATCH 07/12] erofs: Convert uncompressed files " Matthew Wilcox
@ 2020-01-25  1:53   ` Gao Xiang
  2020-01-25 19:09     ` Matthew Wilcox
  2020-01-29  0:57   ` Dave Chinner
  1 sibling, 1 reply; 34+ messages in thread
From: Gao Xiang @ 2020-01-25  1:53 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-erofs, linux-kernel

Hi Matthew,

On Fri, Jan 24, 2020 at 05:35:48PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Use the new readahead operation in erofs.  Fix what I believe to be
> a refcounting bug in the error case.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: linux-erofs@lists.ozlabs.org
> ---
>  fs/erofs/data.c              | 34 ++++++++++++++--------------------
>  fs/erofs/zdata.c             |  2 +-
>  include/trace/events/erofs.h |  6 +++---
>  3 files changed, 18 insertions(+), 24 deletions(-)
> 
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index fc3a8d8064f8..335c1ab05312 100644
> --- a/fs/erofs/data.c
> +++ b/fs/erofs/data.c
> @@ -280,42 +280,36 @@ static int erofs_raw_access_readpage(struct file *file, struct page *page)
>  	return 0;
>  }
>  
> -static int erofs_raw_access_readpages(struct file *filp,
> +static unsigned erofs_raw_access_readahead(struct file *file,
>  				      struct address_space *mapping,
> -				      struct list_head *pages,
> +				      pgoff_t start,
>  				      unsigned int nr_pages)
>  {
>  	erofs_off_t last_block;
>  	struct bio *bio = NULL;
> -	gfp_t gfp = readahead_gfp_mask(mapping);
> -	struct page *page = list_last_entry(pages, struct page, lru);
>  
> -	trace_erofs_readpages(mapping->host, page, nr_pages, true);
> +	trace_erofs_readpages(mapping->host, start, nr_pages, true);
>  
>  	for (; nr_pages; --nr_pages) {
> -		page = list_entry(pages->prev, struct page, lru);
> +		struct page *page = readahead_page(mapping, start++);
>  
>  		prefetchw(&page->flags);
> -		list_del(&page->lru);
>  
> -		if (!add_to_page_cache_lru(page, mapping, page->index, gfp)) {
> -			bio = erofs_read_raw_page(bio, mapping, page,
> -						  &last_block, nr_pages, true);
> +		bio = erofs_read_raw_page(bio, mapping, page, &last_block,
> +				nr_pages, true);
>  
> -			/* all the page errors are ignored when readahead */
> -			if (IS_ERR(bio)) {
> -				pr_err("%s, readahead error at page %lu of nid %llu\n",
> -				       __func__, page->index,
> -				       EROFS_I(mapping->host)->nid);
> +		/* all the page errors are ignored when readahead */
> +		if (IS_ERR(bio)) {
> +			pr_err("%s, readahead error at page %lu of nid %llu\n",
> +			       __func__, page->index,
> +			       EROFS_I(mapping->host)->nid);
>  
> -				bio = NULL;
> -			}
> +			bio = NULL;
> +			put_page(page);

Out of curiosity, a small question... why do we need put_page(page) twice
if erofs_read_raw_page() returns an error?

One put_page(page) drops the temporary reference taken for this request;
we could put_page(page) in advance since the pages are still locked until endio.

The other put_page(page) is for the page cache xarray reference.  I think in
this case the page has been successfully inserted into the page cache anyway,
and after erroring out it will trigger .readpage again... so probably we need
to keep that refcount for the page cache xarray?

If I'm missing something, kindly correct me if I'm wrong....

Thanks,
Gao Xiang

>  		}
>  
> -		/* pages could still be locked */
>  		put_page(page);
>  	}
> -	DBG_BUGON(!list_empty(pages));
>  
>  	/* the rare case (end in gaps) */
>  	if (bio)
> @@ -358,7 +352,7 @@ static sector_t erofs_bmap(struct address_space *mapping, sector_t block)
>  /* for uncompressed (aligned) files and raw access for other files */
>  const struct address_space_operations erofs_raw_access_aops = {
>  	.readpage = erofs_raw_access_readpage,
> -	.readpages = erofs_raw_access_readpages,
> +	.readahead = erofs_raw_access_readahead,
>  	.bmap = erofs_bmap,
>  };
>  
> diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
> index ca99425a4536..d3dd8cf1fc01 100644
> --- a/fs/erofs/zdata.c
> +++ b/fs/erofs/zdata.c
> @@ -1340,7 +1340,7 @@ static int z_erofs_readpages(struct file *filp, struct address_space *mapping,
>  	struct page *head = NULL;
>  	LIST_HEAD(pagepool);
>  
> -	trace_erofs_readpages(mapping->host, lru_to_page(pages),
> +	trace_erofs_readpages(mapping->host, lru_to_page(pages)->index,
>  			      nr_pages, false);
>  
>  	f.headoffset = (erofs_off_t)lru_to_page(pages)->index << PAGE_SHIFT;
> diff --git a/include/trace/events/erofs.h b/include/trace/events/erofs.h
> index 27f5caa6299a..bf9806fd1306 100644
> --- a/include/trace/events/erofs.h
> +++ b/include/trace/events/erofs.h
> @@ -113,10 +113,10 @@ TRACE_EVENT(erofs_readpage,
>  
>  TRACE_EVENT(erofs_readpages,
>  
> -	TP_PROTO(struct inode *inode, struct page *page, unsigned int nrpage,
> +	TP_PROTO(struct inode *inode, pgoff_t start, unsigned int nrpage,
>  		bool raw),
>  
> -	TP_ARGS(inode, page, nrpage, raw),
> +	TP_ARGS(inode, start, nrpage, raw),
>  
>  	TP_STRUCT__entry(
>  		__field(dev_t,		dev	)
> @@ -129,7 +129,7 @@ TRACE_EVENT(erofs_readpages,
>  	TP_fast_assign(
>  		__entry->dev	= inode->i_sb->s_dev;
>  		__entry->nid	= EROFS_I(inode)->nid;
> -		__entry->start	= page->index;
> +		__entry->start	= start;
>  		__entry->nrpage	= nrpage;
>  		__entry->raw	= raw;
>  	),
> -- 
> 2.24.1
> 

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 04/12] mm: Add readahead address space operation
  2020-01-25  1:35 ` [PATCH 04/12] mm: Add readahead address space operation Matthew Wilcox
@ 2020-01-25  3:57   ` Randy Dunlap
  2020-02-01  0:25     ` Matthew Wilcox
  2020-01-29  0:24   ` Dave Chinner
  1 sibling, 1 reply; 34+ messages in thread
From: Randy Dunlap @ 2020-01-25  3:57 UTC (permalink / raw)
  To: Matthew Wilcox, linux-fsdevel
  Cc: linux-mm, linux-kernel, linux-btrfs, linux-erofs, linux-ext4,
	linux-f2fs-devel, linux-xfs, cluster-devel, ocfs2-devel

On 1/24/20 5:35 PM, Matthew Wilcox wrote:
> diff --git a/Documentation/filesystems/vfs.rst b/Documentation/filesystems/vfs.rst
> index 7d4d09dd5e6d..bb06fb7b120b 100644
> --- a/Documentation/filesystems/vfs.rst
> +++ b/Documentation/filesystems/vfs.rst
> @@ -706,6 +706,8 @@ cache in your filesystem.  The following members are defined:
>  		int (*readpage)(struct file *, struct page *);
>  		int (*writepages)(struct address_space *, struct writeback_control *);
>  		int (*set_page_dirty)(struct page *page);
> +		unsigned (*readahead)(struct file *filp, struct address_space *mapping,
> +				 pgoff_t start, unsigned nr_pages);
>  		int (*readpages)(struct file *filp, struct address_space *mapping,
>  				 struct list_head *pages, unsigned nr_pages);
>  		int (*write_begin)(struct file *, struct address_space *mapping,
> @@ -781,6 +783,15 @@ cache in your filesystem.  The following members are defined:
>  	If defined, it should set the PageDirty flag, and the
>  	PAGECACHE_TAG_DIRTY tag in the radix tree.
>  
> +``readahead``
> +	called by the VM to read pages associated with the address_space
> +	object.  The pages are consecutive in the page cache and are
> +        locked.  The implementation should decrement the page refcount after
> +        attempting I/O on each page.  Usually the page will be unlocked by
> +        the I/O completion handler.  If the function does not attempt I/O on
> +        some pages, return the number of pages which were not read so the
> +        common code can unlock the pages for you.
> +

Please use consistent indentation (tabs).

>  ``readpages``
>  	called by the VM to read pages associated with the address_space
>  	object.  This is essentially just a vector version of readpage.


cheers.
-- 
~Randy


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 07/12] erofs: Convert uncompressed files from readpages to readahead
  2020-01-25  1:53   ` Gao Xiang
@ 2020-01-25 19:09     ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25 19:09 UTC (permalink / raw)
  To: Gao Xiang; +Cc: linux-fsdevel, linux-mm, linux-erofs, linux-kernel

On Sat, Jan 25, 2020 at 09:53:29AM +0800, Gao Xiang wrote:
> > +		/* all the page errors are ignored when readahead */
> > +		if (IS_ERR(bio)) {
> > +			pr_err("%s, readahead error at page %lu of nid %llu\n",
> > +			       __func__, page->index,
> > +			       EROFS_I(mapping->host)->nid);
> >  
> > -				bio = NULL;
> > -			}
> > +			bio = NULL;
> > +			put_page(page);
> 
> Out of curiosity, a small question... why do we need put_page(page) twice
> if erofs_read_raw_page() returns an error?
> 
> One put_page(page) drops the temporary reference taken for this request;
> we could put_page(page) in advance since the pages are still locked until endio.
> 
> The other put_page(page) is for the page cache xarray reference.  I think in
> this case the page has been successfully inserted into the page cache anyway,
> and after erroring out it will trigger .readpage again... so probably we need
> to keep that refcount for the page cache xarray?
> 
> If I'm missing something, kindly correct me if I'm wrong....

You're quite right.  After readahead has completed, the page should have
a refcount of 1 and be unlocked.  If we hit an error, the page should
be !uptodate.  It doesn't matter whether we set PageError or not in this
case; filemap_fault() will ClearPageError() before retrying if the page
is !uptodate.  This extra put_page() is wrong, and I'll remove it from
the next version.  Thanks!

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 03/12] readahead: Put pages in cache earlier
  2020-01-25  1:35 ` [PATCH 03/12] readahead: Put pages in cache earlier Matthew Wilcox
@ 2020-01-25 19:44   ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-25 19:44 UTC (permalink / raw)
  To: linux-fsdevel
  Cc: linux-mm, linux-kernel, linux-btrfs, linux-erofs, linux-ext4,
	linux-f2fs-devel, linux-xfs, cluster-devel, ocfs2-devel

On Fri, Jan 24, 2020 at 05:35:44PM -0800, Matthew Wilcox wrote:
> @@ -192,8 +194,18 @@ unsigned long __do_page_cache_readahead(struct address_space *mapping,
>  		page = __page_cache_alloc(gfp_mask);
>  		if (!page)
>  			break;
> -		page->index = page_offset;
> -		list_add(&page->lru, &page_pool);
> +		if (use_list) {
> +			page->index = page_offset;
> +			list_add(&page->lru, &page_pool);
> +		} else if (!add_to_page_cache_lru(page, mapping, page_offset,
> +					gfp_mask)) {
> +			if (nr_pages)
> +				read_pages(mapping, filp, &page_pool,
> +						page_offset - nr_pages,
> +						nr_pages);
> +			nr_pages = 0;

This is missing a call to put_page().

> +			continue;
> +		}
>  		if (page_idx == nr_to_read - lookahead_size)
>  			SetPageReadahead(page);
>  		nr_pages++;

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 04/12] mm: Add readahead address space operation
  2020-01-25  1:35 ` [PATCH 04/12] mm: Add readahead address space operation Matthew Wilcox
  2020-01-25  3:57   ` Randy Dunlap
@ 2020-01-29  0:24   ` Dave Chinner
  2020-01-30  8:00     ` Matthew Wilcox
  1 sibling, 1 reply; 34+ messages in thread
From: Dave Chinner @ 2020-01-29  0:24 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs, linux-erofs,
	linux-ext4, linux-f2fs-devel, linux-xfs, cluster-devel,
	ocfs2-devel

On Fri, Jan 24, 2020 at 05:35:45PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> This replaces ->readpages with a saner interface:
>  - Return the number of pages not read instead of an ignored error code.
>  - Pages are already in the page cache when ->readahead is called.
>  - Implementation looks up the pages in the page cache instead of
>    having them passed in a linked list.
....
> diff --git a/mm/readahead.c b/mm/readahead.c
> index 5a6676640f20..6d65dae6dad0 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -121,7 +121,18 @@ static void read_pages(struct address_space *mapping, struct file *filp,
>  
>  	blk_start_plug(&plug);
>  
> -	if (mapping->a_ops->readpages) {
> +	if (mapping->a_ops->readahead) {
> +		unsigned left = mapping->a_ops->readahead(filp, mapping,
> +				start, nr_pages);
> +
> +		while (left) {
> +			struct page *page = readahead_page(mapping,
> +					start + nr_pages - left - 1);

Off by one? start = 2, nr_pages = 2, left = 1, this looks up the
page at index 2, which is the one we issued IO on, not the one we
"left behind" which is at index 3.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 06/12] btrfs: Convert from readpages to readahead
  2020-01-25  1:35 ` [PATCH 06/12] btrfs: Convert from readpages to readahead Matthew Wilcox
@ 2020-01-29  0:46   ` Dave Chinner
  2020-01-30  8:09     ` Matthew Wilcox
  0 siblings, 1 reply; 34+ messages in thread
From: Dave Chinner @ 2020-01-29  0:46 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs

On Fri, Jan 24, 2020 at 05:35:47PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Use the new readahead operation in btrfs
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: linux-btrfs@vger.kernel.org
> ---
>  fs/btrfs/extent_io.c | 15 ++++-----------
>  fs/btrfs/extent_io.h |  2 +-
>  fs/btrfs/inode.c     | 18 +++++++++---------
>  3 files changed, 14 insertions(+), 21 deletions(-)
> 
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 2f4802f405a2..b1e2acbec165 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -4283,7 +4283,7 @@ int extent_writepages(struct address_space *mapping,
>  	return ret;
>  }
>  
> -int extent_readpages(struct address_space *mapping, struct list_head *pages,
> +unsigned extent_readahead(struct address_space *mapping, pgoff_t start,
>  		     unsigned nr_pages)
>  {
>  	struct bio *bio = NULL;
> @@ -4294,20 +4294,13 @@ int extent_readpages(struct address_space *mapping, struct list_head *pages,
>  	int nr = 0;
>  	u64 prev_em_start = (u64)-1;
>  
> -	while (!list_empty(pages)) {
> +	while (nr_pages) {
>  		u64 contig_end = 0;
>  
> -		for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) {
> -			struct page *page = lru_to_page(pages);
> +		for (nr = 0; nr < ARRAY_SIZE(pagepool) && nr_pages--;) {

What is stopping nr_pages from going negative here, and then looping
forever on the outer nr_pages loop? Perhaps "while(nr_pages > 0) {"
would be better there?

-Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 07/12] erofs: Convert uncompressed files from readpages to readahead
  2020-01-25  1:35 ` [PATCH 07/12] erofs: Convert uncompressed files " Matthew Wilcox
  2020-01-25  1:53   ` Gao Xiang
@ 2020-01-29  0:57   ` Dave Chinner
  2020-01-30  8:10     ` Matthew Wilcox
  1 sibling, 1 reply; 34+ messages in thread
From: Dave Chinner @ 2020-01-29  0:57 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-erofs

On Fri, Jan 24, 2020 at 05:35:48PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Use the new readahead operation in erofs.  Fix what I believe to be
> a refcounting bug in the error case.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: linux-erofs@lists.ozlabs.org
> ---
>  fs/erofs/data.c              | 34 ++++++++++++++--------------------
>  fs/erofs/zdata.c             |  2 +-
>  include/trace/events/erofs.h |  6 +++---
>  3 files changed, 18 insertions(+), 24 deletions(-)
> 
> diff --git a/fs/erofs/data.c b/fs/erofs/data.c
> index fc3a8d8064f8..335c1ab05312 100644
> --- a/fs/erofs/data.c
> +++ b/fs/erofs/data.c
> @@ -280,42 +280,36 @@ static int erofs_raw_access_readpage(struct file *file, struct page *page)
>  	return 0;
>  }
>  
> -static int erofs_raw_access_readpages(struct file *filp,
> +static unsigned erofs_raw_access_readahead(struct file *file,
>  				      struct address_space *mapping,
> -				      struct list_head *pages,
> +				      pgoff_t start,
>  				      unsigned int nr_pages)
>  {
>  	erofs_off_t last_block;
>  	struct bio *bio = NULL;
> -	gfp_t gfp = readahead_gfp_mask(mapping);
> -	struct page *page = list_last_entry(pages, struct page, lru);
>  
> -	trace_erofs_readpages(mapping->host, page, nr_pages, true);
> +	trace_erofs_readpages(mapping->host, start, nr_pages, true);
>  
>  	for (; nr_pages; --nr_pages) {
> -		page = list_entry(pages->prev, struct page, lru);
> +		struct page *page = readahead_page(mapping, start++);
>  
>  		prefetchw(&page->flags);
> -		list_del(&page->lru);
>  
> -		if (!add_to_page_cache_lru(page, mapping, page->index, gfp)) {
> -			bio = erofs_read_raw_page(bio, mapping, page,
> -						  &last_block, nr_pages, true);
> +		bio = erofs_read_raw_page(bio, mapping, page, &last_block,
> +				nr_pages, true);
>  
> -			/* all the page errors are ignored when readahead */
> -			if (IS_ERR(bio)) {
> -				pr_err("%s, readahead error at page %lu of nid %llu\n",
> -				       __func__, page->index,
> -				       EROFS_I(mapping->host)->nid);
> +		/* all the page errors are ignored when readahead */
> +		if (IS_ERR(bio)) {
> +			pr_err("%s, readahead error at page %lu of nid %llu\n",
> +			       __func__, page->index,
> +			       EROFS_I(mapping->host)->nid);
>  
> -				bio = NULL;
> -			}
> +			bio = NULL;
> +			put_page(page);
>  		}
>  
> -		/* pages could still be locked */
>  		put_page(page);

A double put_page() on error?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 11/12] fuse: Convert from readpages to readahead
  2020-01-25  1:35 ` [PATCH 11/12] fuse: " Matthew Wilcox
@ 2020-01-29  1:08   ` Dave Chinner
  2020-01-30 21:35     ` Matthew Wilcox
  2020-01-29 10:50   ` Miklos Szeredi
  1 sibling, 1 reply; 34+ messages in thread
From: Dave Chinner @ 2020-01-29  1:08 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-kernel

On Fri, Jan 24, 2020 at 05:35:52PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Use the new readahead operation in fuse.  Switching away from the
> read_cache_pages() helper gets rid of an implicit call to put_page(),
> so we can get rid of the get_page() call in fuse_readpages_fill().
> We can also get rid of the call to fuse_wait_on_page_writeback() as
> this page is newly allocated and so cannot be under writeback.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
.....
> @@ -968,19 +962,24 @@ static int fuse_readpages(struct file *file, struct address_space *mapping,
>  	data.max_pages = min_t(unsigned int, nr_pages, fc->max_pages);
>  ;
>  	data.ia = fuse_io_alloc(NULL, data.max_pages);
> -	err = -ENOMEM;
>  	if (!data.ia)
>  		goto out;
>  
> -	err = read_cache_pages(mapping, pages, fuse_readpages_fill, &data);
> -	if (!err) {
> -		if (data.ia->ap.num_pages)
> -			fuse_send_readpages(data.ia, file);
> -		else
> -			fuse_io_free(data.ia);
> +	while (nr_pages--) {
> +		struct page *page = readahead_page(mapping, start++);
> +		int err = fuse_readpages_fill(&data, page);
> +
> +		if (!err)
> +			continue;
> +		nr_pages++;
> +		goto out;
>  	}

That's some pretty convoluted logic. Perhaps:

	for (; nr_pages > 0 ; nr_pages--) {
		struct page *page = readahead_page(mapping, start++);

		if (fuse_readpages_fill(&data, page))
			goto out;
	}

-Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 12/12] iomap: Convert from readpages to readahead
  2020-01-25  1:35 ` [PATCH 12/12] iomap: " Matthew Wilcox
@ 2020-01-29  1:38   ` Dave Chinner
  2020-01-31  9:44     ` Matthew Wilcox
  0 siblings, 1 reply; 34+ messages in thread
From: Dave Chinner @ 2020-01-29  1:38 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-xfs

On Fri, Jan 24, 2020 at 05:35:53PM -0800, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> Use the new readahead operation in XFS and iomap.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: linux-xfs@vger.kernel.org

....
> +unsigned
> +iomap_readahead(struct address_space *mapping, pgoff_t start,
>  		unsigned nr_pages, const struct iomap_ops *ops)
>  {
>  	struct iomap_readpage_ctx ctx = {
> -		.pages		= pages,
>  		.is_readahead	= true,
>  	};
> -	loff_t pos = page_offset(list_entry(pages->prev, struct page, lru));
> -	loff_t last = page_offset(list_entry(pages->next, struct page, lru));
> -	loff_t length = last - pos + PAGE_SIZE, ret = 0;
> +	loff_t pos = start * PAGE_SIZE;
> +	loff_t length = nr_pages * PAGE_SIZE;
>  
> -	trace_iomap_readpages(mapping->host, nr_pages);
> +	trace_iomap_readahead(mapping->host, nr_pages);
>  
>  	while (length > 0) {
> -		ret = iomap_apply(mapping->host, pos, length, 0, ops,
> -				&ctx, iomap_readpages_actor);
> +		loff_t ret = iomap_apply(mapping->host, pos, length, 0, ops,
> +				&ctx, iomap_readahead_actor);
>  		if (ret <= 0) {
>  			WARN_ON_ONCE(ret == 0);
> -			goto done;
> +			break;
>  		}
>  		pos += ret;
>  		length -= ret;
>  	}
> -	ret = 0;
> -done:
> +
>  	if (ctx.bio)
>  		submit_bio(ctx.bio);
> -	if (ctx.cur_page) {
> -		if (!ctx.cur_page_in_bio)
> -			unlock_page(ctx.cur_page);
> +	if (ctx.cur_page && ctx.cur_page_in_bio)
>  		put_page(ctx.cur_page);
> -	}
>  
> -	/*
> -	 * Check that we didn't lose a page due to the arcance calling
> -	 * conventions..
> -	 */
> -	WARN_ON_ONCE(!ret && !list_empty(ctx.pages));
> -	return ret;
> +	return length / PAGE_SIZE;

Took me quite some time to get my head around whether this was
correct or not.

I'm still not certain in the cases where block size != page size and
we've got an extent boundary in the middle of the page and had a
read error on the second extent in the page. In this case,
ctx.cur_page_in_bio is true so we drop the readahead reference to
the page. Also, length is not a multiple of page size, and so the
nr_pages value returned includes the partial page that we have IO
underway on.

That, I think, leads to both a double unlock and a double put_page()
of the partial page in question.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 11/12] fuse: Convert from readpages to readahead
  2020-01-25  1:35 ` [PATCH 11/12] fuse: " Matthew Wilcox
  2020-01-29  1:08   ` Dave Chinner
@ 2020-01-29 10:50   ` Miklos Szeredi
  2020-01-30  7:26     ` Matthew Wilcox
  1 sibling, 1 reply; 34+ messages in thread
From: Miklos Szeredi @ 2020-01-29 10:50 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-kernel

On Sat, Jan 25, 2020 at 2:36 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> Use the new readahead operation in fuse.  Switching away from the
> read_cache_pages() helper gets rid of an implicit call to put_page(),
> so we can get rid of the get_page() call in fuse_readpages_fill().
> We can also get rid of the call to fuse_wait_on_page_writeback() as
> this page is newly allocated and so cannot be under writeback.

Not sure.  fuse_wait_on_page_writeback() waits not on the page but the
byte range indicated by the index.

Fuse writeback goes through some hoops to prevent reclaim-related
deadlocks, which means that the byte range could still be under
writeback while the page belonging to that range has already been
reclaimed.  This fuse_wait_on_page_writeback() is trying to serialize reads
against such writes.

Thanks,
Miklos

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 11/12] fuse: Convert from readpages to readahead
  2020-01-29 10:50   ` Miklos Szeredi
@ 2020-01-30  7:26     ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-30  7:26 UTC (permalink / raw)
  To: Miklos Szeredi; +Cc: linux-fsdevel, linux-mm, linux-kernel

On Wed, Jan 29, 2020 at 11:50:37AM +0100, Miklos Szeredi wrote:
> On Sat, Jan 25, 2020 at 2:36 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> >
> > Use the new readahead operation in fuse.  Switching away from the
> > read_cache_pages() helper gets rid of an implicit call to put_page(),
> > so we can get rid of the get_page() call in fuse_readpages_fill().
> > We can also get rid of the call to fuse_wait_on_page_writeback() as
> > this page is newly allocated and so cannot be under writeback.
> 
> Not sure.  fuse_wait_on_page_writeback() waits not on the page but the
> byte range indicated by the index.
> 
> Fuse writeback goes through some hoops to prevent reclaim-related
> deadlocks, which means that the byte range could still be under
> writeback while the page belonging to that range has already been
> reclaimed.  This fuse_wait_on_page_writeback() is trying to serialize reads
> against such writes.

Thanks.  I'll put that back in.
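
Roughly like this, I expect (just a sketch; the loop shape itself is still
being discussed with Dave in the other subthread):

	while (nr_pages) {
		struct page *page = readahead_page(mapping, start++);

		/*
		 * The page is newly allocated, but the byte range it covers
		 * may still be under writeback, so keep serialising against
		 * that before filling the read request.
		 */
		fuse_wait_on_page_writeback(inode, page->index);

		if (fuse_readpages_fill(&data, page) != 0)
			goto out;
		nr_pages--;
	}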

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 04/12] mm: Add readahead address space operation
  2020-01-29  0:24   ` Dave Chinner
@ 2020-01-30  8:00     ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-30  8:00 UTC (permalink / raw)
  To: Dave Chinner
  Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs, linux-erofs,
	linux-ext4, linux-f2fs-devel, linux-xfs, cluster-devel,
	ocfs2-devel

On Wed, Jan 29, 2020 at 11:24:56AM +1100, Dave Chinner wrote:
> On Fri, Jan 24, 2020 at 05:35:45PM -0800, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> > 
> > This replaces ->readpages with a saner interface:
> >  - Return the number of pages not read instead of an ignored error code.
> >  - Pages are already in the page cache when ->readahead is called.
> >  - Implementation looks up the pages in the page cache instead of
> >    having them passed in a linked list.
> ....
> > diff --git a/mm/readahead.c b/mm/readahead.c
> > index 5a6676640f20..6d65dae6dad0 100644
> > --- a/mm/readahead.c
> > +++ b/mm/readahead.c
> > @@ -121,7 +121,18 @@ static void read_pages(struct address_space *mapping, struct file *filp,
> >  
> >  	blk_start_plug(&plug);
> >  
> > -	if (mapping->a_ops->readpages) {
> > +	if (mapping->a_ops->readahead) {
> > +		unsigned left = mapping->a_ops->readahead(filp, mapping,
> > +				start, nr_pages);
> > +
> > +		while (left) {
> > +			struct page *page = readahead_page(mapping,
> > +					start + nr_pages - left - 1);
> 
> Off by one? start = 2, nr_pages = 2, left = 1, this looks up the
> page at index 2, which is the one we issued IO on, not the one we
> "left behind" which is at index 3.

Yup.  I originally had:

		while (left--) ...

decided that was too confusing and didn't quite complete that thought.
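
Doing the decrement before computing the index should sort it out, e.g.
(sketch only; assuming the pages left behind just need to be unlocked and
have the reference from readahead_page() dropped):

	if (mapping->a_ops->readahead) {
		unsigned left = mapping->a_ops->readahead(filp, mapping,
				start, nr_pages);

		while (left--) {
			struct page *page = readahead_page(mapping,
					start + nr_pages - left - 1);

			unlock_page(page);
			put_page(page);
		}
	}

With your example (start = 2, nr_pages = 2, left = 1) that now looks up
index 3.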

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 06/12] btrfs: Convert from readpages to readahead
  2020-01-29  0:46   ` Dave Chinner
@ 2020-01-30  8:09     ` Matthew Wilcox
  2020-01-31  2:17       ` Dave Chinner
  0 siblings, 1 reply; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-30  8:09 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs

On Wed, Jan 29, 2020 at 11:46:09AM +1100, Dave Chinner wrote:
> On Fri, Jan 24, 2020 at 05:35:47PM -0800, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> > 
> > Use the new readahead operation in btrfs
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > Cc: linux-btrfs@vger.kernel.org
> > ---
> >  fs/btrfs/extent_io.c | 15 ++++-----------
> >  fs/btrfs/extent_io.h |  2 +-
> >  fs/btrfs/inode.c     | 18 +++++++++---------
> >  3 files changed, 14 insertions(+), 21 deletions(-)
> > 
> > diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> > index 2f4802f405a2..b1e2acbec165 100644
> > --- a/fs/btrfs/extent_io.c
> > +++ b/fs/btrfs/extent_io.c
> > @@ -4283,7 +4283,7 @@ int extent_writepages(struct address_space *mapping,
> >  	return ret;
> >  }
> >  
> > -int extent_readpages(struct address_space *mapping, struct list_head *pages,
> > +unsigned extent_readahead(struct address_space *mapping, pgoff_t start,
> >  		     unsigned nr_pages)
> >  {
> >  	struct bio *bio = NULL;
> > @@ -4294,20 +4294,13 @@ int extent_readpages(struct address_space *mapping, struct list_head *pages,
> >  	int nr = 0;
> >  	u64 prev_em_start = (u64)-1;
> >  
> > -	while (!list_empty(pages)) {
> > +	while (nr_pages) {
> >  		u64 contig_end = 0;
> >  
> > -		for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) {
> > -			struct page *page = lru_to_page(pages);
> > +		for (nr = 0; nr < ARRAY_SIZE(pagepool) && nr_pages--;) {
> 
> What is stopping nr_pages from going negative here, and then looping
> forever on the outer nr_pages loop? Perhaps "while(nr_pages > 0) {"
> would be better there?

Ugh, nr_pages is unsigned, so that's no good.  Maybe make this a more
conventional loop ...

        while (nr_pages) {
                u64 contig_end = 0;

                for (nr = 0; nr < ARRAY_SIZE(pagepool); nr++) {
                        struct page *page = readahead_page(mapping, start++);

                        prefetchw(&page->flags);
                        pagepool[nr] = page;
                        contig_end = page_offset(page) + PAGE_SIZE - 1;
                        if (--nr_pages == 0)
                                break;
                }


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 07/12] erofs: Convert uncompressed files from readpages to readahead
  2020-01-29  0:57   ` Dave Chinner
@ 2020-01-30  8:10     ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-30  8:10 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-erofs

On Wed, Jan 29, 2020 at 11:57:41AM +1100, Dave Chinner wrote:
> > +			bio = NULL;
> > +			put_page(page);
> >  		}
> >  
> > -		/* pages could still be locked */
> >  		put_page(page);
> 
> A double put_page() on error?

Yup, already fixed when Gao Xiang pointed it out ;-)
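
For reference, the loop then only changes in the error branch (sketch):

	for (; nr_pages; --nr_pages) {
		struct page *page = readahead_page(mapping, start++);

		prefetchw(&page->flags);

		bio = erofs_read_raw_page(bio, mapping, page, &last_block,
				nr_pages, true);

		/* all the page errors are ignored when readahead */
		if (IS_ERR(bio)) {
			pr_err("%s, readahead error at page %lu of nid %llu\n",
			       __func__, page->index,
			       EROFS_I(mapping->host)->nid);
			bio = NULL;
		}

		/* drop only the reference taken by readahead_page() */
		put_page(page);
	}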

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 11/12] fuse: Convert from readpages to readahead
  2020-01-29  1:08   ` Dave Chinner
@ 2020-01-30 21:35     ` Matthew Wilcox
  2020-01-31  2:19       ` Dave Chinner
  0 siblings, 1 reply; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-30 21:35 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-fsdevel, linux-mm, linux-kernel

On Wed, Jan 29, 2020 at 12:08:29PM +1100, Dave Chinner wrote:
> On Fri, Jan 24, 2020 at 05:35:52PM -0800, Matthew Wilcox wrote:
> > +	while (nr_pages--) {
> > +		struct page *page = readahead_page(mapping, start++);
> > +		int err = fuse_readpages_fill(&data, page);
> > +
> > +		if (!err)
> > +			continue;
> > +		nr_pages++;
> > +		goto out;
> >  	}
> 
> That's some pretty convoluted logic. Perhaps:
> 
> 	for (; nr_pages > 0 ; nr_pages--) {
> 		struct page *page = readahead_page(mapping, start++);
> 
> 		if (fuse_readpages_fill(&data, page))
> 			goto out;
> 	}

I have a bit of an aversion to that style of for loop ... I like this
instead:

        while (nr_pages) {
                struct page *page = readahead_page(mapping, start++);

                if (fuse_readpages_fill(&data, page) != 0)
                        goto out;
                nr_pages--;
        }


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 06/12] btrfs: Convert from readpages to readahead
  2020-01-30  8:09     ` Matthew Wilcox
@ 2020-01-31  2:17       ` Dave Chinner
  0 siblings, 0 replies; 34+ messages in thread
From: Dave Chinner @ 2020-01-31  2:17 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs

On Thu, Jan 30, 2020 at 12:09:39AM -0800, Matthew Wilcox wrote:
> On Wed, Jan 29, 2020 at 11:46:09AM +1100, Dave Chinner wrote:
> > On Fri, Jan 24, 2020 at 05:35:47PM -0800, Matthew Wilcox wrote:
> > > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> > > 
> > > Use the new readahead operation in btrfs
> > > 
> > > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > > Cc: linux-btrfs@vger.kernel.org
> > > ---
> > >  fs/btrfs/extent_io.c | 15 ++++-----------
> > >  fs/btrfs/extent_io.h |  2 +-
> > >  fs/btrfs/inode.c     | 18 +++++++++---------
> > >  3 files changed, 14 insertions(+), 21 deletions(-)
> > > 
> > > diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> > > index 2f4802f405a2..b1e2acbec165 100644
> > > --- a/fs/btrfs/extent_io.c
> > > +++ b/fs/btrfs/extent_io.c
> > > @@ -4283,7 +4283,7 @@ int extent_writepages(struct address_space *mapping,
> > >  	return ret;
> > >  }
> > >  
> > > -int extent_readpages(struct address_space *mapping, struct list_head *pages,
> > > +unsigned extent_readahead(struct address_space *mapping, pgoff_t start,
> > >  		     unsigned nr_pages)
> > >  {
> > >  	struct bio *bio = NULL;
> > > @@ -4294,20 +4294,13 @@ int extent_readpages(struct address_space *mapping, struct list_head *pages,
> > >  	int nr = 0;
> > >  	u64 prev_em_start = (u64)-1;
> > >  
> > > -	while (!list_empty(pages)) {
> > > +	while (nr_pages) {
> > >  		u64 contig_end = 0;
> > >  
> > > -		for (nr = 0; nr < ARRAY_SIZE(pagepool) && !list_empty(pages);) {
> > > -			struct page *page = lru_to_page(pages);
> > > +		for (nr = 0; nr < ARRAY_SIZE(pagepool) && nr_pages--;) {
> > 
> > What is stopping nr_pages from going negative here, and then looping
> > forever on the outer nr_pages loop? Perhaps "while(nr_pages > 0) {"
> > would be better there?
> 
> Ugh, nr_pages is unsigned, so that's no good.  Maybe make this a more
> conventional loop ...
> 
>         while (nr_pages) {
>                 u64 contig_end = 0;
> 
>                 for (nr = 0; nr < ARRAY_SIZE(pagepool); nr++) {
>                         struct page *page = readahead_page(mapping, start++);
> 
>                         prefetchw(&page->flags);
>                         pagepool[nr] = page;
>                         contig_end = page_offset(page) + PAGE_SIZE - 1;
>                         if (--nr_pages == 0)
>                                 break;
>                 }

Looks like it solves the problem :)

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 34+ messages in thread
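
The wrap-around Dave is worried about is easy to demonstrate in isolation:
a post-decrement in a loop condition still fires on the final, failing test,
so an unsigned counter ends up at UINT_MAX rather than 0, and an outer
while (nr_pages) check then never terminates.  A standalone illustration in
plain C (the real bug sat inside a compound for-loop condition, simplified
here):

        #include <stdio.h>

        int main(void)
        {
                unsigned nr_pages = 3;
                unsigned consumed = 0;

                /* Buggy shape: the condition decrements even on the final test. */
                while (nr_pages--)
                        consumed++;

                /* nr_pages is now UINT_MAX, not 0; an outer while (nr_pages) spins. */
                printf("consumed=%u nr_pages=%u\n", consumed, nr_pages);

                /* Fixed shape from the thread: test for zero, decrement inside. */
                nr_pages = 3;
                while (nr_pages) {
                        consumed++;
                        if (--nr_pages == 0)
                                break;
                }
                printf("nr_pages=%u\n", nr_pages);      /* prints 0 */
                return 0;
        }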

* Re: [PATCH 11/12] fuse: Convert from readpages to readahead
  2020-01-30 21:35     ` Matthew Wilcox
@ 2020-01-31  2:19       ` Dave Chinner
  0 siblings, 0 replies; 34+ messages in thread
From: Dave Chinner @ 2020-01-31  2:19 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-fsdevel, linux-mm, linux-kernel

On Thu, Jan 30, 2020 at 01:35:33PM -0800, Matthew Wilcox wrote:
> On Wed, Jan 29, 2020 at 12:08:29PM +1100, Dave Chinner wrote:
> > On Fri, Jan 24, 2020 at 05:35:52PM -0800, Matthew Wilcox wrote:
> > > +	while (nr_pages--) {
> > > +		struct page *page = readahead_page(mapping, start++);
> > > +		int err = fuse_readpages_fill(&data, page);
> > > +
> > > +		if (!err)
> > > +			continue;
> > > +		nr_pages++;
> > > +		goto out;
> > >  	}
> > 
> > That's some pretty convoluted logic. Perhaps:
> > 
> > 	for (; nr_pages > 0 ; nr_pages--) {
> > 		struct page *page = readahead_page(mapping, start++);
> > 
> > 		if (fuse_readpages_fill(&data, page))
> > 			goto out;
> > 	}
> 
> I have a bit of an aversion to that style of for loop ... I like this
> instead:
> 
>         while (nr_pages) {
>                 struct page *page = readahead_page(mapping, start++);
> 
>                 if (fuse_readpages_fill(&data, page) != 0)
>                         goto out;
>                 nr_pages--;
>         }

yup, that's also fine.

-Dave.
-- 
Dave Chinner
david@fromorbit.com

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 12/12] iomap: Convert from readpages to readahead
  2020-01-29  1:38   ` Dave Chinner
@ 2020-01-31  9:44     ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-01-31  9:44 UTC (permalink / raw)
  To: Dave Chinner; +Cc: linux-fsdevel, linux-mm, linux-kernel, linux-xfs

On Wed, Jan 29, 2020 at 12:38:39PM +1100, Dave Chinner wrote:
> On Fri, Jan 24, 2020 at 05:35:53PM -0800, Matthew Wilcox wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> > Use the new readahead operation in XFS and iomap.
> > +	if (ctx.cur_page && ctx.cur_page_in_bio)
> >  		put_page(ctx.cur_page);
> > -	}
> >  
> > -	/*
> > -	 * Check that we didn't lose a page due to the arcance calling
> > -	 * conventions..
> > -	 */
> > -	WARN_ON_ONCE(!ret && !list_empty(ctx.pages));
> > -	return ret;
> > +	return length / PAGE_SIZE;
> 
> Took me quite some time to get my head around whether this was
> correct or not.

Yes.  Unfortunately, this is the most complex of the conversions ;-(

> I'm still not certain about the cases where block size != page size and
> we've got an extent boundary in the middle of the page and had a
> read error on the second extent in the page. In this case,
> ctx.cur_page_in_bio is true so we drop the readahead reference to
> the page. Also, length is not a multiple of page size, and so the
> nr_pages value returned includes the partial page that we have IO
> underway on.
> 
> That, I think, leads to both a double unlock and a double put_page()
> of the partial page in question.

But C division rounds down.  So we neither unlock nor put_page() the
page that was in the bio ... do we?

^ permalink raw reply	[flat|nested] 34+ messages in thread
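
To make the arithmetic concrete: length is the number of bytes the readahead
never consumed, and unsigned division truncates, so a trailing page that was
only partially consumed (because I/O is already in flight on its first
extent) simply does not show up in the count handed back to the caller.
A worked example with assumed numbers (PAGE_SIZE taken as 4096):

        #include <stdio.h>

        #define PAGE_SIZE 4096UL        /* assumed for the example */

        int main(void)
        {
                /*
                 * Hypothetical remainder after an error: two pages that were
                 * never touched, plus the unconsumed tail of a page whose
                 * first extent already has I/O in flight.
                 */
                unsigned long length = 2 * PAGE_SIZE + 1536;

                /*
                 * Truncating division: the partially-consumed page is not
                 * reported back as unread, so the caller will not unlock it
                 * or drop a reference on it a second time.
                 */
                printf("%lu\n", length / PAGE_SIZE);    /* prints 2 */
                return 0;
        }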

* Re: [PATCH 04/12] mm: Add readahead address space operation
  2020-01-25  3:57   ` Randy Dunlap
@ 2020-02-01  0:25     ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-02-01  0:25 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs, linux-erofs,
	linux-ext4, linux-f2fs-devel, linux-xfs, cluster-devel,
	ocfs2-devel

On Fri, Jan 24, 2020 at 07:57:40PM -0800, Randy Dunlap wrote:
> > +``readahead``
> > +	called by the VM to read pages associated with the address_space
> > +	object.  The pages are consecutive in the page cache and are
> > +        locked.  The implementation should decrement the page refcount after
> > +        attempting I/O on each page.  Usually the page will be unlocked by
> > +        the I/O completion handler.  If the function does not attempt I/O on
> > +        some pages, return the number of pages which were not read so the
> > +        common code can unlock the pages for you.
> > +
> 
> Please use consistent indentation (tabs).

This turned out not to be my fault.  The vim rst ... mode?  plugin?
Whatever it is, it's converting tabs to spaces!  To fix it, I had to
rename the file to .txt, make the edits, then rename it back.  This is
very poor behaviour.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 00/12] Change readahead API
  2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
                   ` (11 preceding siblings ...)
  2020-01-25  1:35 ` [PATCH 12/12] iomap: " Matthew Wilcox
@ 2020-02-13  4:38 ` Andrew Morton
  2020-02-13 13:43   ` Matthew Wilcox
  12 siblings, 1 reply; 34+ messages in thread
From: Andrew Morton @ 2020-02-13  4:38 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs, linux-erofs,
	linux-ext4, linux-f2fs-devel, linux-xfs, cluster-devel,
	ocfs2-devel

On Fri, 24 Jan 2020 17:35:41 -0800 Matthew Wilcox <willy@infradead.org> wrote:

> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> 
> This series adds a readahead address_space operation to eventually
> replace the readpages operation.  The key difference is that
> pages are added to the page cache as they are allocated (and
> then looked up by the filesystem) instead of passing them on a
> list to the readpages operation and having the filesystem add
> them to the page cache.  It's a net reduction in code for each
> implementation, more efficient than walking a list, and solves
> the direct-write vs buffered-read problem reported by yu kuai at
> https://lore.kernel.org/linux-fsdevel/20200116063601.39201-1-yukuai3@huawei.com/

Unclear which patch fixes this and how it does so?

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [PATCH 00/12] Change readahead API
  2020-02-13  4:38 ` [PATCH 00/12] Change readahead API Andrew Morton
@ 2020-02-13 13:43   ` Matthew Wilcox
  0 siblings, 0 replies; 34+ messages in thread
From: Matthew Wilcox @ 2020-02-13 13:43 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-fsdevel, linux-mm, linux-kernel, linux-btrfs, linux-erofs,
	linux-ext4, linux-f2fs-devel, linux-xfs, cluster-devel,
	ocfs2-devel, Mark Fasheh, Joel Becker, Joseph Qi, Bob Peterson,
	Andreas Gruenbacher

On Wed, Feb 12, 2020 at 08:38:52PM -0800, Andrew Morton wrote:
> On Fri, 24 Jan 2020 17:35:41 -0800 Matthew Wilcox <willy@infradead.org> wrote:
> 
> > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> > 
> > This series adds a readahead address_space operation to eventually
> > replace the readpages operation.  The key difference is that
> > pages are added to the page cache as they are allocated (and
> > then looked up by the filesystem) instead of passing them on a
> > list to the readpages operation and having the filesystem add
> > them to the page cache.  It's a net reduction in code for each
> > implementation, more efficient than walking a list, and solves
> > the direct-write vs buffered-read problem reported by yu kuai at
> > https://lore.kernel.org/linux-fsdevel/20200116063601.39201-1-yukuai3@huawei.com/
> 
> Unclear which patch fixes this and how it does so?

I suppose the problem isn't fixed until patch 13/13 is applied.
What yu kuai is seeing is a race where readahead allocates a page,
then passes it to iomap_readpages, which calls xfs_read_iomap_begin()
to look up the extent.  Then thread 2 does DIO that modifies the
extent, because there is nothing to indicate that thread 1 is still using it.
With this patch series, the readpages code puts the locked pages in the
cache before calling iomap_readpages, so any racing write will block on
the locked page until readahead is completed.

If you're tempted to put this into -mm, I have a couple of new changes;
one to fix a kernel-doc warning for mpage_readahead() and one to add
kernel-doc for iomap_readahead():

+++ b/fs/mpage.c
@@ -339,9 +339,7 @@
 
 /**
  * mpage_readahead - start reads against pages
- * @mapping: the address_space
- * @start: The number of the first page to read.
- * @nr_pages: The number of consecutive pages to read.
+ * @rac: Describes which pages to read.
  * @get_block: The filesystem's block mapper function.
  *
  * This function walks the pages and the blocks within each page, building and

+++ b/fs/iomap/buffered-io.c
@@ -395,6 +395,21 @@
 	return done;
 }
 
+/**
+ * iomap_readahead - Attempt to read pages from a file.
+ * @rac: Describes the pages to be read.
+ * @ops: The operations vector for the filesystem.
+ *
+ * This function is for filesystems to call to implement their readahead
+ * address_space operation.
+ *
+ * Context: The file is pinned by the caller, and the pages to be read are
+ * all locked and have an elevated refcount.  This function will unlock
+ * the pages (once I/O has completed on them, or I/O has been determined to
+ * not be necessary).  It will also decrease the refcount once the pages
+ * have been submitted for I/O.  After this point, the page may be removed
+ * from the page cache, and should not be referenced.
+ */
 void iomap_readahead(struct readahead_control *rac, const struct iomap_ops *ops)
 {
 	struct inode *inode = rac->mapping->host;

I'll do a v6 with those changes soon, but I would really like a bit more
review from filesystem people, particularly ocfs2 and gfs2.

^ permalink raw reply	[flat|nested] 34+ messages in thread
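
For anyone wiring a filesystem up against the helper documented above, the
call site becomes a one-liner once ->readahead takes a struct
readahead_control.  A minimal sketch with placeholder example_* names
(modelled on the xfs conversion in this series, not copied from it):

        #include <linux/fs.h>
        #include <linux/iomap.h>
        #include <linux/pagemap.h>

        /* Placeholder ops: a real filesystem supplies its own iomap_begin/end. */
        static const struct iomap_ops example_iomap_ops;

        static void example_readahead(struct readahead_control *rac)
        {
                iomap_readahead(rac, &example_iomap_ops);
        }

        static const struct address_space_operations example_aops = {
                .readahead      = example_readahead,
                /* .readpage, .writepages, ... as before */
        };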

end of thread, other threads:[~2020-02-13 13:43 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-25  1:35 [PATCH 00/12] Change readahead API Matthew Wilcox
2020-01-25  1:35 ` [PATCH 01/12] mm: Fix the return type of __do_page_cache_readahead Matthew Wilcox
2020-01-25  1:35 ` [PATCH 02/12] readahead: Ignore return value of ->readpages Matthew Wilcox
2020-01-25  1:35 ` [PATCH 03/12] readahead: Put pages in cache earlier Matthew Wilcox
2020-01-25 19:44   ` Matthew Wilcox
2020-01-25  1:35 ` [PATCH 04/12] mm: Add readahead address space operation Matthew Wilcox
2020-01-25  3:57   ` Randy Dunlap
2020-02-01  0:25     ` Matthew Wilcox
2020-01-29  0:24   ` Dave Chinner
2020-01-30  8:00     ` Matthew Wilcox
2020-01-25  1:35 ` [PATCH 05/12] fs: Convert mpage_readpages to mpage_readahead Matthew Wilcox
2020-01-25  1:35 ` [PATCH 06/12] btrfs: Convert from readpages to readahead Matthew Wilcox
2020-01-29  0:46   ` Dave Chinner
2020-01-30  8:09     ` Matthew Wilcox
2020-01-31  2:17       ` Dave Chinner
2020-01-25  1:35 ` [PATCH 07/12] erofs: Convert uncompressed files " Matthew Wilcox
2020-01-25  1:53   ` Gao Xiang
2020-01-25 19:09     ` Matthew Wilcox
2020-01-29  0:57   ` Dave Chinner
2020-01-30  8:10     ` Matthew Wilcox
2020-01-25  1:35 ` [PATCH 08/12] erofs: Convert compressed " Matthew Wilcox
2020-01-25  1:35 ` [PATCH 09/12] ext4: Convert " Matthew Wilcox
2020-01-25  1:35 ` [PATCH 10/12] f2fs: " Matthew Wilcox
2020-01-25  1:35 ` [PATCH 11/12] fuse: " Matthew Wilcox
2020-01-29  1:08   ` Dave Chinner
2020-01-30 21:35     ` Matthew Wilcox
2020-01-31  2:19       ` Dave Chinner
2020-01-29 10:50   ` Miklos Szeredi
2020-01-30  7:26     ` Matthew Wilcox
2020-01-25  1:35 ` [PATCH 12/12] iomap: " Matthew Wilcox
2020-01-29  1:38   ` Dave Chinner
2020-01-31  9:44     ` Matthew Wilcox
2020-02-13  4:38 ` [PATCH 00/12] Change readahead API Andrew Morton
2020-02-13 13:43   ` Matthew Wilcox

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).