linux-fsdevel.vger.kernel.org archive mirror
* [PATCH 0/9] Readahead patches for 5.9/5.10
@ 2020-09-03 14:08 Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 1/9] Fix khugepaged's request size in collapse_file Matthew Wilcox (Oracle)
                   ` (8 more replies)
  0 siblings, 9 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle),
	David Howells, linux-mm, linux-fsdevel, Eric Biggers

Hi Andrew,

The first patch from David should go upstream soon as a bugfix.

The others are infrastructure for both the THP patchset and for the
fscache rewrite, so it'd be great to get those upstream early in 5.10.

David Howells (4):
  Fix khugepaged's request size in collapse_file
  mm/readahead: Pass readahead_control to force_page_cache_ra
  mm/filemap: Fold ra_submit into do_sync_mmap_readahead
  mm/readahead: Pass a file_ra_state into force_page_cache_ra

Matthew Wilcox (Oracle) (5):
  mm/readahead: Add DEFINE_READAHEAD
  mm/readahead: Make page_cache_ra_unbounded take a readahead_control
  mm/readahead: Make do_page_cache_ra take a readahead_control
  mm/readahead: Make ondemand_readahead take a readahead_control
  mm/readahead: Add page_cache_sync_ra and page_cache_async_ra

 fs/ext4/verity.c        |   4 +-
 fs/f2fs/verity.c        |   4 +-
 include/linux/pagemap.h |  72 ++++++++++++++++++----
 mm/filemap.c            |  10 ++--
 mm/internal.h           |  19 +++---
 mm/khugepaged.c         |   2 +-
 mm/readahead.c          | 130 +++++++++++++++-------------------------
 7 files changed, 127 insertions(+), 114 deletions(-)

-- 
2.28.0



* [PATCH 1/9] Fix khugepaged's request size in collapse_file
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 2/9] mm/readahead: Add DEFINE_READAHEAD Matthew Wilcox (Oracle)
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Howells, linux-mm, linux-fsdevel, Eric Biggers, Song Liu,
	Matthew Wilcox, Yang Shi, Pankaj Gupta

From: David Howells <dhowells@redhat.com>

collapse_file() in khugepaged passes PAGE_SIZE as the number of pages
to be read to page_cache_sync_readahead().  The intent was probably to
read a single page.  Fix it to use the number of pages to the end of
the window instead.
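
The last argument to page_cache_sync_readahead() is a number of pages,
not bytes, so passing PAGE_SIZE requested a 4096-page readahead (with
4KiB pages) rather than a single page.  As a rough sketch of the
corrected call in context (end here is the end of the collapse window,
start + HPAGE_PMD_NR in the current code):

	xas_unlock_irq(&xas);
	page_cache_sync_readahead(mapping, &file->f_ra, file, index,
				  end - index);	/* pages left in the window */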

Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Song Liu <songliubraving@fb.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Yang Shi <shy828301@gmail.com>
Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
---
 mm/khugepaged.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e749e568e1ea..cfa0dba5fd3b 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1709,7 +1709,7 @@ static void collapse_file(struct mm_struct *mm,
 				xas_unlock_irq(&xas);
 				page_cache_sync_readahead(mapping, &file->f_ra,
 							  file, index,
-							  PAGE_SIZE);
+							  end - index);
 				/* drain pagevecs to help isolate_lru_page() */
 				lru_add_drain();
 				page = find_lock_page(mapping, index);
-- 
2.28.0



* [PATCH 2/9] mm/readahead: Add DEFINE_READAHEAD
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 1/9] Fix khugepaged's request size in collapse_file Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a readahead_control Matthew Wilcox (Oracle)
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle),
	David Howells, linux-mm, linux-fsdevel, Eric Biggers

Allow for a more concise definition of a struct readahead_control.
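
For reference, a declaration such as

	DEFINE_READAHEAD(ractl, file, mapping, index);

expands to the open-coded initialiser it replaces:

	struct readahead_control ractl = {
		.file = file,
		.mapping = mapping,
		._index = index,
	};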

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 7 +++++++
 mm/readahead.c          | 6 +-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7de11dcd534d..19bba4360436 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -749,6 +749,13 @@ struct readahead_control {
 	unsigned int _batch_count;
 };
 
+#define DEFINE_READAHEAD(rac, f, m, i)					\
+	struct readahead_control rac = {				\
+		.file = f,						\
+		.mapping = m,						\
+		._index = i,						\
+	}
+
 /**
  * readahead_page - Get the next page to read.
  * @rac: The current readahead request.
diff --git a/mm/readahead.c b/mm/readahead.c
index 3c9a8dd7c56c..2126a2754e22 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -179,11 +179,7 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 {
 	LIST_HEAD(page_pool);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	struct readahead_control rac = {
-		.mapping = mapping,
-		.file = file,
-		._index = index,
-	};
+	DEFINE_READAHEAD(rac, file, mapping, index);
 	unsigned long i;
 
 	/*
-- 
2.28.0



* [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a readahead_control
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 1/9] Fix khugepaged's request size in collapse_file Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 2/9] mm/readahead: Add DEFINE_READAHEAD Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 19:22   ` Andrew Morton
  2020-09-03 14:08 ` [PATCH 4/9] mm/readahead: Make do_page_cache_ra " Matthew Wilcox (Oracle)
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle),
	David Howells, linux-mm, linux-fsdevel, Eric Biggers

Define it in the callers instead of in page_cache_ra_unbounded().
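
The effect on callers can be seen in the fs/ext4/verity.c hunk below:
the readahead_control is declared at the call site and passed down,

	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
	...
	page_cache_ra_unbounded(&ractl, num_ra_pages, 0);

replacing

	page_cache_readahead_unbounded(inode->i_mapping, NULL, index,
			num_ra_pages, 0);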

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/ext4/verity.c        |  4 ++--
 fs/f2fs/verity.c        |  4 ++--
 include/linux/pagemap.h |  5 ++---
 mm/readahead.c          | 30 ++++++++++++++----------------
 4 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/fs/ext4/verity.c b/fs/ext4/verity.c
index bbd5e7e0632b..5b7ba8f71153 100644
--- a/fs/ext4/verity.c
+++ b/fs/ext4/verity.c
@@ -349,6 +349,7 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
 					       pgoff_t index,
 					       unsigned long num_ra_pages)
 {
+	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
 	struct page *page;
 
 	index += ext4_verity_metadata_pos(inode) >> PAGE_SHIFT;
@@ -358,8 +359,7 @@ static struct page *ext4_read_merkle_tree_page(struct inode *inode,
 		if (page)
 			put_page(page);
 		else if (num_ra_pages > 1)
-			page_cache_readahead_unbounded(inode->i_mapping, NULL,
-					index, num_ra_pages, 0);
+			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
 		page = read_mapping_page(inode->i_mapping, index, NULL);
 	}
 	return page;
diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
index 9eb0dba851e8..054ec852b5ea 100644
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -228,6 +228,7 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
 					       pgoff_t index,
 					       unsigned long num_ra_pages)
 {
+	DEFINE_READAHEAD(ractl, NULL, inode->i_mapping, index);
 	struct page *page;
 
 	index += f2fs_verity_metadata_pos(inode) >> PAGE_SHIFT;
@@ -237,8 +238,7 @@ static struct page *f2fs_read_merkle_tree_page(struct inode *inode,
 		if (page)
 			put_page(page);
 		else if (num_ra_pages > 1)
-			page_cache_readahead_unbounded(inode->i_mapping, NULL,
-					index, num_ra_pages, 0);
+			page_cache_ra_unbounded(&ractl, num_ra_pages, 0);
 		page = read_mapping_page(inode->i_mapping, index, NULL);
 	}
 	return page;
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 19bba4360436..2b613c369a2f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -705,9 +705,8 @@ void page_cache_sync_readahead(struct address_space *, struct file_ra_state *,
 void page_cache_async_readahead(struct address_space *, struct file_ra_state *,
 		struct file *, struct page *, pgoff_t index,
 		unsigned long req_count);
-void page_cache_readahead_unbounded(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_count);
+void page_cache_ra_unbounded(struct readahead_control *,
+		unsigned long nr_to_read, unsigned long lookahead_count);
 
 /*
  * Like add_to_page_cache_locked, but used to add newly allocated pages:
diff --git a/mm/readahead.c b/mm/readahead.c
index 2126a2754e22..a444943781bb 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -158,10 +158,8 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages,
 }
 
 /**
- * page_cache_readahead_unbounded - Start unchecked readahead.
- * @mapping: File address space.
- * @file: This instance of the open file; used for authentication.
- * @index: First page index to read.
+ * page_cache_ra_unbounded - Start unchecked readahead.
+ * @ractl: Readahead control.
  * @nr_to_read: The number of pages to read.
  * @lookahead_size: Where to start the next readahead.
  *
@@ -173,13 +171,13 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages,
  * Context: File is referenced by caller.  Mutexes may be held by caller.
  * May sleep, but will not reenter filesystem to reclaim memory.
  */
-void page_cache_readahead_unbounded(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size)
+void page_cache_ra_unbounded(struct readahead_control *ractl,
+		unsigned long nr_to_read, unsigned long lookahead_size)
 {
+	struct address_space *mapping = ractl->mapping;
+	unsigned long index = readahead_index(ractl);
 	LIST_HEAD(page_pool);
 	gfp_t gfp_mask = readahead_gfp_mask(mapping);
-	DEFINE_READAHEAD(rac, file, mapping, index);
 	unsigned long i;
 
 	/*
@@ -200,7 +198,7 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 	for (i = 0; i < nr_to_read; i++) {
 		struct page *page = xa_load(&mapping->i_pages, index + i);
 
-		BUG_ON(index + i != rac._index + rac._nr_pages);
+		BUG_ON(index + i != ractl->_index + ractl->_nr_pages);
 
 		if (page && !xa_is_value(page)) {
 			/*
@@ -211,7 +209,7 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 			 * have a stable reference to this page, and it's
 			 * not worth getting one just for that.
 			 */
-			read_pages(&rac, &page_pool, true);
+			read_pages(ractl, &page_pool, true);
 			continue;
 		}
 
@@ -224,12 +222,12 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 		} else if (add_to_page_cache_lru(page, mapping, index + i,
 					gfp_mask) < 0) {
 			put_page(page);
-			read_pages(&rac, &page_pool, true);
+			read_pages(ractl, &page_pool, true);
 			continue;
 		}
 		if (i == nr_to_read - lookahead_size)
 			SetPageReadahead(page);
-		rac._nr_pages++;
+		ractl->_nr_pages++;
 	}
 
 	/*
@@ -237,10 +235,10 @@ void page_cache_readahead_unbounded(struct address_space *mapping,
 	 * uptodate then the caller will launch readpage again, and
 	 * will then handle the error.
 	 */
-	read_pages(&rac, &page_pool, false);
+	read_pages(ractl, &page_pool, false);
 	memalloc_nofs_restore(nofs);
 }
-EXPORT_SYMBOL_GPL(page_cache_readahead_unbounded);
+EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
 
 /*
  * __do_page_cache_readahead() actually reads a chunk of disk.  It allocates
@@ -252,6 +250,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
 		struct file *file, pgoff_t index, unsigned long nr_to_read,
 		unsigned long lookahead_size)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct inode *inode = mapping->host;
 	loff_t isize = i_size_read(inode);
 	pgoff_t end_index;	/* The last page we want to read */
@@ -266,8 +265,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
 	if (nr_to_read > end_index - index)
 		nr_to_read = end_index - index + 1;
 
-	page_cache_readahead_unbounded(mapping, file, index, nr_to_read,
-			lookahead_size);
+	page_cache_ra_unbounded(&ractl, nr_to_read, lookahead_size);
 }
 
 /*
-- 
2.28.0



* [PATCH 4/9] mm/readahead: Make do_page_cache_ra take a readahead_control
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2020-09-03 14:08 ` [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a readahead_control Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 5/9] mm/readahead: Make ondemand_readahead " Matthew Wilcox (Oracle)
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle),
	David Howells, linux-mm, linux-fsdevel, Eric Biggers

Rename __do_page_cache_readahead() to do_page_cache_ra() and call it
directly from ondemand_readahead() instead of indirecting via ra_submit().
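
With the readahead_control defined at the top of ondemand_readahead(),
the final submission no longer needs ra_submit()'s (mapping, file)
arguments; as in the hunk below, it becomes:

	ractl._index = ra->start;
	do_page_cache_ra(&ractl, ra->size, ra->async_size);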

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h  | 11 +++++------
 mm/readahead.c | 28 +++++++++++++++-------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 10c677655912..6aef85f62b9d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -51,18 +51,17 @@ void unmap_page_range(struct mmu_gather *tlb,
 
 void force_page_cache_readahead(struct address_space *, struct file *,
 		pgoff_t index, unsigned long nr_to_read);
-void __do_page_cache_readahead(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size);
+void do_page_cache_ra(struct readahead_control *,
+		unsigned long nr_to_read, unsigned long lookahead_size);
 
 /*
  * Submit IO for the read-ahead request in file_ra_state.
  */
 static inline void ra_submit(struct file_ra_state *ra,
-		struct address_space *mapping, struct file *filp)
+		struct address_space *mapping, struct file *file)
 {
-	__do_page_cache_readahead(mapping, filp,
-			ra->start, ra->size, ra->async_size);
+	DEFINE_READAHEAD(ractl, file, mapping, ra->start);
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 }
 
 /**
diff --git a/mm/readahead.c b/mm/readahead.c
index a444943781bb..577f180d9252 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -241,17 +241,16 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
 EXPORT_SYMBOL_GPL(page_cache_ra_unbounded);
 
 /*
- * __do_page_cache_readahead() actually reads a chunk of disk.  It allocates
+ * do_page_cache_ra() actually reads a chunk of disk.  It allocates
  * the pages first, then submits them for I/O. This avoids the very bad
  * behaviour which would occur if page allocations are causing VM writeback.
  * We really don't want to intermingle reads and writes like that.
  */
-void __do_page_cache_readahead(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read,
-		unsigned long lookahead_size)
+void do_page_cache_ra(struct readahead_control *ractl,
+		unsigned long nr_to_read, unsigned long lookahead_size)
 {
-	DEFINE_READAHEAD(ractl, file, mapping, index);
-	struct inode *inode = mapping->host;
+	struct inode *inode = ractl->mapping->host;
+	unsigned long index = readahead_index(ractl);
 	loff_t isize = i_size_read(inode);
 	pgoff_t end_index;	/* The last page we want to read */
 
@@ -265,7 +264,7 @@ void __do_page_cache_readahead(struct address_space *mapping,
 	if (nr_to_read > end_index - index)
 		nr_to_read = end_index - index + 1;
 
-	page_cache_ra_unbounded(&ractl, nr_to_read, lookahead_size);
+	page_cache_ra_unbounded(ractl, nr_to_read, lookahead_size);
 }
 
 /*
@@ -273,10 +272,11 @@ void __do_page_cache_readahead(struct address_space *mapping,
  * memory at once.
  */
 void force_page_cache_readahead(struct address_space *mapping,
-		struct file *filp, pgoff_t index, unsigned long nr_to_read)
+		struct file *file, pgoff_t index, unsigned long nr_to_read)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &filp->f_ra;
+	struct file_ra_state *ra = &file->f_ra;
 	unsigned long max_pages;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -294,7 +294,7 @@ void force_page_cache_readahead(struct address_space *mapping,
 
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
-		__do_page_cache_readahead(mapping, filp, index, this_chunk, 0);
+		do_page_cache_ra(&ractl, this_chunk, 0);
 
 		index += this_chunk;
 		nr_to_read -= this_chunk;
@@ -432,10 +432,11 @@ static int try_context_readahead(struct address_space *mapping,
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
 static void ondemand_readahead(struct address_space *mapping,
-		struct file_ra_state *ra, struct file *filp,
+		struct file_ra_state *ra, struct file *file,
 		bool hit_readahead_marker, pgoff_t index,
 		unsigned long req_size)
 {
+	DEFINE_READAHEAD(ractl, file, mapping, index);
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	unsigned long max_pages = ra->ra_pages;
 	unsigned long add_pages;
@@ -516,7 +517,7 @@ static void ondemand_readahead(struct address_space *mapping,
 	 * standalone, small random read
 	 * Read as is, and do not pollute the readahead state.
 	 */
-	__do_page_cache_readahead(mapping, filp, index, req_size, 0);
+	do_page_cache_ra(&ractl, req_size, 0);
 	return;
 
 initial_readahead:
@@ -542,7 +543,8 @@ static void ondemand_readahead(struct address_space *mapping,
 		}
 	}
 
-	ra_submit(ra, mapping, filp);
+	ractl._index = ra->start;
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 }
 
 /**
-- 
2.28.0



* [PATCH 5/9] mm/readahead: Make ondemand_readahead take a readahead_control
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2020-09-03 14:08 ` [PATCH 4/9] mm/readahead: Make do_page_cache_ra " Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 6/9] mm/readahead: Pass readahead_control to force_page_cache_ra Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Howells, linux-mm, linux-fsdevel, Eric Biggers, Matthew Wilcox

From: David Howells <dhowells@redhat.com>

Make ondemand_readahead() take a readahead_control struct in preparation
for making do_sync_mmap_readahead() pass down an RAC struct.
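
The new signature (from the hunk below) is:

	static void ondemand_readahead(struct readahead_control *ractl,
			struct file_ra_state *ra, bool hit_readahead_marker,
			unsigned long req_size);

with the mapping and start index recovered from the ractl via
ractl->mapping and readahead_index(ractl).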

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/readahead.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 577f180d9252..73110c4148f8 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -431,15 +431,14 @@ static int try_context_readahead(struct address_space *mapping,
 /*
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
-static void ondemand_readahead(struct address_space *mapping,
-		struct file_ra_state *ra, struct file *file,
-		bool hit_readahead_marker, pgoff_t index,
+static void ondemand_readahead(struct readahead_control *ractl,
+		struct file_ra_state *ra, bool hit_readahead_marker,
 		unsigned long req_size)
 {
-	DEFINE_READAHEAD(ractl, file, mapping, index);
-	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
+	struct backing_dev_info *bdi = inode_to_bdi(ractl->mapping->host);
 	unsigned long max_pages = ra->ra_pages;
 	unsigned long add_pages;
+	unsigned long index = readahead_index(ractl);
 	pgoff_t prev_index;
 
 	/*
@@ -477,7 +476,8 @@ static void ondemand_readahead(struct address_space *mapping,
 		pgoff_t start;
 
 		rcu_read_lock();
-		start = page_cache_next_miss(mapping, index + 1, max_pages);
+		start = page_cache_next_miss(ractl->mapping, index + 1,
+				max_pages);
 		rcu_read_unlock();
 
 		if (!start || start - index > max_pages)
@@ -510,14 +510,15 @@ static void ondemand_readahead(struct address_space *mapping,
 	 * Query the page cache and look for the traces(cached history pages)
 	 * that a sequential stream would leave behind.
 	 */
-	if (try_context_readahead(mapping, ra, index, req_size, max_pages))
+	if (try_context_readahead(ractl->mapping, ra, index, req_size,
+			max_pages))
 		goto readit;
 
 	/*
 	 * standalone, small random read
 	 * Read as is, and do not pollute the readahead state.
 	 */
-	do_page_cache_ra(&ractl, req_size, 0);
+	do_page_cache_ra(ractl, req_size, 0);
 	return;
 
 initial_readahead:
@@ -543,8 +544,8 @@ static void ondemand_readahead(struct address_space *mapping,
 		}
 	}
 
-	ractl._index = ra->start;
-	do_page_cache_ra(&ractl, ra->size, ra->async_size);
+	ractl->_index = ra->start;
+	do_page_cache_ra(ractl, ra->size, ra->async_size);
 }
 
 /**
@@ -564,6 +565,8 @@ void page_cache_sync_readahead(struct address_space *mapping,
 			       struct file_ra_state *ra, struct file *filp,
 			       pgoff_t index, unsigned long req_count)
 {
+	DEFINE_READAHEAD(ractl, filp, mapping, index);
+
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -578,7 +581,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
 	}
 
 	/* do read-ahead */
-	ondemand_readahead(mapping, ra, filp, false, index, req_count);
+	ondemand_readahead(&ractl, ra, false, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
 
@@ -602,6 +605,8 @@ page_cache_async_readahead(struct address_space *mapping,
 			   struct page *page, pgoff_t index,
 			   unsigned long req_count)
 {
+	DEFINE_READAHEAD(ractl, filp, mapping, index);
+
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -624,7 +629,7 @@ page_cache_async_readahead(struct address_space *mapping,
 		return;
 
 	/* do read-ahead */
-	ondemand_readahead(mapping, ra, filp, true, index, req_count);
+	ondemand_readahead(&ractl, ra, true, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_readahead);
 
-- 
2.28.0



* [PATCH 6/9] mm/readahead: Pass readahead_control to force_page_cache_ra
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
                   ` (4 preceding siblings ...)
  2020-09-03 14:08 ` [PATCH 5/9] mm/readahead: Make ondemand_readahead " Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 7/9] mm/readahead: Add page_cache_sync_ra and page_cache_async_ra Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Howells, linux-mm, linux-fsdevel, Eric Biggers, Matthew Wilcox

From: David Howells <dhowells@redhat.com>

Reimplement force_page_cache_readahead() as a wrapper around
force_page_cache_ra().  Pass the existing readahead_control from
page_cache_sync_readahead().
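
The old entry point survives as a trivial static inline in mm/internal.h
(see the hunk below), so callers that only have a (mapping, file, index)
triple keep working unchanged:

	static inline void force_page_cache_readahead(struct address_space *mapping,
			struct file *file, pgoff_t index, unsigned long nr_to_read)
	{
		DEFINE_READAHEAD(ractl, file, mapping, index);
		force_page_cache_ra(&ractl, nr_to_read);
	}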

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h  | 13 +++++++++----
 mm/readahead.c | 18 ++++++++++--------
 2 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 6aef85f62b9d..5533e85bd123 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -49,10 +49,15 @@ void unmap_page_range(struct mmu_gather *tlb,
 			     unsigned long addr, unsigned long end,
 			     struct zap_details *details);
 
-void force_page_cache_readahead(struct address_space *, struct file *,
-		pgoff_t index, unsigned long nr_to_read);
-void do_page_cache_ra(struct readahead_control *,
-		unsigned long nr_to_read, unsigned long lookahead_size);
+void do_page_cache_ra(struct readahead_control *, unsigned long nr_to_read,
+		unsigned long lookahead_size);
+void force_page_cache_ra(struct readahead_control *, unsigned long nr);
+static inline void force_page_cache_readahead(struct address_space *mapping,
+		struct file *file, pgoff_t index, unsigned long nr_to_read)
+{
+	DEFINE_READAHEAD(ractl, file, mapping, index);
+	force_page_cache_ra(&ractl, nr_to_read);
+}
 
 /*
  * Submit IO for the read-ahead request in file_ra_state.
diff --git a/mm/readahead.c b/mm/readahead.c
index 73110c4148f8..3115ced5faae 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -271,13 +271,13 @@ void do_page_cache_ra(struct readahead_control *ractl,
  * Chunk the readahead into 2 megabyte units, so that we don't pin too much
  * memory at once.
  */
-void force_page_cache_readahead(struct address_space *mapping,
-		struct file *file, pgoff_t index, unsigned long nr_to_read)
+void force_page_cache_ra(struct readahead_control *ractl,
+		unsigned long nr_to_read)
 {
-	DEFINE_READAHEAD(ractl, file, mapping, index);
+	struct address_space *mapping = ractl->mapping;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &file->f_ra;
-	unsigned long max_pages;
+	struct file_ra_state *ra = &ractl->file->f_ra;
+	unsigned long max_pages, index;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
 			!mapping->a_ops->readahead))
@@ -287,14 +287,16 @@ void force_page_cache_readahead(struct address_space *mapping,
 	 * If the request exceeds the readahead window, allow the read to
 	 * be up to the optimal hardware IO size
 	 */
+	index = readahead_index(ractl);
 	max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
-	nr_to_read = min(nr_to_read, max_pages);
+	nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
 	while (nr_to_read) {
 		unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;
 
 		if (this_chunk > nr_to_read)
 			this_chunk = nr_to_read;
-		do_page_cache_ra(&ractl, this_chunk, 0);
+		ractl->_index = index;
+		do_page_cache_ra(ractl, this_chunk, 0);
 
 		index += this_chunk;
 		nr_to_read -= this_chunk;
@@ -576,7 +578,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
 
 	/* be dumb */
 	if (filp && (filp->f_mode & FMODE_RANDOM)) {
-		force_page_cache_readahead(mapping, filp, index, req_count);
+		force_page_cache_ra(&ractl, req_count);
 		return;
 	}
 
-- 
2.28.0



* [PATCH 7/9] mm/readahead: Add page_cache_sync_ra and page_cache_async_ra
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
                   ` (5 preceding siblings ...)
  2020-09-03 14:08 ` [PATCH 6/9] mm/readahead: Pass readahead_control to force_page_cache_ra Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 8/9] mm/filemap: Fold ra_submit into do_sync_mmap_readahead Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 9/9] mm/readahead: Pass a file_ra_state into force_page_cache_ra Matthew Wilcox (Oracle)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox (Oracle),
	David Howells, linux-mm, linux-fsdevel, Eric Biggers

Reimplement page_cache_sync_readahead() and page_cache_async_readahead()
as wrappers around versions of the function which take a readahead_control
in preparation for making do_sync_mmap_readahead() pass down an RAC
struct.
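
As with force_page_cache_readahead(), the old entry points become
static inline wrappers in pagemap.h, for example (from the hunk below):

	static inline
	void page_cache_sync_readahead(struct address_space *mapping,
			struct file_ra_state *ra, struct file *file, pgoff_t index,
			unsigned long req_count)
	{
		DEFINE_READAHEAD(ractl, file, mapping, index);
		page_cache_sync_ra(&ractl, ra, req_count);
	}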

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 64 ++++++++++++++++++++++++++++++++++-------
 mm/readahead.c          | 58 ++++++++-----------------------------
 2 files changed, 66 insertions(+), 56 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 2b613c369a2f..12ab56c3a86f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -698,16 +698,6 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
 void delete_from_page_cache_batch(struct address_space *mapping,
 				  struct pagevec *pvec);
 
-#define VM_READAHEAD_PAGES	(SZ_128K / PAGE_SIZE)
-
-void page_cache_sync_readahead(struct address_space *, struct file_ra_state *,
-		struct file *, pgoff_t index, unsigned long req_count);
-void page_cache_async_readahead(struct address_space *, struct file_ra_state *,
-		struct file *, struct page *, pgoff_t index,
-		unsigned long req_count);
-void page_cache_ra_unbounded(struct readahead_control *,
-		unsigned long nr_to_read, unsigned long lookahead_count);
-
 /*
  * Like add_to_page_cache_locked, but used to add newly allocated pages:
  * the page is new, so we can just run __SetPageLocked() against it.
@@ -755,6 +745,60 @@ struct readahead_control {
 		._index = i,						\
 	}
 
+#define VM_READAHEAD_PAGES	(SZ_128K / PAGE_SIZE)
+
+void page_cache_ra_unbounded(struct readahead_control *,
+		unsigned long nr_to_read, unsigned long lookahead_count);
+void page_cache_sync_ra(struct readahead_control *, struct file_ra_state *,
+		unsigned long req_count);
+void page_cache_async_ra(struct readahead_control *, struct file_ra_state *,
+		struct page *, unsigned long req_count);
+
+/**
+ * page_cache_sync_readahead - generic file readahead
+ * @mapping: address_space which holds the pagecache and I/O vectors
+ * @ra: file_ra_state which holds the readahead state
+ * @file: Used by the filesystem for authentication.
+ * @index: Index of first page to be read.
+ * @req_count: Total number of pages being read by the caller.
+ *
+ * page_cache_sync_readahead() should be called when a cache miss happened:
+ * it will submit the read.  The readahead logic may decide to piggyback more
+ * pages onto the read request if access patterns suggest it will improve
+ * performance.
+ */
+static inline
+void page_cache_sync_readahead(struct address_space *mapping,
+		struct file_ra_state *ra, struct file *file, pgoff_t index,
+		unsigned long req_count)
+{
+	DEFINE_READAHEAD(ractl, file, mapping, index);
+	page_cache_sync_ra(&ractl, ra, req_count);
+}
+
+/**
+ * page_cache_async_readahead - file readahead for marked pages
+ * @mapping: address_space which holds the pagecache and I/O vectors
+ * @ra: file_ra_state which holds the readahead state
+ * @file: Used by the filesystem for authentication.
+ * @page: The page at @index which triggered the readahead call.
+ * @index: Index of first page to be read.
+ * @req_count: Total number of pages being read by the caller.
+ *
+ * page_cache_async_readahead() should be called when a page is used which
+ * is marked as PageReadahead; this is a marker to suggest that the application
+ * has used up enough of the readahead window that we should start pulling in
+ * more pages.
+ */
+static inline
+void page_cache_async_readahead(struct address_space *mapping,
+		struct file_ra_state *ra, struct file *file,
+		struct page *page, pgoff_t index, unsigned long req_count)
+{
+	DEFINE_READAHEAD(ractl, file, mapping, index);
+	page_cache_async_ra(&ractl, ra, page, req_count);
+}
+
 /**
  * readahead_page - Get the next page to read.
  * @rac: The current readahead request.
diff --git a/mm/readahead.c b/mm/readahead.c
index 3115ced5faae..620ac83f35cc 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -550,25 +550,9 @@ static void ondemand_readahead(struct readahead_control *ractl,
 	do_page_cache_ra(ractl, ra->size, ra->async_size);
 }
 
-/**
- * page_cache_sync_readahead - generic file readahead
- * @mapping: address_space which holds the pagecache and I/O vectors
- * @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
- * @index: Index of first page to be read.
- * @req_count: Total number of pages being read by the caller.
- *
- * page_cache_sync_readahead() should be called when a cache miss happened:
- * it will submit the read.  The readahead logic may decide to piggyback more
- * pages onto the read request if access patterns suggest it will improve
- * performance.
- */
-void page_cache_sync_readahead(struct address_space *mapping,
-			       struct file_ra_state *ra, struct file *filp,
-			       pgoff_t index, unsigned long req_count)
+void page_cache_sync_ra(struct readahead_control *ractl,
+		struct file_ra_state *ra, unsigned long req_count)
 {
-	DEFINE_READAHEAD(ractl, filp, mapping, index);
-
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -577,38 +561,20 @@ void page_cache_sync_readahead(struct address_space *mapping,
 		return;
 
 	/* be dumb */
-	if (filp && (filp->f_mode & FMODE_RANDOM)) {
-		force_page_cache_ra(&ractl, req_count);
+	if (ractl->file && (ractl->file->f_mode & FMODE_RANDOM)) {
+		force_page_cache_ra(ractl, req_count);
 		return;
 	}
 
 	/* do read-ahead */
-	ondemand_readahead(&ractl, ra, false, req_count);
+	ondemand_readahead(ractl, ra, false, req_count);
 }
-EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
+EXPORT_SYMBOL_GPL(page_cache_sync_ra);
 
-/**
- * page_cache_async_readahead - file readahead for marked pages
- * @mapping: address_space which holds the pagecache and I/O vectors
- * @ra: file_ra_state which holds the readahead state
- * @filp: passed on to ->readpage() and ->readpages()
- * @page: The page at @index which triggered the readahead call.
- * @index: Index of first page to be read.
- * @req_count: Total number of pages being read by the caller.
- *
- * page_cache_async_readahead() should be called when a page is used which
- * is marked as PageReadahead; this is a marker to suggest that the application
- * has used up enough of the readahead window that we should start pulling in
- * more pages.
- */
-void
-page_cache_async_readahead(struct address_space *mapping,
-			   struct file_ra_state *ra, struct file *filp,
-			   struct page *page, pgoff_t index,
-			   unsigned long req_count)
+void page_cache_async_ra(struct readahead_control *ractl,
+		struct file_ra_state *ra, struct page *page,
+		unsigned long req_count)
 {
-	DEFINE_READAHEAD(ractl, filp, mapping, index);
-
 	/* no read-ahead */
 	if (!ra->ra_pages)
 		return;
@@ -624,16 +590,16 @@ page_cache_async_readahead(struct address_space *mapping,
 	/*
 	 * Defer asynchronous read-ahead on IO congestion.
 	 */
-	if (inode_read_congested(mapping->host))
+	if (inode_read_congested(ractl->mapping->host))
 		return;
 
 	if (blk_cgroup_congested())
 		return;
 
 	/* do read-ahead */
-	ondemand_readahead(&ractl, ra, true, req_count);
+	ondemand_readahead(ractl, ra, true, req_count);
 }
-EXPORT_SYMBOL_GPL(page_cache_async_readahead);
+EXPORT_SYMBOL_GPL(page_cache_async_ra);
 
 ssize_t ksys_readahead(int fd, loff_t offset, size_t count)
 {
-- 
2.28.0



* [PATCH 8/9] mm/filemap: Fold ra_submit into do_sync_mmap_readahead
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
                   ` (6 preceding siblings ...)
  2020-09-03 14:08 ` [PATCH 7/9] mm/readahead: Add page_cache_sync_ra and page_cache_async_ra Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  2020-09-03 14:08 ` [PATCH 9/9] mm/readahead: Pass a file_ra_state into force_page_cache_ra Matthew Wilcox (Oracle)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Howells, linux-mm, linux-fsdevel, Eric Biggers, Matthew Wilcox

From: David Howells <dhowells@redhat.com>

Fold ra_submit() into its last remaining user and pass the
readahead_control struct to both do_page_cache_ra() and
page_cache_sync_ra().
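
After the fold, the mmap read-around path in do_sync_mmap_readahead()
drives the readahead_control directly (as in the mm/filemap.c hunk
below):

	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
	ra->size = ra->ra_pages;
	ra->async_size = ra->ra_pages / 4;
	ractl._index = ra->start;
	do_page_cache_ra(&ractl, ra->size, ra->async_size);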

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/filemap.c  | 10 +++++-----
 mm/internal.h | 10 ----------
 2 files changed, 5 insertions(+), 15 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1aaea26556cc..1ad49c33439a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2466,8 +2466,8 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	struct file *file = vmf->vma->vm_file;
 	struct file_ra_state *ra = &file->f_ra;
 	struct address_space *mapping = file->f_mapping;
+	DEFINE_READAHEAD(ractl, file, mapping, vmf->pgoff);
 	struct file *fpin = NULL;
-	pgoff_t offset = vmf->pgoff;
 	unsigned int mmap_miss;
 
 	/* If we don't want any read-ahead, don't bother */
@@ -2478,8 +2478,7 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 
 	if (vmf->vma->vm_flags & VM_SEQ_READ) {
 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-		page_cache_sync_readahead(mapping, ra, file, offset,
-					  ra->ra_pages);
+		page_cache_sync_ra(&ractl, ra, ra->ra_pages);
 		return fpin;
 	}
 
@@ -2499,10 +2498,11 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
 	 * mmap read-around
 	 */
 	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
-	ra->start = max_t(long, 0, offset - ra->ra_pages / 2);
+	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
 	ra->size = ra->ra_pages;
 	ra->async_size = ra->ra_pages / 4;
-	ra_submit(ra, mapping, file);
+	ractl._index = ra->start;
+	do_page_cache_ra(&ractl, ra->size, ra->async_size);
 	return fpin;
 }
 
diff --git a/mm/internal.h b/mm/internal.h
index 5533e85bd123..0a2e5caea2aa 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -59,16 +59,6 @@ static inline void force_page_cache_readahead(struct address_space *mapping,
 	force_page_cache_ra(&ractl, nr_to_read);
 }
 
-/*
- * Submit IO for the read-ahead request in file_ra_state.
- */
-static inline void ra_submit(struct file_ra_state *ra,
-		struct address_space *mapping, struct file *file)
-{
-	DEFINE_READAHEAD(ractl, file, mapping, ra->start);
-	do_page_cache_ra(&ractl, ra->size, ra->async_size);
-}
-
 /**
  * page_evictable - test whether a page is evictable
  * @page: the page to test
-- 
2.28.0



* [PATCH 9/9] mm/readahead: Pass a file_ra_state into force_page_cache_ra
  2020-09-03 14:08 [PATCH 0/9] Readahead patches for 5.9/5.10 Matthew Wilcox (Oracle)
                   ` (7 preceding siblings ...)
  2020-09-03 14:08 ` [PATCH 8/9] mm/filemap: Fold ra_submit into do_sync_mmap_readahead Matthew Wilcox (Oracle)
@ 2020-09-03 14:08 ` Matthew Wilcox (Oracle)
  8 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox (Oracle) @ 2020-09-03 14:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Howells, linux-mm, linux-fsdevel, Eric Biggers, Matthew Wilcox

From: David Howells <dhowells@redhat.com>

The file_ra_state being passed into page_cache_sync_readahead() was being
ignored in favour of using the one embedded in the struct file.  The only
caller for which this makes a difference is the fsverity code if the file
has been marked as POSIX_FADV_RANDOM, but it's confusing and worth fixing.
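
force_page_cache_ra() now takes the file_ra_state explicitly rather
than digging it out of ractl->file:

	void force_page_cache_ra(struct readahead_control *ractl,
			struct file_ra_state *ra, unsigned long nr_to_read);

and page_cache_sync_ra() forwards the @ra its caller supplied, so a
caller-provided file_ra_state is honoured.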

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/internal.h  | 5 +++--
 mm/readahead.c | 5 ++---
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 0a2e5caea2aa..ab4beb7c5cd2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -51,12 +51,13 @@ void unmap_page_range(struct mmu_gather *tlb,
 
 void do_page_cache_ra(struct readahead_control *, unsigned long nr_to_read,
 		unsigned long lookahead_size);
-void force_page_cache_ra(struct readahead_control *, unsigned long nr);
+void force_page_cache_ra(struct readahead_control *, struct file_ra_state *,
+		unsigned long nr);
 static inline void force_page_cache_readahead(struct address_space *mapping,
 		struct file *file, pgoff_t index, unsigned long nr_to_read)
 {
 	DEFINE_READAHEAD(ractl, file, mapping, index);
-	force_page_cache_ra(&ractl, nr_to_read);
+	force_page_cache_ra(&ractl, &file->f_ra, nr_to_read);
 }
 
 /**
diff --git a/mm/readahead.c b/mm/readahead.c
index 620ac83f35cc..c6ffb76827da 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -272,11 +272,10 @@ void do_page_cache_ra(struct readahead_control *ractl,
  * memory at once.
  */
 void force_page_cache_ra(struct readahead_control *ractl,
-		unsigned long nr_to_read)
+		struct file_ra_state *ra, unsigned long nr_to_read)
 {
 	struct address_space *mapping = ractl->mapping;
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
-	struct file_ra_state *ra = &ractl->file->f_ra;
 	unsigned long max_pages, index;
 
 	if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
@@ -562,7 +561,7 @@ void page_cache_sync_ra(struct readahead_control *ractl,
 
 	/* be dumb */
 	if (ractl->file && (ractl->file->f_mode & FMODE_RANDOM)) {
-		force_page_cache_ra(ractl, req_count);
+		force_page_cache_ra(ractl, ra, req_count);
 		return;
 	}
 
-- 
2.28.0



* Re: [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a readahead_control
  2020-09-03 14:08 ` [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a readahead_control Matthew Wilcox (Oracle)
@ 2020-09-03 19:22   ` Andrew Morton
  2020-09-03 19:33     ` Matthew Wilcox
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2020-09-03 19:22 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: David Howells, linux-mm, linux-fsdevel, Eric Biggers

On Thu,  3 Sep 2020 15:08:38 +0100 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:

> Define it in the callers instead of in page_cache_ra_unbounded().
> 

The changelogs for patches 2-9 are explaining what the patches do, but
not why they do it.  Presumably there's some grand scheme in mind, but
it isn't being revealed to the reader!


* Re: [PATCH 3/9] mm/readahead: Make page_cache_ra_unbounded take a readahead_control
  2020-09-03 19:22   ` Andrew Morton
@ 2020-09-03 19:33     ` Matthew Wilcox
  0 siblings, 0 replies; 12+ messages in thread
From: Matthew Wilcox @ 2020-09-03 19:33 UTC (permalink / raw)
  To: Andrew Morton; +Cc: David Howells, linux-mm, linux-fsdevel, Eric Biggers

On Thu, Sep 03, 2020 at 12:22:18PM -0700, Andrew Morton wrote:
> On Thu,  3 Sep 2020 15:08:38 +0100 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> 
> > Define it in the callers instead of in page_cache_ra_unbounded().
> > 
> 
> The changelogs for patches 2-9 are explaining what the patches do, but
> not why they do it.  Presumably there's some grand scheme in mind, but
> it isn't being revealed to the reader!

Sorry!  For both pieces of infrastructure being built on top of this
patchset, we want the ractl to be available higher in the call-stack.

For David's work, he wants to add the 'critical page' to the ractl so that
he knows which page NEEDS to be brought in from storage, and which ones
are nice-to-have.  We might want something similar in block storage too.
It used to be simple -- the first page was the critical one, but then
mmap added fault-around, and for that use case the middle page is
the critical one.  Anyway, I don't have any code to show for that yet;
we just know that the lowest point in the callchain where we have that
information is do_sync_mmap_readahead() and so the ractl needs to start
its life there.

For THP, I can show you the code that needs it.  It's actually the
apex patch to the series; the one which finally starts to allocate
THPs and present them to consenting filesystems:
http://git.infradead.org/users/willy/pagecache.git/commitdiff/798bcf30ab2eff278caad03a9edca74d2f8ae760

This doesn't need the ractl to be available as high in the stack as
David's work does, which is why he did the last few patches.


