+ mm-readahead-pass-readahead_control-to-force_page_cache_ra.patch added to -mm tree
From: akpm @ 2020-09-03 19:24 UTC
To: mm-commits, willy, ebiggers, dhowells
The patch titled
Subject: mm/readahead: pass readahead_control to force_page_cache_ra
has been added to the -mm tree. Its filename is
mm-readahead-pass-readahead_control-to-force_page_cache_ra.patch
This patch should soon appear at
https://ozlabs.org/~akpm/mmots/broken-out/mm-readahead-pass-readahead_control-to-force_page_cache_ra.patch
and later at
https://ozlabs.org/~akpm/mmotm/broken-out/mm-readahead-pass-readahead_control-to-force_page_cache_ra.patch
Before you just go and hit "reply", please:
a) Consider who else should be cc'ed
b) Prefer to cc a suitable mailing list as well
c) Ideally: find the original patch on the mailing list and do a
reply-to-all to that, adding suitable additional cc's
*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***
The -mm tree is included in linux-next and is updated
there every 3-4 working days
------------------------------------------------------
From: David Howells <dhowells@redhat.com>
Subject: mm/readahead: pass readahead_control to force_page_cache_ra
Reimplement force_page_cache_readahead() as a wrapper around
force_page_cache_ra(). Pass the existing readahead_control from
page_cache_sync_readahead().
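[Editorial note, not part of the patch: as an illustration of the new
calling convention, here is a minimal sketch. "my_read_path" is a
hypothetical caller; DEFINE_READAHEAD() and force_page_cache_ra() match
the declarations in the diff below.]

	/*
	 * Sketch only: callers now build a readahead_control on the stack
	 * and pass that, instead of passing mapping/file/index as loose
	 * arguments.
	 */
	static void my_read_path(struct address_space *mapping,
			struct file *file, pgoff_t index,
			unsigned long nr_to_read)
	{
		/*
		 * Old interface:
		 * force_page_cache_readahead(mapping, file, index, nr_to_read);
		 */
		DEFINE_READAHEAD(ractl, file, mapping, index);
		force_page_cache_ra(&ractl, nr_to_read);
	}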
Link: https://lkml.kernel.org/r/20200903140844.14194-7-willy@infradead.org
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/internal.h | 13 +++++++++----
mm/readahead.c | 18 ++++++++++--------
2 files changed, 19 insertions(+), 12 deletions(-)
--- a/mm/internal.h~mm-readahead-pass-readahead_control-to-force_page_cache_ra
+++ a/mm/internal.h
@@ -49,10 +49,15 @@ void unmap_page_range(struct mmu_gather
unsigned long addr, unsigned long end,
struct zap_details *details);
-void force_page_cache_readahead(struct address_space *, struct file *,
- pgoff_t index, unsigned long nr_to_read);
-void do_page_cache_ra(struct readahead_control *,
- unsigned long nr_to_read, unsigned long lookahead_size);
+void do_page_cache_ra(struct readahead_control *, unsigned long nr_to_read,
+ unsigned long lookahead_size);
+void force_page_cache_ra(struct readahead_control *, unsigned long nr);
+static inline void force_page_cache_readahead(struct address_space *mapping,
+ struct file *file, pgoff_t index, unsigned long nr_to_read)
+{
+ DEFINE_READAHEAD(ractl, file, mapping, index);
+ force_page_cache_ra(&ractl, nr_to_read);
+}
/*
* Submit IO for the read-ahead request in file_ra_state.
--- a/mm/readahead.c~mm-readahead-pass-readahead_control-to-force_page_cache_ra
+++ a/mm/readahead.c
@@ -271,13 +271,13 @@ void do_page_cache_ra(struct readahead_c
* Chunk the readahead into 2 megabyte units, so that we don't pin too much
* memory at once.
*/
-void force_page_cache_readahead(struct address_space *mapping,
- struct file *file, pgoff_t index, unsigned long nr_to_read)
+void force_page_cache_ra(struct readahead_control *ractl,
+ unsigned long nr_to_read)
{
- DEFINE_READAHEAD(ractl, file, mapping, index);
+ struct address_space *mapping = ractl->mapping;
struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
- struct file_ra_state *ra = &file->f_ra;
- unsigned long max_pages;
+ struct file_ra_state *ra = &ractl->file->f_ra;
+ unsigned long max_pages, index;
if (unlikely(!mapping->a_ops->readpage && !mapping->a_ops->readpages &&
!mapping->a_ops->readahead))
@@ -287,14 +287,16 @@ void force_page_cache_readahead(struct a
* If the request exceeds the readahead window, allow the read to
* be up to the optimal hardware IO size
*/
+ index = readahead_index(ractl);
max_pages = max_t(unsigned long, bdi->io_pages, ra->ra_pages);
- nr_to_read = min(nr_to_read, max_pages);
+ nr_to_read = min_t(unsigned long, nr_to_read, max_pages);
while (nr_to_read) {
unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;
if (this_chunk > nr_to_read)
this_chunk = nr_to_read;
- do_page_cache_ra(&ractl, this_chunk, 0);
+ ractl->_index = index;
+ do_page_cache_ra(ractl, this_chunk, 0);
index += this_chunk;
nr_to_read -= this_chunk;
@@ -576,7 +578,7 @@ void page_cache_sync_readahead(struct ad
/* be dumb */
if (filp && (filp->f_mode & FMODE_RANDOM)) {
- force_page_cache_readahead(mapping, filp, index, req_count);
+ force_page_cache_ra(&ractl, req_count);
return;
}
_
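[Editorial note: for context on the 2 megabyte chunking in
force_page_cache_ra() above: with a 4KiB PAGE_SIZE, each
do_page_cache_ra() call covers at most (2 * 1024 * 1024) / 4096 = 512
pages. A standalone userspace sketch of the same loop arithmetic; the
request size and start index are made-up values.]

	#include <stdio.h>

	#define PAGE_SIZE 4096UL	/* assumed; 4KiB on most architectures */

	int main(void)
	{
		unsigned long nr_to_read = 1300;	/* hypothetical request, in pages */
		unsigned long index = 0;		/* hypothetical start index */

		while (nr_to_read) {
			/* same expression as in the patch: 2MiB worth of pages */
			unsigned long this_chunk = (2 * 1024 * 1024) / PAGE_SIZE;

			if (this_chunk > nr_to_read)
				this_chunk = nr_to_read;
			printf("chunk: index=%lu pages=%lu\n", index, this_chunk);
			index += this_chunk;
			nr_to_read -= this_chunk;
		}
		return 0;
	}

[This prints three chunks (512, 512 and 276 pages), showing how a large
request is split so the memory pinned per do_page_cache_ra() call stays
bounded.]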
Patches currently in -mm which might be from dhowells@redhat.com are
fix-khugepageds-request-size-in-collapse_file.patch
mm-readahead-make-ondemand_readahead-take-a-readahead_control.patch
mm-readahead-pass-readahead_control-to-force_page_cache_ra.patch
mm-filemap-fold-ra_submit-into-do_sync_mmap_readahead.patch
mm-readahead-pass-a-file_ra_state-into-force_page_cache_ra.patch
mutex-subsystem-synchro-test-module.patch