linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0
@ 2020-09-04 14:48 Bean Huo
  2020-09-04 18:09 ` Andrew Morton
  2020-09-07  7:16 ` Christoph Hellwig
  0 siblings, 2 replies; 7+ messages in thread
From: Bean Huo @ 2020-09-04 14:48 UTC (permalink / raw)
  To: akpm, linux-mm, linux-kernel; +Cc: beanhuo

From: Bean Huo <beanhuo@micron.com>

Currently, generic_file_buffered_read() splits larger batches of pages up
and reads the data one page at a time when ra->ra_pages == 0. This patch
allows it to pass the whole batch of pages down to the device, provided
the supported maximum I/O size is >= the requested size.
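
For reference, a rough sketch of the behaviour being changed. This is an
abbreviated, hedged reading of mm/readahead.c and mm/filemap.c from this
era, not a verbatim copy:

/*
 * Sketch: with ra->ra_pages == 0, the sync readahead entry point bails
 * out before submitting any I/O at all.
 */
void page_cache_sync_readahead(struct address_space *mapping,
			       struct file_ra_state *ra, struct file *filp,
			       pgoff_t index, unsigned long req_count)
{
	/* no read-ahead configured: nothing is submitted here */
	if (!ra->ra_pages)
		return;

	/* ... otherwise the request is handed to ondemand_readahead() ... */
}

/*
 * generic_file_buffered_read() then fails to find the page in the page
 * cache, falls through to its no_cached_page: label, allocates a single
 * page and reads it via mapping->a_ops->readpage() -- so a large request
 * ends up being issued to the device one PAGE_SIZE chunk at a time.
 */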

Signed-off-by: Bean Huo <beanhuo@micron.com>
---
 mm/filemap.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1aaea26556cc..0deec1897817 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2062,6 +2062,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 	struct file *filp = iocb->ki_filp;
 	struct address_space *mapping = filp->f_mapping;
 	struct inode *inode = mapping->host;
+	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
 	struct file_ra_state *ra = &filp->f_ra;
 	loff_t *ppos = &iocb->ki_pos;
 	pgoff_t index;
@@ -2098,9 +2099,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 		if (!page) {
 			if (iocb->ki_flags & IOCB_NOIO)
 				goto would_block;
-			page_cache_sync_readahead(mapping,
-					ra, filp,
-					index, last_index - index);
+
+			if (!ra->ra_pages && bdi->io_pages >= last_index - index)
+				__do_page_cache_readahead(mapping, filp, index,
+							  last_index - index, 0);
+			else
+				page_cache_sync_readahead(mapping, ra, filp,
+							  index,
+							  last_index - index);
 			page = find_get_page(mapping, index);
 			if (unlikely(page == NULL))
 				goto no_cached_page;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0
  2020-09-04 14:48 [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0 Bean Huo
@ 2020-09-04 18:09 ` Andrew Morton
  2020-09-11  8:15   ` Bean Huo
  2020-09-07  7:16 ` Christoph Hellwig
  1 sibling, 1 reply; 7+ messages in thread
From: Andrew Morton @ 2020-09-04 18:09 UTC (permalink / raw)
  To: Bean Huo; +Cc: linux-mm, linux-kernel, beanhuo

On Fri,  4 Sep 2020 16:48:07 +0200 Bean Huo <huobean@gmail.com> wrote:

> From: Bean Huo <beanhuo@micron.com>
> 
> Currently, generic_file_buffered_read() splits larger batches of pages up
> and reads the data one page at a time when ra->ra_pages == 0. This patch
> allows it to pass the whole batch of pages down to the device, provided
> the supported maximum I/O size is >= the requested size.
> 
> ...
>
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2062,6 +2062,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  	struct file *filp = iocb->ki_filp;
>  	struct address_space *mapping = filp->f_mapping;
>  	struct inode *inode = mapping->host;
> +	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
>  	struct file_ra_state *ra = &filp->f_ra;
>  	loff_t *ppos = &iocb->ki_pos;
>  	pgoff_t index;
> @@ -2098,9 +2099,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  		if (!page) {
>  			if (iocb->ki_flags & IOCB_NOIO)
>  				goto would_block;
> -			page_cache_sync_readahead(mapping,
> -					ra, filp,
> -					index, last_index - index);
> +
> +			if (!ra->ra_pages && bdi->io_pages >= last_index - index)
> +				__do_page_cache_readahead(mapping, filp, index,
> +							  last_index - index, 0);
> +			else
> +				page_cache_sync_readahead(mapping, ra, filp,
> +							  index,
> +							  last_index - index);
>  			page = find_get_page(mapping, index);
>  			if (unlikely(page == NULL))
>  				goto no_cached_page;

I assume this is a performance patch.  What are the observed changes in
behaviour?

What is special about ->ra_pages==0?  Wouldn't this optimization still
be valid if ->ra_pages==2?

Doesn't this defeat the purpose of having ->ra_pages==0?

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0
  2020-09-04 14:48 [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0 Bean Huo
  2020-09-04 18:09 ` Andrew Morton
@ 2020-09-07  7:16 ` Christoph Hellwig
  1 sibling, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2020-09-07  7:16 UTC (permalink / raw)
  To: Bean Huo
  Cc: akpm, linux-mm, linux-kernel, beanhuo, Richard Weinberger, linux-mtd

On Fri, Sep 04, 2020 at 04:48:07PM +0200, Bean Huo wrote:
> From: Bean Huo <beanhuo@micron.com>
> 
> Currently, generic_file_buffered_read() splits larger batches of pages up
> and reads the data one page at a time when ra->ra_pages == 0. This patch
> allows it to pass the whole batch of pages down to the device, provided
> the supported maximum I/O size is >= the requested size.

At least ubifs and mtd seem to force ra_pages = 0 to disable read-ahead
entirely, so this seems intentional.
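
For illustration only (this is not the verbatim ubifs/mtd code), a
filesystem or driver opts out of read-ahead by zeroing ra_pages on its
backing_dev_info, e.g.:

/*
 * Illustrative sketch: disabling read-ahead at mount time means every
 * subsequent lookup sees ra->ra_pages == 0 and takes the single-page path.
 */
static void example_disable_readahead(struct super_block *sb)
{
	sb->s_bdi->ra_pages = 0;
}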

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0
  2020-09-04 18:09 ` Andrew Morton
@ 2020-09-11  8:15   ` Bean Huo
  2020-09-11  9:47     ` Christoph Hellwig
  0 siblings, 1 reply; 7+ messages in thread
From: Bean Huo @ 2020-09-11  8:15 UTC (permalink / raw)
  To: Andrew Morton, Christoph Hellwig; +Cc: linux-mm, linux-kernel, beanhuo




On Fri, 2020-09-04 at 11:09 -0700, Andrew Morton wrote:
> On Fri,  4 Sep 2020 16:48:07 +0200 Bean Huo <huobean@gmail.com>
> wrote:
> 
> > From: Bean Huo <beanhuo@micron.com>
> > 
> > Currently, generic_file_buffered_read() splits larger batches of pages
> > up and reads the data one page at a time when ra->ra_pages == 0. This
> > patch allows it to pass the whole batch of pages down to the device,
> > provided the supported maximum I/O size is >= the requested size.
> > 
> > ...
> > 
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -2062,6 +2062,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
> >  	struct file *filp = iocb->ki_filp;
> >  	struct address_space *mapping = filp->f_mapping;
> >  	struct inode *inode = mapping->host;
> > +	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
> >  	struct file_ra_state *ra = &filp->f_ra;
> >  	loff_t *ppos = &iocb->ki_pos;
> >  	pgoff_t index;
> > @@ -2098,9 +2099,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
> >  		if (!page) {
> >  			if (iocb->ki_flags & IOCB_NOIO)
> >  				goto would_block;
> > -			page_cache_sync_readahead(mapping,
> > -					ra, filp,
> > -					index, last_index - index);
> > +
> > +			if (!ra->ra_pages && bdi->io_pages >= last_index - index)
> > +				__do_page_cache_readahead(mapping, filp, index,
> > +							  last_index - index, 0);
> > +			else
> > +				page_cache_sync_readahead(mapping, ra, filp,
> > +							  index,
> > +							  last_index - index);
> >  			page = find_get_page(mapping, index);
> >  			if (unlikely(page == NULL))
> >  				goto no_cached_page;
> 
> I assume this is a performance patch.  What are the observed changes
> in behaviour?
> 
> What is special about ->ra_pages==0?  Wouldn't this optimization still
> be valid if ->ra_pages==2?
> 
> Doesn't this defeat the purpose of having ->ra_pages==0?


Hi Andrew
Sorry, I still don't quite understand your three questions above.

As far as I understand, ra_pages is tied to read_ahead_kb: ra_pages
controls the maximum read-ahead window size, but it doesn't take effect
when the requested size exceeds ra_pages (see the sketch below).
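
A tiny user-space sketch of that relation, assuming 4 KiB pages (if I
read mm/backing-dev.c correctly, the sysfs handler converts with
read_ahead_kb >> (PAGE_SHIFT - 10)):

#include <stdio.h>

int main(void)
{
	unsigned long kbs[] = { 0, 128, 512 };	/* values of read_ahead_kb */
	unsigned int page_shift = 12;		/* assumed 4 KiB pages */

	for (int i = 0; i < 3; i++) {
		unsigned long ra_pages = kbs[i] >> (page_shift - 10);

		printf("read_ahead_kb=%lu -> ra_pages=%lu\n", kbs[i], ra_pages);
	}
	/* read_ahead_kb=0 gives ra_pages=0, i.e. read-ahead disabled */
	return 0;
}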

If I set read_ahead_kb to 0, or, as Christoph mentioned, MTD forcibly
sets ra_pages to 0, I think the intention is only to disable read-ahead;
it is not to make generic_file_buffered_read() split the request and
read the data in separate 4KB chunks.

Thanks,
Bean


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0
  2020-09-11  8:15   ` Bean Huo
@ 2020-09-11  9:47     ` Christoph Hellwig
  2020-09-11 11:35       ` Bean Huo
  2020-09-11 11:36       ` Bean Huo
  0 siblings, 2 replies; 7+ messages in thread
From: Christoph Hellwig @ 2020-09-11  9:47 UTC (permalink / raw)
  To: Bean Huo
  Cc: Andrew Morton, Christoph Hellwig, linux-mm, linux-kernel,
	beanhuo, Richard Weinberger

On Fri, Sep 11, 2020 at 10:15:24AM +0200, Bean Huo wrote:
> > What is special about ->ra_pages==0?  Wouldn't this optimization still
> > be valid if ->ra_pages==2?
> > 
> > Doesn't this defeat the purpose of having ->ra_pages==0?
> 
> 
> Hi Andrew
> Sorry, I still don't quite understand your three questions above.
> 
> As far as I understand, ra_pages is tied to read_ahead_kb: ra_pages
> controls the maximum read-ahead window size, but it doesn't take effect
> when the requested size exceeds ra_pages.
> 
> If I set read_ahead_kb to 0, or, as Christoph mentioned, MTD forcibly
> sets ra_pages to 0, I think the intention is only to disable read-ahead;
> it is not to make generic_file_buffered_read() split the request and
> read the data in separate 4KB chunks.

The way I understood Richard, this is intentional.

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0
  2020-09-11  9:47     ` Christoph Hellwig
@ 2020-09-11 11:35       ` Bean Huo
  2020-09-11 11:36       ` Bean Huo
  1 sibling, 0 replies; 7+ messages in thread
From: Bean Huo @ 2020-09-11 11:35 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, linux-mm, linux-kernel, beanhuo, Richard Weinberger

On Fri, 2020-09-11 at 10:47 +0100, Christoph Hellwig wrote:
> > Hi Andrew
> > Sorry, I still don't quite understand your three questions above.
> > 
> > As far as I understand, ra_pages is tied to read_ahead_kb: ra_pages
> > controls the maximum read-ahead window size, but it doesn't take effect
> > when the requested size exceeds ra_pages.
> > 
> > If I set read_ahead_kb to 0, or, as Christoph mentioned, MTD forcibly
> > sets ra_pages to 0, I think the intention is only to disable
> > read-ahead; it is not to make generic_file_buffered_read() split the
> > request and read the data in separate 4KB chunks.
> 
> The way I understood Richard, this is intentional.

Hi Christoph
Thanks, understood now: MTD expects this result. Even so, I think this
patch doesn't impact MTD, because the flash-based filesystems only
implement ->readpage, and inside __do_page_cache_readahead()
mapping->a_ops->readpage is still used to read the data (see the sketch
below).
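
Roughly what I mean, as an abbreviated sketch of read_pages() from
mm/readahead.c in this era (not verbatim):

/*
 * Sketch: if the filesystem implements ->readpages it gets the whole
 * batch at once; otherwise each page is read individually through
 * ->readpage, which is the case for the readpage-only flash filesystems.
 */
static void read_pages_sketch(struct address_space *mapping, struct file *filp,
			      struct list_head *pages, unsigned int nr_pages)
{
	if (mapping->a_ops->readpages) {
		mapping->a_ops->readpages(filp, mapping, pages, nr_pages);
		return;
	}

	while (nr_pages--) {
		struct page *page = lru_to_page(pages);

		list_del(&page->lru);
		if (!add_to_page_cache_lru(page, mapping, page->index,
					   GFP_KERNEL))
			mapping->a_ops->readpage(filp, page);
		put_page(page);
	}
}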

Thanks,
Bean


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0
  2020-09-11  9:47     ` Christoph Hellwig
  2020-09-11 11:35       ` Bean Huo
@ 2020-09-11 11:36       ` Bean Huo
  1 sibling, 0 replies; 7+ messages in thread
From: Bean Huo @ 2020-09-11 11:36 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, linux-mm, linux-kernel, beanhuo, Richard Weinberger

On Fri, 2020-09-11 at 10:47 +0100, Christoph Hellwig wrote:
> > > Doesn't this defeat the purpose of having ->ra_pages==0?
> > 
> > 
> > Hi Andrew
> > Sorry, I still don't quite understand your three questions above.
> > 
> > As far as I understand, ra_pages is tied to read_ahead_kb: ra_pages
> > controls the maximum read-ahead window size, but it doesn't take effect
> > when the requested size exceeds ra_pages.
> > 
> > If I set read_ahead_kb to 0, or, as Christoph mentioned, MTD forcibly
> > sets ra_pages to 0, I think the intention is only to disable
> > read-ahead; it is not to make generic_file_buffered_read() split the
> > request and read the data in separate 4KB chunks.
> 
> The way I understood Richard, this is intentional.

Hi Christoph
Thanks, understood now: MTD expects this result. Even so, I think this
patch doesn't impact MTD, because the flash-based filesystems only
implement ->readpage, and inside __do_page_cache_readahead()
mapping->a_ops->readpage is still used to read the data.

Thanks,
Bean


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2020-12-05 11:08 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-04 14:48 [PATCH RFC] mm: Let readahead submit larger batches of pages in case of ra->ra_pages == 0 Bean Huo
2020-09-04 18:09 ` Andrew Morton
2020-09-11  8:15   ` Bean Huo
2020-09-11  9:47     ` Christoph Hellwig
2020-09-11 11:35       ` Bean Huo
2020-09-11 11:36       ` Bean Huo
2020-09-07  7:16 ` Christoph Hellwig
