From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andrew Morton <akpm@linux-foundation.org>
Subject: [patch 023/128] mm: put readahead pages in cache earlier
Date: Mon, 01 Jun 2020 21:46:40 -0700
Message-ID: <20200602044640.caWDg75T3%akpm@linux-foundation.org>
References: <20200601214457.919c35648e96a2b46b573fe1@linux-foundation.org>
Reply-To: linux-kernel@vger.kernel.org
Return-path:
Received: from mail.kernel.org ([198.145.29.99]:37422 "EHLO mail.kernel.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1725793AbgFBEqm
	(ORCPT ); Tue, 2 Jun 2020 00:46:42 -0400
In-Reply-To: <20200601214457.919c35648e96a2b46b573fe1@linux-foundation.org>
Sender: mm-commits-owner@vger.kernel.org
List-Id: mm-commits@vger.kernel.org
To: akpm@linux-foundation.org, darrick.wong@oracle.com, dchinner@redhat.com,
	ebiggers@google.com, gaoxiang25@huawei.com, hch@lst.de,
	jaegeuk@kernel.org, jhubbard@nvidia.com, johannes.thumshirn@wdc.com,
	joseph.qi@linux.alibaba.com, junxiao.bi@oracle.com, linux-mm@kvack.org,
	mhocko@suse.com, mm-commits@vger.kernel.org, mszeredi@redhat.com,
	torvalds@linux-foundation.org, william.kucharski@oracle.com,
	willy@infradead.org, xiyou.wangcong@gmail.com, yuchao0@huawei.com,
	ziy@nvidia.com

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Subject: mm: put readahead pages in cache earlier

When populating the page cache for readahead, mappings that use
->readpages must populate the page cache themselves, as the pages are
passed on a linked list which would normally be used for the page cache's
LRU.  For mappings that use ->readpage or the upcoming ->readahead
method, we can put the pages into the page cache as soon as they're
allocated, which closes a race between readahead and direct IO.  It also
lets us remove the gfp argument from read_pages().

Use the new readahead_page() API to implement the repeated calls to
->readpage(), just like most filesystems will.
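A note on the new skip_page argument to read_pages(): when the scan in
__do_page_cache_readahead() stops at a page that is already present in
the cache (or at one that could not be added), the batch gathered so far
is submitted, and the readahead window must also step over the page that
ended the batch.  That is what the new exit path does:

	out:
		if (skip_page)
			rac->_index++;

This preserves the invariant, asserted in __do_page_cache_readahead(),
that index + i == rac._index + rac._nr_pages at the top of each loop
iteration.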
Link: http://lkml.kernel.org/r/20200414150233.24495-11-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Gao Xiang <gaoxiang25@huawei.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/readahead.c |   46 ++++++++++++++++++++++++++++------------------
 1 file changed, 28 insertions(+), 18 deletions(-)

--- a/mm/readahead.c~mm-put-readahead-pages-in-cache-earlier
+++ a/mm/readahead.c
@@ -114,14 +114,14 @@ int read_cache_pages(struct address_spac
 EXPORT_SYMBOL(read_cache_pages);
 
 static void read_pages(struct readahead_control *rac, struct list_head *pages,
-		gfp_t gfp)
+		bool skip_page)
 {
 	const struct address_space_operations *aops = rac->mapping->a_ops;
+	struct page *page;
 	struct blk_plug plug;
-	unsigned page_idx;
 
 	if (!readahead_count(rac))
-		return;
+		goto out;
 
 	blk_start_plug(&plug);
 
@@ -130,23 +130,23 @@ static void read_pages(struct readahead_
 				readahead_count(rac));
 		/* Clean up the remaining pages */
 		put_pages_list(pages);
-		goto out;
-	}
-
-	for (page_idx = 0; page_idx < readahead_count(rac); page_idx++) {
-		struct page *page = lru_to_page(pages);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, rac->mapping, page->index,
-				gfp))
+		rac->_index += rac->_nr_pages;
+		rac->_nr_pages = 0;
+	} else {
+		while ((page = readahead_page(rac))) {
 			aops->readpage(rac->file, page);
-		put_page(page);
+			put_page(page);
+		}
 	}
 
-out:
 	blk_finish_plug(&plug);
 
 	BUG_ON(!list_empty(pages));
-	rac->_nr_pages = 0;
+	BUG_ON(readahead_count(rac));
+
+out:
+	if (skip_page)
+		rac->_index++;
 }
 
 /*
@@ -168,6 +168,7 @@ void __do_page_cache_readahead(struct ad
 	struct readahead_control rac = {
 		.mapping = mapping,
 		.file = filp,
+		._index = index,
 	};
 	unsigned long i;
 
@@ -183,6 +184,8 @@ void __do_page_cache_readahead(struct ad
 		if (index + i > end_index)
 			break;
 
+		BUG_ON(index + i != rac._index + rac._nr_pages);
+
 		page = xa_load(&mapping->i_pages, index + i);
 		if (page && !xa_is_value(page)) {
 			/*
@@ -190,15 +193,22 @@ void __do_page_cache_readahead(struct ad
 			 * contiguous pages before continuing with the next
 			 * batch.
 			 */
-			read_pages(&rac, &page_pool, gfp_mask);
+			read_pages(&rac, &page_pool, true);
 			continue;
 		}
 
 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
 			break;
-		page->index = index + i;
-		list_add(&page->lru, &page_pool);
+		if (mapping->a_ops->readpages) {
+			page->index = index + i;
+			list_add(&page->lru, &page_pool);
+		} else if (add_to_page_cache_lru(page, mapping, index + i,
+					gfp_mask) < 0) {
+			put_page(page);
+			read_pages(&rac, &page_pool, true);
+			continue;
+		}
 		if (i == nr_to_read - lookahead_size)
 			SetPageReadahead(page);
 		rac._nr_pages++;
@@ -209,7 +219,7 @@ void __do_page_cache_readahead(struct ad
 	 * uptodate then the caller will launch readpage again, and
 	 * will then handle the error.
 	 */
-	read_pages(&rac, &page_pool, gfp_mask);
+	read_pages(&rac, &page_pool, false);
 }
 
 /*
_
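For filesystem authors converting to the upcoming ->readahead method, the
while (readahead_page(rac)) loop that read_pages() now uses is the
pattern to copy.  A minimal, hypothetical implementation might look like
the sketch below; example_readahead is a placeholder name and is not part
of this patch, and a real conversion would start its own I/O rather than
bounce through ->readpage:

	static void example_readahead(struct readahead_control *rac)
	{
		struct page *page;

		/*
		 * Each page readahead_page() returns is locked and already
		 * in the page cache.  Whoever starts the read must unlock
		 * the page on completion; reusing the filesystem's own
		 * ->readpage keeps this sketch simple.
		 */
		while ((page = readahead_page(rac))) {
			rac->mapping->a_ops->readpage(rac->file, page);
			put_page(page);
		}
	}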