Date: Mon, 23 Mar 2020 13:49:54 -0700
From: Eric Biggers 
To: Matthew Wilcox 
Cc: Andrew Morton , linux-xfs@vger.kernel.org,
	William Kucharski , John Hubbard ,
	linux-kernel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
	cluster-devel@redhat.com, linux-mm@kvack.org,
	ocfs2-devel@oss.oracle.com, linux-fsdevel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v10 12/25] mm: Move end_index check out of readahead loop
Message-ID: <20200323204954.GB61708@gmail.com>
References: <20200323202259.13363-1-willy@infradead.org>
 <20200323202259.13363-13-willy@infradead.org>
In-Reply-To: <20200323202259.13363-13-willy@infradead.org>

On Mon, Mar 23, 2020 at 01:22:46PM -0700, Matthew Wilcox wrote:
> From: "Matthew Wilcox (Oracle)" 
> 
> By reducing nr_to_read, we can eliminate this check from inside the loop.
> 
> Signed-off-by: Matthew Wilcox (Oracle) 
> Reviewed-by: John Hubbard 
> Reviewed-by: William Kucharski 
> ---
>  mm/readahead.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/readahead.c b/mm/readahead.c
> index d01531ef9f3c..998fdd23c0b1 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -167,8 +167,6 @@ void __do_page_cache_readahead(struct address_space *mapping,
>  		unsigned long lookahead_size)
>  {
>  	struct inode *inode = mapping->host;
> -	struct page *page;
> -	unsigned long end_index;	/* The last page we want to read */
>  	LIST_HEAD(page_pool);
>  	loff_t isize = i_size_read(inode);
>  	gfp_t gfp_mask = readahead_gfp_mask(mapping);
> @@ -178,22 +176,26 @@ void __do_page_cache_readahead(struct address_space *mapping,
>  		._index = index,
>  	};
>  	unsigned long i;
> +	pgoff_t end_index;	/* The last page we want to read */
> 
>  	if (isize == 0)
>  		return;
> 
> -	end_index = ((isize - 1) >> PAGE_SHIFT);
> +	end_index = (isize - 1) >> PAGE_SHIFT;
> +	if (index > end_index)
> +		return;
> +	/* Don't read past the page containing the last byte of the file */
> +	if (nr_to_read > end_index - index)
> +		nr_to_read = end_index - index + 1;
> 
>  	/*
>  	 * Preallocate as many pages as we will need.
>  	 */
>  	for (i = 0; i < nr_to_read; i++) {
> -		if (index + i > end_index)
> -			break;
> +		struct page *page = xa_load(&mapping->i_pages, index + i);
> 
>  		BUG_ON(index + i != rac._index + rac._nr_pages);
> 
> -		page = xa_load(&mapping->i_pages, index + i);
>  		if (page && !xa_is_value(page)) {
>  			/*
>  			 * Page already present? Kick off the current batch of
> -- 

Reviewed-by: Eric Biggers

- Eric
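For readers who want to see the effect of the change in isolation, below is a
minimal user-space C sketch of the technique the commit message describes:
clamp nr_to_read to the end of the file once, up front, so the loop body
needs no per-iteration "index + i > end_index" check. The names here
(readahead_sketch, SKETCH_PAGE_SHIFT, the isize parameter) are illustrative
stand-ins, not the kernel's actual identifiers.

/* readahead_clamp_sketch.c -- illustrative only, not kernel code. */
#include <stdio.h>

#define SKETCH_PAGE_SHIFT 12		/* assume 4 KiB pages */

/*
 * Clamp nr_to_read to the last page of the file before the loop, as the
 * patch does, instead of bounds-checking on every iteration.
 */
static void readahead_sketch(long long isize, unsigned long index,
			     unsigned long nr_to_read)
{
	unsigned long end_index, i;

	if (isize == 0)
		return;

	/* Index of the page containing the last byte of the file. */
	end_index = (unsigned long)((isize - 1) >> SKETCH_PAGE_SHIFT);
	if (index > end_index)
		return;
	/* Don't read past the page containing the last byte of the file. */
	if (nr_to_read > end_index - index)
		nr_to_read = end_index - index + 1;

	/* No "if (index + i > end_index) break;" needed in here. */
	for (i = 0; i < nr_to_read; i++)
		printf("read page %lu\n", index + i);
}

int main(void)
{
	/*
	 * A 10000-byte file spans pages 0..2 with 4 KiB pages, so a
	 * 5-page request starting at index 1 is clamped to 2 pages.
	 */
	readahead_sketch(10000, 1, 5);	/* prints pages 1 and 2 */
	return 0;
}

The clamp preserves the old loop's behavior: with the 10000-byte file above,
end_index is 2, so the request for 5 pages at index 1 reads exactly pages 1
and 2, the same pages the removed in-loop break would have allowed.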