Date: Wed, 6 Oct 2021 12:20:54 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Hsin-Yi Wang
Cc: Andrew Morton, William Kucharski, Christoph Hellwig, linux-mm@kvack.org
Subject: Re: Readahead regressed with c1f6925e1091 ("mm: put readahead pages in cache earlier") on multicore arm64 platforms

On Wed, Oct 06, 2021 at 05:25:23PM +0800, Hsin-Yi Wang wrote:
> Hi Matthew,
>
> We tested that readahead performance regressed on multicore arm64
> platforms running the 5.10 kernel.
> - The platform we used: an 8-core (4x A53 (small), 4x A73 (big)) arm64 platform
> - The command we used: ureadahead $FILE ($FILE is a 1MB+ pack file;
>   note that if the file is small, the regression is not obvious)
>
> After we revert commit c1f6925e1091 ("mm: put readahead pages in
> cache earlier"), readahead performance comes back:
> - time ureadahead $FILE:
>   - 5.10: 1m23.124s
>   - with c1f6925e1091 reverted: 0m3.323s
>   - other LTS kernels (e.g. 5.4): 0m3.066s
>
> The slowest part is aops->readpage() in read_pages(), called from
> read_pages(ractl, &page_pool, false); (the 3rd call site in
> page_cache_ra_unbounded())

What filesystem are you using?

> static void read_pages(struct readahead_control *rac, struct list_head *pages,
>		bool skip_page)
> {
> ...
>	if (aops->readahead) {
> ...
>	} else if (aops->readpages) {
> ...
>	} else {
>		while ((page = readahead_page(rac))) {
>			aops->readpage(rac->file, page); // most of the time is spent on this line
>			put_page(page);
>		}
>	}
> ...
> }
>
> We also found the following metrics relevant:
> - time ureadahead $FILE:
>   - 5.10
>     - taskset ureadahead to a small core: 0m7.411s
>     - taskset ureadahead to a big core: 0m5.982s
>
> Compared to the original 1m23s, pinning the ureadahead task to a
> single core also closes the gap.
>
> Do you have any idea why moving pages into the cache earlier and then
> doing the page reads later causes such a difference?
>
> Thanks,
>
> Hsin-Yi
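
As an aside, below is a minimal user-space sketch of one way to time a
readahead request while pinned to a chosen CPU, roughly mirroring the
"time taskset ureadahead $FILE" runs above. It is an illustration, not
part of the report: it assumes the Linux-specific readahead(2) and
sched_setaffinity(2) calls, and the file path and CPU number are
placeholder command-line arguments.

/*
 * Illustrative only: time a readahead(2) request on a file while
 * pinned to one CPU, similar in spirit to "time taskset ureadahead $FILE".
 * Linux-specific; file and CPU number come from the command line.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <file> <cpu>\n", argv[0]);
		return 1;
	}

	/* Pin to a single core so big/little migration cannot skew the run. */
	cpu_set_t set;
	CPU_ZERO(&set);
	CPU_SET(atoi(argv[2]), &set);
	if (sched_setaffinity(0, sizeof(set), &set) < 0)
		perror("sched_setaffinity");

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	struct stat st;
	if (fstat(fd, &st) < 0) {
		perror("fstat");
		return 1;
	}

	struct timespec t0, t1;
	clock_gettime(CLOCK_MONOTONIC, &t0);

	/* Ask the kernel to pull the whole file into the page cache. */
	if (readahead(fd, 0, st.st_size) < 0)
		perror("readahead");

	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("readahead of %lld bytes took %.3f ms\n",
	       (long long)st.st_size,
	       (t1.tv_sec - t0.tv_sec) * 1e3 +
	       (t1.tv_nsec - t0.tv_nsec) / 1e6);

	close(fd);
	return 0;
}

Two caveats if using something like this: drop the page cache between
runs (e.g. echo 3 > /proc/sys/vm/drop_caches) so each run actually hits
storage, and note that readahead(2) may return before all of the queued
I/O completes, so the number mainly captures the synchronous work done
in the readahead submission path.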