Subject: Re: [PATCH][2.6-mm] Readahead issues and AIO read speedup
From: Ram Pai
To: Andrew Morton
Cc: Badari Pulavarty, slpratt@us.ibm.com, suparna@in.ibm.com,
    linux-kernel@vger.kernel.org, linux-aio@kvack.org
In-Reply-To: <20030807135819.3368ee16.akpm@osdl.org>
Date: 22 Sep 2003 17:41:07 -0700

On Thu, 2003-08-07 at 13:58, Andrew Morton wrote:
> Badari Pulavarty wrote:
> >
> > On Thursday 07 August 2003 10:39 am, Andrew Morton wrote:
> > > Badari Pulavarty wrote:
> > > > We should do readahead of actual pages required by the current
> > > > read would be correct solution. (like Suparna suggested).
> > >
> > > I repeat: what will be the effect of this if all those pages are
> > > already in pagecache?
> >
> > Hmm !! Do you think just peeking at pagecache and bailing out if
> > nothing needed to be done, is too expensive ? Anyway, slow read
> > code has to do this later. Doing it in readahead one more time causes
> > significant perf. hit ?
>
> It has been observed, yes.

We found substantial improvements (around 20%) in database
decision-support workloads on filesystems.

To address your concern about a possible SDET regression from this
patch, Steve Pratt ran a set of regression tests on 2.6.0-test5-mm2,
with and without the patch. The SDET and kernbench results are pasted
below; we did not find any noticeable performance regression.

Here are some results from Steve on test5-mm2:

**************************************************************************
sdet comparison of 2.6.0-test5-mm2 vs 2.6.0-test5-mm2-without-READAHEAD-patch
Results: Throughput

tolerance = 0.00 + 3.00% of 2.6.0-test5-mm2

             2.6.0-test5-mm2  2.6.0-test5-mm2-without-READAHEAD
   Threads        Ops/sec        Ops/sec    %diff         diff    tolerance
---------- ------------ ------------ -------- ------------ ------------
         1         3089         3103     0.45        14.00        92.67
         4        11181        11294     1.01       113.00       335.43
        16        18436        18530     0.51        94.00       553.08
        64        18867        19002     0.72       135.00       566.01
**************************************************************************

**************************************************************************
kernbench comparison of 2.6.0-test5-mm2 vs 2.6.0-test5-mm2-without-READAHEAD
Results: Elapsed Time

tolerance = 0.00 + 3.00% of 2.6.0-test5-mm2

             2.6.0-test5-mm2  2.6.0-test5-mm2-without-READAHEAD
                  Seconds        Seconds    %diff         diff    tolerance
---------- ------------ ------------ -------- ------------ ------------
         2       96.015       95.035    -1.02        -0.98         2.88
**************************************************************************

Would you like us to run some other tests?
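For concreteness, the "peek at pagecache and bail out" idea being
debated is roughly the following. This is a hypothetical sketch, not
the actual patch; readahead_window_cached() is an invented name, and
only find_get_page()/page_cache_release() are real kernel calls:

	#include <linux/mm.h>
	#include <linux/pagemap.h>

	/*
	 * Sketch only: probe the page cache for every page in
	 * [start, end].  Returns 1 if all pages are already resident,
	 * in which case readahead could bail out without queueing I/O.
	 */
	static int readahead_window_cached(struct address_space *mapping,
					   unsigned long start,
					   unsigned long end)
	{
		unsigned long idx;

		for (idx = start; idx <= end; idx++) {
			struct page *page = find_get_page(mapping, idx);

			if (!page)
				return 0;	/* a page is missing */
			/* drop the reference find_get_page() took */
			page_cache_release(page);
		}
		return 1;			/* fully cached: bail out */
	}

Note that the per-page lookup and refcounting in this loop is exactly
the overhead in question when the pages are all cached already.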
Thanks,
RP

> > > And also, do you think this is the most common case ?
>
> It is a very common case.  It's one we need to care for.  Especially when
> lots of CPUs are hitting the same file.
>
> There are things we can do to tweak it up, such as adding a max_index to
> find_get_pages(), then do multipage lookups, etc.  But not doing it at all
> is always the fastest way.
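[The batched-lookup direction mentioned above could look roughly like
the sketch below. count_cached_pages() and BATCH are hypothetical;
find_get_pages() is the real (unbounded) interface, and the point of a
max_index argument would be to move the "past the window" filtering
seen here into the radix-tree walk itself:]

	#include <linux/mm.h>
	#include <linux/pagemap.h>

	#define BATCH	16	/* arbitrary batch size for this sketch */

	/*
	 * Sketch only: count resident pages in [start, end] using one
	 * find_get_pages() call per batch instead of one lookup per page.
	 */
	static unsigned long count_cached_pages(struct address_space *mapping,
						unsigned long start,
						unsigned long end)
	{
		struct page *pages[BATCH];
		unsigned long cached = 0;
		unsigned long next = start;

		while (next <= end) {
			unsigned int i, nr;

			nr = find_get_pages(mapping, next, BATCH, pages);
			if (!nr)
				break;
			for (i = 0; i < nr; i++) {
				/* lookup is unbounded, so filter by hand */
				if (pages[i]->index <= end)
					cached++;
				page_cache_release(pages[i]);
			}
			next = pages[nr - 1]->index + 1;
		}
		return cached;
	}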