Date: Mon, 31 Oct 2016 10:42:09 -0700
From: Jaegeuk Kim
To: "Huang, Ying"
Cc: Fengguang Wu, LKP ML, huang ying, LKML, linux-f2fs-devel@lists.sourceforge.net
Subject: Re: [LKP] [lkp] [f2fs] ec795418c4: fsmark.files_per_sec -36.3% regression
Message-ID: <20161031174209.GA12400@jaegeuk>
In-Reply-To: <87bmy15hvy.fsf@yhuang-dev.intel.com>

On Mon, Oct 31, 2016 at 11:14:57AM +0800, Huang, Ying wrote:
> Hi, Kim,
>
> Jaegeuk Kim writes:
>
> > On Tue, Sep 27, 2016 at 08:50:02AM +0800, Huang, Ying wrote:
> >> Jaegeuk Kim writes:
> >>
> >> > On Mon, Sep 26, 2016 at 02:26:06PM +0800, Huang, Ying wrote:
> >> >> Hi, Jaegeuk,
> >> >>
> >> >> "Huang, Ying" writes:
> >> >>
> >> >> > Jaegeuk Kim writes:
> >> >> >
> >> >> >> Hello,
> >> >> >>
> >> >> >> On Sat, Aug 27, 2016 at 10:13:34AM +0800, Fengguang Wu wrote:
> >> >> >>> Hi Jaegeuk,
> >> >> >>>
> >> >> >>> > > >> > - [lkp] [f2fs] b93f771286: aim7.jobs-per-min -81.2% regression
> >> >> >>> > > >> >
> >> >> >>> > > >> > The disks are 4 12G ram disks, with RAID0 set up on them via
> >> >> >>> > > >> > mdadm. The steps for aim7 are:
> >> >> >>> > > >> >
> >> >> >>> > > >> > cat > workfile <<EOF
> >> >> >>> > > >> > FILESIZE: 1M
> >> >> >>> > > >> > POOLSIZE: 10M
> >> >> >>> > > >> > 10 sync_disk_rw
> >> >> >>> > > >> > EOF
> >> >> >>> > > >> >
> >> >> >>> > > >> > (
> >> >> >>> > > >> > echo $HOSTNAME
> >> >> >>> > > >> > echo sync_disk_rw
> >> >> >>> > > >> >
> >> >> >>> > > >> > echo 1
> >> >> >>> > > >> > echo 600
> >> >> >>> > > >> > echo 2
> >> >> >>> > > >> > echo 600
> >> >> >>> > > >> > echo 1
> >> >> >>> > > >> > ) | ./multitask -t &
> >> >> >>> > > >>
> >> >> >>> > > >> Any update on these 2 regressions? Is the information enough for
> >> >> >>> > > >> you to reproduce them?
> >> >> >>> > > >
> >> >> >>> > > > Sorry, I've had no time to dig into this due to business travel.
> >> >> >>> > > > I'll check it when I'm back in the US.
> >> >> >>> > >
> >> >> >>> > > Any update?
> >> >> >>> >
> >> >> >>> > Sorry, how can I get the multitask binary?
> >> >> >>>
> >> >> >>> It's part of aim7, which can be downloaded here:
> >> >> >>>
> >> >> >>> http://nchc.dl.sourceforge.net/project/aimbench/aim-suite7/Initial%20release/s7110.tar.Z
> >> >> >>
> >> >> >> Thank you for the code.
> >> >> >>
> >> >> >> I've run this workload on the latest f2fs and compared performance with
> >> >> >> and without the reported patch (1TB NVMe SSD, 16 cores, 16GB DRAM).
> >> >> >> Interestingly, I found a slight performance improvement rather than a
> >> >> >> regression. :(
> >> >> >> Not sure how to reproduce this.
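For reference, a minimal sketch of the reported ram-disk setup, assuming
the brd module and mdadm are available; the md device name and mount
point below are illustrative, not taken from the report:

    # Create four 12G ram disks (brd's rd_size is in KiB).
    modprobe brd rd_nr=4 rd_size=$((12 * 1024 * 1024))

    # Assemble them into a single RAID0 array.
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
          /dev/ram0 /dev/ram1 /dev/ram2 /dev/ram3

    # Format with f2fs and mount; the aim7 workload above would then
    # be run from the mount point.
    mkfs.f2fs /dev/md0
    mount -t f2fs /dev/md0 /mnt/f2fs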
> >> >> >
> >> >> > I think the difference lies in the disk used. The ramdisk is used in
> >> >> > the original test, but it appears that your memory is too small to set
> >> >> > up the RAM disk for the test. So it may be impossible for you to
> >> >> > reproduce the test unless you can find more memory :)
> >> >> >
> >> >> > But we can help you to root-cause the issue. What additional data do
> >> >> > you want? perf-profile data before and after the patch?
> >> >>
> >> >> Any update on this regression?
> >> >
> >> > Sorry, no. But meanwhile, I've purchased more DRAM. :)
> >> > Now I have 128GB of DRAM. I can configure 64GB as pmem.
> >> > Is it worth trying the test again?
> >>
> >> I think you are the decision maker for this. You can judge whether the
> >> test is reasonable, and we can adjust our test accordingly.
> >>
> >> BTW: For this test, we use brd ram disks and RAID.
> >
> > Okay, let me try this again.
>
> Any update on this?

Still in my to-do list. I'll let you know if I can get some info.

Thanks,

>
> Best Regards,
> Huang, Ying
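For anyone reproducing the pmem variant mentioned above, a sketch of how
64GB of DRAM can be set aside as emulated pmem via the memmap kernel
parameter; the 64G physical offset is illustrative and must fall in a
RAM-backed range on the test machine:

    # Boot with this kernel parameter to reserve a 64G region starting
    # at physical address 64G as emulated persistent memory:
    #   memmap=64G!64G
    # After reboot the region appears as /dev/pmem0, which can be
    # formatted and mounted like the ram-disk setup above:
    mkfs.f2fs /dev/pmem0
    mount -t f2fs /dev/pmem0 /mnt/f2fs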