From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752359AbcHNX5z (ORCPT );
	Sun, 14 Aug 2016 19:57:55 -0400
Received: from mga01.intel.com ([192.55.52.88]:29064 "EHLO mga01.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751932AbcHNX5y (ORCPT );
	Sun, 14 Aug 2016 19:57:54 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.28,522,1464678000"; d="scan'208";a="748704446"
Date: Mon, 15 Aug 2016 07:57:49 +0800
From: Fengguang Wu
To: Christoph Hellwig
Cc: Dave Chinner , Ye Xiaolong , Linus Torvalds , LKML ,
	Bob Peterson , LKP
Subject: Re: [LKP] [lkp] [xfs] 68a9f5e700: aim7.jobs-per-min -13.6% regression
Message-ID: <20160814235749.GA28940@wfg-t540p.sh.intel.com>
References: <20160812062934.GA17589@yexl-desktop>
	<20160812085124.GB19354@yexl-desktop>
	<20160812100208.GA16044@dastard>
	<20160813003054.GA3101@lst.de>
	<20160813214825.GA31667@lst.de>
	<20160813220727.GA4901@wfg-t540p.sh.intel.com>
	<20160813221507.GA1368@lst.de>
	<20160813225128.GA6416@wfg-t540p.sh.intel.com>
	<20160814145053.GA17428@wfg-t540p.sh.intel.com>
	<20160814161724.GA20274@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Disposition: inline
In-Reply-To: <20160814161724.GA20274@lst.de>
User-Agent: Mutt/1.6.0 (2016-04-01)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Christoph,

On Sun, Aug 14, 2016 at 06:17:24PM +0200, Christoph Hellwig wrote:
>Snipping the long context:
>
>I think there are three observations here:
>
> (1) removing the mark_page_accessed (which is the only significant
>     change in the parent commit) hurts the
>     aim7/1BRD_48G-xfs-disk_rr-3000-performance/ivb44 test.
>     I'd still rather stick to the filemap version and let the
>     VM people sort it out.  How do the numbers for this test
>     look for XFS vs say ext4 and btrfs?

We'll be able to compare between filesystems when the tests for Linus'
patch finish.
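For reference, such a comparison could be queued with the same lkp-style `queue` helper used for the jobs later in this mail. This is only a sketch, not an actual submission: it prints the candidate invocations for review instead of running them, and the `fs=ext4`/`fs=btrfs` parameters are assumptions mirroring the `fs=xfs` parameter used in this thread.

```shell
# Hypothetical sketch: generate per-filesystem queue commands for the
# aim7 job so XFS can be compared against ext4 and btrfs.  The commands
# are echoed rather than executed, so they can be inspected first.
for fs in xfs ext4 btrfs; do
        echo "queue -q vip --repeat-to 3 fs=$fs perf-profile.delay=1" \
             "-b hch-vfs/iomap-fixes -t ivb44 aim7-fs-1brd.yaml"
done
```

Piping the output to `sh` (or replacing `echo` with the real `queue` binary) would then submit one job per filesystem.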
> (2) lots of additional spinlock contention in the new case.  A quick
>     check shows that I fat-fingered my rewrite so that we do
>     the xfs_inode_set_eofblocks_tag call now for the pure lookup
>     case, and pretty much all new cycles come from that.
>
> (3) Boy, are those xfs_inode_set_eofblocks_tag calls expensive, and
>     we're already doing way too many even without my little bug above.
>
>So I've force pushed a new version of the iomap-fixes branch with
>(2) fixed, and also a little patch to make xfs_inode_set_eofblocks_tag
>a lot less expensive slotted in before that.  Would be good to see
>the numbers with that.

I just queued these jobs. The commented-out ones will be submitted as
the 2nd stage when the 1st-round quick tests finish.

        queue=(
                queue -q vip --repeat-to 3
                fs=xfs
                perf-profile.delay=1
                -b hch-vfs/iomap-fixes
                -k bf4dc6e4ecc2a3d042029319bc8cd4204c185610
                -k 74a242ad94d13436a1644c0b4586700e39871491
                -k 99091700659f4df965e138b38b4fa26a29b7eade
        )

        "${queue[@]}" -t ivb44       aim7-fs-1brd.yaml
        "${queue[@]}" -t ivb44       fsmark-generic-1brd.yaml
        "${queue[@]}" -t ivb43       fsmark-stress-journal-1brd.yaml
        "${queue[@]}" -t lkp-hsx02   fsmark-generic-brd-raid.yaml
        "${queue[@]}" -t lkp-hsw-ep4 fsmark-1ssd-nvme-small.yaml

        #"${queue[@]}" -t ivb43      fsmark-stress-journal-1hdd.yaml
        #"${queue[@]}" -t ivb44      dd-write-1hdd.yaml fsmark-generic-1hdd.yaml

Thanks,
Fengguang