From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752963Ab2GAXzF (ORCPT );
	Sun, 1 Jul 2012 19:55:05 -0400
Received: from ipmail06.adl2.internode.on.net ([150.101.137.129]:41891 "EHLO
	ipmail06.adl2.internode.on.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752019Ab2GAXzE (ORCPT );
	Sun, 1 Jul 2012 19:55:04 -0400
X-IronPort-Anti-Spam-Filtered: true
X-IronPort-Anti-Spam-Result: Ak4JABji8E95LDcx/2dsb2JhbABFtSsEgSmBCIIYAQEEATocFg0FCwgDDgouFCUDIROIBgS7CxSLJxQHKIVXA5UzkASCcYFE
Date: Mon, 2 Jul 2012 09:54:58 +1000
From: Dave Chinner
To: Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, xfs@oss.sgi.com
Subject: Re: [MMTests] IO metadata on XFS
Message-ID: <20120701235458.GM19223@dastard>
References: <20120620113252.GE4011@suse.de>
	<20120629111932.GA14154@suse.de>
	<20120629112505.GF14154@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20120629112505.GF14154@suse.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Jun 29, 2012 at 12:25:06PM +0100, Mel Gorman wrote:
> Configuration: global-dhp__io-metadata-xfs
> Benchmarks: dbench3, fsmark-single, fsmark-threaded
>
> Summary
> =======
> Most of the figures look good and in general there has been consistent good
> performance from XFS. However, fsmark-single is showing a severe performance
> dip in a few cases somewhere between 3.1 and 3.4. fs-mark running a single
> thread took a particularly bad dive in 3.4 for two machines that is worth
> examining closer.

That will be caused by the fact we changed all the metadata updates to
be logged, which means a transaction every time .dirty_inode is
called.
This should mostly go away when XFS is converted to use .update_time
rather than .dirty_inode to only issue transactions when the VFS
updates the atime rather than every .dirty_inode call...

> Unfortunately it is harder to easy conclusions as the
> gains/losses are not consistent between machines which may be related to
> the available number of CPU threads.

It increases the CPU overhead (dirty_inode can be called up to 4 times
per write(2) call, IIRC), so with limited numbers of threads/limited
CPU power it will result in lower performance. Where you have lots of
CPU power, there will be little difference in performance...

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com