From: Ric Wheeler
Subject: Re: large fs testing
Date: Tue, 26 May 2009 13:47:44 -0400
Message-ID: <4A1C2B40.30102@redhat.com>
In-Reply-To: <5971.1243359565@gamaville.dokosmarshall.org>
References: <4A17FFD8.80401@redhat.com> <5971.1243359565@gamaville.dokosmarshall.org>
To: nicholas.dokos@hp.com
Cc: linux-fsdevel@vger.kernel.org, Christoph Hellwig, Douglas Shakshober, Joshua Giles, Valerie Aurora, Eric Sandeen, Steven Whitehouse, Edward Shishkin, Josef Bacik, Jeff Moyer, Chris Mason, "Whitney, Eric", Theodore Tso
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 05/26/2009 01:39 PM, Nick Dokos wrote:
>> (3) FS creation time - can you create a file system in reasonable
>> time? (mkfs.xfs took seconds, mkfs.ext4 took 90 minutes). I think that
>> 90 minutes is definitely on the painful side, but usable for most.
>>
>
> I get better numbers for some reason: on a 32 TiB filesystem (16 LUNs,
> 2TiB each, 128KiB stripes at both the RAID controller and in LVM), using
> the following options, I get:
>
> # time mke2fs -q -t ext4 -O ^resize_inode -E stride=32,stripe-width=512,lazy_itable_init=1 /dev/mapper/bigvg-bigvol
>
> real    1m2.137s
> user    0m58.934s
> sys     0m1.981s
>
>
> Without lazy_itable_init, I get
>
> # time mke2fs -q -t ext4 -O ^resize_inode -E stride=32,stripe-width=512 /dev/mapper/bigvg-bigvol
>
> real    12m54.510s
> user    1m4.786s
> sys     11m44.762s
>
> Thanks,
> Nick

Hi Nick,

These runs were without lazy init, so I would expect them to be a little more than twice as slow as your second run (not the three times slower that I saw), assuming that it scales linearly. This run was with limited DRAM on the box (6GB) and only a single HBA, but I am afraid that I did not get any good insight into what the bottleneck was during my runs.

Also, I am pretty certain that most arrays do better with more, smaller LUNs (like you had) than with fewer, larger ones.

Do you have access to even larger storage, say the mythical 100TB :-) ? Any insight on interesting workloads?

Thanks!

Ric