From: Ric Wheeler <rwheeler@redhat.com>
To: linux-fsdevel@vger.kernel.org
Cc: Christoph Hellwig <hch@infradead.org>,
	Douglas Shakshober <dshaks@redhat.com>,
	Joshua Giles <jgiles@redhat.com>,
	Valerie Aurora <vaurora@redhat.com>,
	Eric Sandeen <esandeen@redhat.com>,
	Steven Whitehouse <swhiteho@redhat.com>,
	Edward Shishkin <edward@redhat.com>,
	Josef Bacik <jbacik@redhat.com>, Jeff Moyer <jmoyer@redhat.com>,
	Chris Mason <chris.mason@oracle.com>,
	"Whitney, Eric" <eric.whitney@hp.com>,
	Theodore Tso <tytso@mit.edu>
Subject: large fs testing
Date: Sat, 23 May 2009 09:53:28 -0400
Message-ID: <4A17FFD8.80401@redhat.com>

Jeff Moyer and I have been working with the EMC elab over the last week or so, 
testing ext4, xfs and gfs2 at roughly 80TB striped across a set of 12TB LUNs 
(single server, 6GB of DRAM, two quad-core HT-enabled CPUs).

The goals of the testing are (in decreasing priority) to validate Val's 64-bit 
patches for ext4's e2fsprogs, to do a very quick sanity check that XFS does indeed 
scale as well as I hear (and it has so far :-)), and to test the gfs2 tools at that 
high capacity. There was not enough time to get it all done, and significant 
fumbling on my part made it go even slower.

Nevertheless, I have come to a rough idea of what a useful benchmark would be. 
If this sounds sane to all, I would like to try to put something together that 
we could provide to places like EMC, who occasionally have large storage on hand, 
are not kernel hackers, but would be willing to test for us. It will need to be 
fairly bulletproof and, I assume, avoid producing performance numbers for the 
storage on normal workloads (to avoid leaking competitive benchmarks).

Motivation - all things being equal, users benefit from having all of their storage 
consumed by one massive file system, since that single file system manages space 
allocation, avoids seekiness, etc. (something that applications have to do 
manually when using sets of smaller file systems, the current state of the art for 
ext3, for example).

The challenges are:

(1) object count - how many files can you pack into that file system with 
reasonable performance? (The test to date filled the single ext4 fs with 207 
million 20KB files)

(2) files per directory - how many files can each directory hold with reasonable 
performance?

(3) FS creation time - can you create a file system in reasonable time? 
(mkfs.xfs took seconds, mkfs.ext4 took 90 minutes). I think that 90 minutes is 
definitely on the painful side, but usable for most.

(4) FS check time at a given fill rate for a healthy device (no IO errors). 
Testing at empty, 25%, 50%, 75%, 95% and full would all be interesting. Can 
you run these checks with a reasonable amount of DRAM? If not, what guidance do 
we need to give to customers on how big their servers need to be?

It would seem to be a nice goal to be able to fsck a file system in one working 
day - say 8 hours - so that you could get a customer back on their feet, but 
maybe 24 hours would be an outside goal?

(5) Write rate as the fs fills (picking the same set of fill rates?). A rough 
sketch of how the timings in (3)-(5) could be scripted follows below.
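
To make (3)-(5) concrete, here is a minimal sketch of such a timing harness, 
assuming a scratch device that can be destroyed and using ext4 as the example. 
The device path, mount point, fill levels and the write_files() helper are all 
made up for illustration; a real harness would swap in fs_mark or similar and 
handle the other file systems too. It is Python only because that is easy to 
read - this is not a proposal for the actual tool:

#!/usr/bin/env python
# Illustrative timing harness sketch for items (3)-(5).
# Assumptions: DEVICE is a scratch LUN that may be destroyed, MOUNTPOINT
# exists, e2fsprogs is installed, and this runs as root.

import os
import subprocess
import time

DEVICE = "/dev/sdX"            # hypothetical scratch device
MOUNTPOINT = "/mnt/bigfs"      # hypothetical mount point
FILL_LEVELS = [25, 50, 75, 95] # percent-full checkpoints from item (4)

def timed(cmd):
    """Run a command and return its wall-clock time in seconds."""
    start = time.time()
    subprocess.check_call(cmd)
    return time.time() - start

def fill_percent(path):
    """Percentage of blocks in use on the fs containing 'path'."""
    st = os.statvfs(path)
    return 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks

def write_files(path, until_percent, file_size=20 * 1024):
    """Write 20KB files until the fill target; a stand-in for fs_mark."""
    written, i = 0, 0
    while fill_percent(path) < until_percent:
        name = os.path.join(path, "f_%d_%09d" % (until_percent, i))
        with open(name, "wb") as f:
            f.write(b"\0" * file_size)
        written += file_size
        i += 1
    return written

# (3) file system creation time
print("mkfs took %.1f seconds" % timed(["mkfs.ext4", "-F", DEVICE]))
subprocess.check_call(["mount", DEVICE, MOUNTPOINT])

for target in FILL_LEVELS:
    # (5) write rate while filling to the next checkpoint
    start = time.time()
    nbytes = write_files(MOUNTPOINT, target)
    print("fill to %d%%: %.1f MB/s" % (target, nbytes / (time.time() - start) / 1e6))

    # (4) check time at this fill level (-f forces a full check, -n is read-only)
    subprocess.check_call(["umount", MOUNTPOINT])
    print("fsck at %d%% took %.1f seconds"
          % (target, timed(["fsck.ext4", "-f", "-n", DEVICE])))
    subprocess.check_call(["mount", DEVICE, MOUNTPOINT])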

To make this a somewhat tractable problem, I wanted to define small (20KB), medium 
(MP3-sized, say 4MB) and large (video-sized, 4GB?) files to do the test with. I 
used fs_mark (no fsyncs and 256 directories) to fill the file system (at least 
until my patience/time ran out!). With these options, it still hits very high 
file/directory counts. I am thinking about tweaking fs_mark to dynamically 
create a time-based directory scheme, something like day/hour/min, and giving it 
an option to stop at a specified fill rate - a toy sketch of that idea is below.
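
The fragment below is only meant to show the day/hour/min layout and the 
stop-at-fill-rate rule; fs_mark itself is written in C, so this is not how the 
tweak would actually be coded, and the mount point and 95% target are made up:

#!/usr/bin/env python
# Toy illustration of a day/hour/min directory scheme with a
# stop-at-fill-rate option (the fs_mark tweak floated above).

import os
import time

def time_based_dir(root):
    """Return (creating if needed) a day/hour/min directory for 'now'."""
    t = time.localtime()
    d = os.path.join(root, "day%03d" % t.tm_yday,
                     "hour%02d" % t.tm_hour,
                     "min%02d" % t.tm_min)
    if not os.path.isdir(d):
        os.makedirs(d)
    return d

def fill_percent(path):
    st = os.statvfs(path)
    return 100.0 * (st.f_blocks - st.f_bfree) / st.f_blocks

def fill(root, stop_at_percent, file_size=20 * 1024):
    """Create 20KB files under time-based directories until the target."""
    i = 0
    while fill_percent(root) < stop_at_percent:
        name = os.path.join(time_based_dir(root), "file%09d" % i)
        with open(name, "wb") as f:
            f.write(b"\0" * file_size)
        i += 1
    return i

if __name__ == "__main__":
    # hypothetical mount point and fill target
    print("created %d files" % fill("/mnt/bigfs", 95))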

Sorry for the long ramble; I was curious to see whether this makes sense to the 
broader set of you all and whether you have any similar experiences to share.

Thanks!

Ric