* A few questions before assembling Linux 7.5TB RAID 5 array
@ 2006-12-21 18:49 Yeechang Lee
2006-12-30 22:31 ` Bill Davidsen
0 siblings, 1 reply; 3+ messages in thread
From: Yeechang Lee @ 2006-12-21 18:49 UTC (permalink / raw)
To: linux-raid
[Also posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage,alt.comp.hardware.pc-homebuilt,comp.os.linux.hardware.]
I'm shortly going to be setting up a Linux software RAID 5 array using
16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
controller (i.e., the controller will be used for its 16 SATA ports,
not its "hardware" fakeraid). The array will be used to store and
serve locally and via gigabit Ethernet large, mostly high-definition
video recordings (up to six or eight files being written to and/or
read from simultaneously, as I envision it). The smallest files will
be 175MB-700MB, the largest will be 25GB+, and most files will be from
4GB to 12GB with a median of about 7.5GB. I plan on using JFS as the
filesystem, without LVM.
A few performance-related questions:
* What chunk size should I use? In previous RAID 5 arrays I've built
for similar purposes I've used 512K. For the setup I'm describing,
should I go bigger? Smaller?
* Should I stick with the default of 0.4% of the array as given over
to the JFS journal? If I can safely go smaller without a
rebuilding-performance penalty, I'd like to. Conversely, if a larger
journal is recommended, I can do that.
* I'm wondering whether I should have ordered two RocketRAID 2220
(each with eight SATA ports) instead of the 2240. Would two cards,
each in a PCI-X slot, perform better? I'll be using the Supermicro
X7DVL-E
(<URL:http://www.supermicro.com/products/motherboard/Xeon1333/5000V/X7DVL-E.cfm>)
as the motherboard.
--
<URL:http://www.pobox.com/~ylee/> PERTH ----> *
Homemade 2.8TB RAID 5 storage array:
<URL:http://groups.google.ca/groups?selm=slrnd1g04a.5mt.ylee%40pobox.com>
* Re: A few questions before assembling Linux 7.5TB RAID 5 array
2006-12-21 18:49 A few questions before assembling Linux 7.5TB RAID 5 array Yeechang Lee
@ 2006-12-30 22:31 ` Bill Davidsen
2006-12-30 22:55 ` Gordon Henderson
0 siblings, 1 reply; 3+ messages in thread
From: Bill Davidsen @ 2006-12-30 22:31 UTC (permalink / raw)
To: Yeechang Lee; +Cc: linux-raid
Yeechang Lee wrote:
> [Also posted to comp.sys.ibm.pc.hardware.storage,comp.arch.storage,alt.comp.hardware.pc-homebuilt,comp.os.linux.hardware.]
>
> I'm shortly going to be setting up a Linux software RAID 5 array using
> 16 500GB SATA drives with one HighPoint RocketRAID 2240 PCI-X
> controller (i.e., the controller will be used for its 16 SATA ports,
> not its "hardware" fakeraid). The array will be used to store and
> serve locally and via gigabit Ethernet large, mostly high-definition
> video recordings (up to six or eight files being written to and/or
> read from simultaneously, as I envision it). The smallest files will
> be 175MB-700MB, the largest will be 25GB+, and most files will be from
> 4GB to 12GB with a median of about 7.5GB. I plan on using JFS as the
> filesystem, without LVM.
>
> A few performance-related questions:
>
> * What chunk size should I use? In previous RAID 5 arrays I've built
> for similar purposes I've used 512K. For the setup I'm describing,
> should I go bigger? Smaller?
>
I am doing some tests on this right now (this weekend), because I don't
have an answer. If I get data I trust, I'll share it. See the previous
thread on poor RAID-5 performance; use a BIG stripe buffer and/or wait
for a better answer on chunk size.
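For concreteness, a sketch of what the geometry works out to and how such an array might be assembled; the device names /dev/sd[b-q] and the stripe-cache value are assumptions, and the chunk size is the 512K already mentioned:

```shell
# A full RAID-5 stripe spans (N-1) data chunks: with 16 drives and a
# 512 KiB chunk, one stripe holds 15 data chunks.
CHUNK_KIB=512
DRIVES=16
echo "full stripe: $(( CHUNK_KIB * (DRIVES - 1) )) KiB"   # 7680 KiB, i.e. 7.5 MiB
# Hypothetical device names; mdadm's --chunk is given in KiB:
#   mdadm --create /dev/md0 --level=5 --raid-devices=16 --chunk=512 /dev/sd[b-q]
# Enlarging the md stripe cache (pages per device) is the "BIG stripe
# buffer" suggestion above:
#   echo 8192 > /sys/block/md0/md/stripe_cache_size
```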
> * Should I stick with the default of 0.4% of the array as given over
> to the JFS journal? If I can safely go smaller without a
> rebuilding-performance penalty, I'd like to. Conversely, if a larger
> journal is recommended, I can do that.
>
I do know something about that, having run AIX for a long time. If you
have a high rate of metadata events, such as file creates or deletes, a
large journal is a must; I had one on another array with a small stripe
size to spread the head motion, since otherwise the log drive became a
bottleneck. If you are going to write a lot of data to this array, mount
it "noatime" to avoid beating the journal and slowing your access.
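A sketch of the corresponding commands, assuming the array shows up as /dev/md0 and mounts at /srv/video (both hypothetical names); mkfs.jfs's -s flag sets the journal size in megabytes, overriding the 0.4%-of-array default:

```shell
# Set an explicit 128 MB journal instead of the default size:
mkfs.jfs -q -s 128 /dev/md0
# Mount without access-time updates to spare the journal:
mount -o noatime /dev/md0 /srv/video
```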
Be sure you tune your readahead on each drive after looking at the
actual load data. Think "more is better" but "too much is worse," on that.
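As a sketch (the member drive /dev/sdb and the values are assumptions), readahead is inspected and set per block device with blockdev, in units of 512-byte sectors:

```shell
blockdev --getra /dev/sdb           # show current readahead, in 512-byte sectors
blockdev --setra 4096 /dev/sdb      # 2 MiB readahead on one member drive
blockdev --setra 16384 /dev/md0     # a larger value on the array itself
```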
> * I'm wondering whether I should have ordered two RocketRAID 2220
> (each with eight SATA ports) instead of the 2240. Would two cards,
> each in a PCI-X slot, perform better? I'll be using the Supermicro
> X7DVL-E
> (<URL:http://www.supermicro.com/products/motherboard/Xeon1333/5000V/X7DVL-E.cfm>)
> as the motherboard.
>
>
My guess is that unless your m/b has dual PCI buses (it might), and you
have 2- and 4-way memory interleave (my Supermicro boards did the last
time I used one), you are going to be able to swamp the bus and/or
memory with a single controller.
Now, in terms of "perform better," I'm not sure you would be able to
measure it, and unless you have some $tate of the art network, you will
run out of bandwidth to the outside world long before you run out of
disk performance.
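The back-of-envelope numbers behind that claim can be sketched as follows; the per-drive streaming rate is an assumed 2006-era figure, and the bus and network numbers are theoretical peaks:

```shell
DRIVES=16
DRIVE_MBS=60                        # assumed ~60 MB/s sustained per SATA drive
echo "aggregate disk: $(( DRIVES * DRIVE_MBS )) MB/s"   # 960 MB/s
echo "PCI-X 64-bit/133 MHz bus: ~1066 MB/s peak"
echo "gigabit Ethernet: ~125 MB/s peak"
# One controller already approaches a single PCI-X bus's peak, and the
# network saturates long before the disks do.
```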
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: A few questions before assembling Linux 7.5TB RAID 5 array
2006-12-30 22:31 ` Bill Davidsen
@ 2006-12-30 22:55 ` Gordon Henderson
0 siblings, 0 replies; 3+ messages in thread
From: Gordon Henderson @ 2006-12-30 22:55 UTC (permalink / raw)
To: Yeechang Lee; +Cc: linux-raid
Yeechang Lee wrote:
> [Also posted to
> comp.sys.ibm.pc.hardware.storage,comp.arch.storage,alt.comp.hardware.pc-homebuilt,comp.os.linux.hardware.]
>
> I'm shortly going to be setting up a Linux software RAID 5 array using
> 16 500GB SATA drives [...]
I'm of the opinion that more drives means more chance of failure, but
maybe it's just me. I got bitten by a two-drive failure once in an
eight-drive RAID-5 set a couple of years ago. Fortunately, with the aid
of mdadm, etc., and having direct access to the drives rather than
having them hidden away behind some hardware device, I was able to
recover the data that time, and now SMART is getting cleverer ... However ...
Did you consider RAID-6?
I've been using it for some time now (over a year?). Maybe drives are
becoming more reliable, though - I haven't lost one in the past year!
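As a sketch, the RAID-6 alternative on the same 16 drives (device names hypothetical) and what it costs in capacity:

```shell
DRIVES=16
SIZE_GB=500
echo "RAID-5 usable: $(( (DRIVES - 1) * SIZE_GB )) GB"   # 7500 GB
echo "RAID-6 usable: $(( (DRIVES - 2) * SIZE_GB )) GB"   # 7000 GB
# Two drives' worth of parity, so any two simultaneous failures survive:
#   mdadm --create /dev/md0 --level=6 --raid-devices=16 /dev/sd[b-q]
```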
Gordon