* RAID newbie, 1 vs 5, chunk sizes
@ 2014-06-14 21:01 Nuno Magalhães
  2014-06-14 21:56 ` Stan Hoeppner
  2014-06-16  2:23 ` Phil Turmel
  0 siblings, 2 replies; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-14 21:01 UTC (permalink / raw)
  To: linux-raid

Hi,

Sorry if this is the wrong list. Most of the stuff out there seems
outdated, and i have some specific doubts.

I'm trying to decide how to best create a RAID array and what
configuration to use. This is a desktop system, nothing
mission-critical, but i'd like it to be reasonably tailored to the
hardware and intended usage, and have questions about default values.

Hardware:
    1x Toshiba DT01ACA100 - 32MB buffer, 931.0 GiB (1 TB; 1000204886016 bytes)
    2x Seagate ST1000DM003 - 64MB buffer, 931.0 GiB (1 TB; 1000204886016 bytes)
and
    1x Maxtor 6G160E0 - 8MB buffer, 149.1 GiB (160 GB; 160041885696 bytes)

The 3 1TB disks all show similar data using
# fdisk -l /dev/sda

    Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes

The Maxtor is different, of course. The system has 8GiB RAM and a
Phenom II quadcore, running Debian Wheezy (stable) at the moment. The
motherboard's an Asus M2nPV-VM. On top of the software RAID i intend
to put LVM and then XFS for the / partition. Then i'll want to use
Xen. One of the VMs will be my regular desktop, another may be a
Windows HVM, another for LAN streaming, other(s) for trying out
distros...

My first question is: does the disk cache size *significantly* affect
performance? Are there any caveats? I assume not (i've already used
another Seagate with maybe 32MB of cache and the Maxtor in a RAID1
159GB array) but i ask because i'm trying to decide between the 2
Seagates in RAID1 + the Toshiba *or* all 3 in RAID5.
I don't intend to put the system under heavy load, but that may be
relative if i want to run 2-3 VMs in Xen and doing some streaming or
running a webserver or what not. It seems to me the overhead RAID5 has
on a system is either irrelevant for this kind of light usage or not
so much an issue with modern hardware (although the motherboard's the
bottleneck here).

Secondly, the chunks. How do they relate to sectors and why are they
different in fdisk output? From [1] the chunk size should be at least
4KiB, so 4096 bytes seems to match sector size. Am i making the right
assumptions? Or the wrong ones even if the conclusion may be correct?
Chunk size is irrelevant to RAID1, so i assume the 4 KiB value would
apply; but for RAID5 128 KiB are suggested[2], which seems a big
difference. Is there a formula for this? A rule of thumb?
If i go for RAID5 i'd have to consider stripes, which would be 3*4KiB
in size? Or should i use only 2*4KiB, as the 3rd chunk is for parity?
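
If i understand the arithmetic right, it would be something like this
(a sketch; the 128 KiB chunk from [2] and my 3 disks are just example
values):

```shell
# RAID5 stripe arithmetic, as i understand it (128 KiB chunk from [2],
# 3 disks from my setup; both are just example values)
chunk_kib=128                           # per-disk chunk size in KiB
ndisks=3                                # disks in the RAID5 array
data_disks=$((ndisks - 1))              # one chunk per stripe holds parity
stripe_kib=$((chunk_kib * data_disks))  # data payload per full stripe
echo "stripe width: ${stripe_kib} KiB"
```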

The partition type will depend on the RAID type. I've used whole disks
for RAID1, but i'd probably use partitions slightly smaller if i go
for RAID5. I'm assuming binary units are the way to go here,
regardless of vendor usage (and other issues). I'll just have to get
used to the 931 GiB figure.

Sorry if this is a big message or misdirected, but i'd really like
some experienced suggestions before i go re/installing stuff and
creating VMs.

Thanks,
Nuno

[1] https://raid.wiki.kernel.org/index.php/Chunk_size
[2] https://raid.wiki.kernel.org/index.php/RAID_setup#RAID-5


-- 
"On the internet, nobody knows you're a dog."

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-14 21:01 RAID newbie, 1 vs 5, chunk sizes Nuno Magalhães
@ 2014-06-14 21:56 ` Stan Hoeppner
  2014-06-15 11:50   ` Nuno Magalhães
  2014-06-16  2:23 ` Phil Turmel
  1 sibling, 1 reply; 23+ messages in thread
From: Stan Hoeppner @ 2014-06-14 21:56 UTC (permalink / raw)
  To: Nuno Magalhães, linux-raid

On 6/14/2014 4:01 PM, Nuno Magalhães wrote:
...
> This is a desktop system, nothing
> mission-critical, but i'd like it to be reasonably tailored to the
> hardware and intended usage, and have questions about default values.

[snip]

Everything begins and ends with the workload.  The proper chunk and
stripe sizes are dictated by the write patterns of your workload, i.e.
the mix of applications running on the system, not the type/size of
disks you have.

If you're writing lots of small files then you'll typically want a small
chunk size.  If you use a large chunk size with small files you'll get a
hot spot on one or more drives in the array due to filesystem alignment.
 Hot spot means one disk sees lots of activity while the other(s) sit
idle.  This defeats the purpose of striping.

If you're working predominantly with large media files you'll want a
large chunk/stripe for data transfer efficiency from the platters with
the fewest seeks.  If you have a mixed workload with small and large
files, it's best to use a small chunk size, such as 32 or 64KB.


All of that stated, your filesystem setup, specifically write barrier
support and journal mode, along with those slow disk spindles, will be a
much more significant impediment to performance than your chunk size.

Cheers,

Stan
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-14 21:56 ` Stan Hoeppner
@ 2014-06-15 11:50   ` Nuno Magalhães
       [not found]     ` <pmrj5x4rq3qbl6unnu7guho9.1402849407134@email.android.com>
  0 siblings, 1 reply; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-15 11:50 UTC (permalink / raw)
  To: linux-raid

Hi,

Thanks for your reply. The more i search the more i realize this is
*very* subjective.

On Sat, Jun 14, 2014 at 10:56 PM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>
> If you're working predominantly with large media files you'll want a
> large chunk/stripe for data transfer efficiency from the platters with
> the fewest seeks.  If you have a mixed workload with small and large
> files, it's best to use a small chunk size, such as 32 or 64KB.

The workload will vary: i have some ~240 GiB of media (ISO files,
movies, photos, music - in increasing order of usage) that i intend to
share on the LAN via SMB/CIFS through a Xen VM. This would be the
biggest chunk of data. Then i have about 50 GiB of personal stuff,
mostly pdf, txt, docx and similar.
The remainder will be used for Xen VMs to work as servers (a web
front-end for that samba share, for instance) and a Windows VM or two,
MySQL databases (nothing big), etc.

In the end i guess it fits the mixed-workload scenario. With all the
subjectivity, can i change the chunk size once the array is created?
Will that "rechunk" existing data - is it feasible - or will it just
use the new value for new data?

> All of that stated, your filesystem setup, specifically write barrier
> support and journal mode, along with those slow disk spindles, will be a
> much more significant impediment to performance than your chunk size.

LVM2 seems to support barriers now, the main issue will be how XFS
behaves under Xen with barriers. I'd also use mostly ext4.

I'm inclined to giving RAID5 a try, i'll dig some more.

Cheers,
Nuno


* Re: RAID newbie, 1 vs 5, chunk sizes
       [not found]     ` <pmrj5x4rq3qbl6unnu7guho9.1402849407134@email.android.com>
@ 2014-06-15 17:40       ` Nuno Magalhães
  0 siblings, 0 replies; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-15 17:40 UTC (permalink / raw)
  To: linux-raid; +Cc: Craig Curtin

On Sun, Jun 15, 2014 at 5:28 PM, Craig Curtin <craigc@prosis.com.au> wrote:
> You do not want to do raid 5 with any sort of VMs - search the archives from
> last week or two for one other persons experience -  start up and Shutdown
> will kill you -  also power loss if no ups  (and you have not mentioned
> having one)

I found [1] pertaining to a RAID6 array with very slow boot times.
However, the VMs in question were file-based VirtualBox VMs, whereas i
want to use LVM-based Xen VMs. I assume Xen will perform better, or
will it perform less badly if i go for RAID5?

I also saw RAID10,f2 mentioned [2] as being a good solution for
desktop systems, which is my case (the first days will be atypical as
i gradually copy stuff onto the array, but then it'll be mostly reads
as far as data goes, with read/writes for VMs). I hadn't looked into
this type before, how does it compare to RAID5 in terms of space? It
seems that, for 3x 1TB disks, both would provide 2TB of space +1TB of
parity (or mirrored redundancy). Is this correct?

Regards,
Nuno

[1] http://marc.info/?l=linux-raid&m=140163840103389&w=2
[2] http://marc.info/?l=linux-raid&m=140153409328161&w=2
http://en.wikipedia.org/wiki/Linux_MD_RAID_10#Linux_MD_RAID_10


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-14 21:01 RAID newbie, 1 vs 5, chunk sizes Nuno Magalhães
  2014-06-14 21:56 ` Stan Hoeppner
@ 2014-06-16  2:23 ` Phil Turmel
  2014-06-16 12:36   ` Nuno Magalhães
  1 sibling, 1 reply; 23+ messages in thread
From: Phil Turmel @ 2014-06-16  2:23 UTC (permalink / raw)
  To: Nuno Magalhães, linux-raid

On 06/14/2014 05:01 PM, Nuno Magalhães wrote:
> Hi,
> 
> Sorry if this is the wrong list. Most of the stuff out there seems
> outdated, and i have some specific doubts.
> 
> I'm trying to decide how to best create a RAID array and what
> configuration to use. This is a desktop system, nothing
> mission-critical, but i'd like it to be reasonably tailored to the
> hardware and intended usage, and have questions about default values.
> 
> Hardware:
>     1x Toshiba DT01ACA100 - 32MB buffer, 931.0 GiB (1 TB; 1000204886016 bytes)
>     2x Seagate ST1000DM003 - 64MB buffer, 931.0 GiB (1 TB; 1000204886016 bytes)
> and
>     1x Maxtor 6G160E0 - 8MB buffer, 149.1 GiB (160 GB; 160041885696 bytes)

Before going any further, check the TLER/ERC support for these drives.
(The Seagate xxxxxDM003 is ringing warning bells for me.)  The output of
"smartctl -l scterc /dev/sdX" for each one will do.  If you get anything
other than "7.0 seconds" or similar, you'll need special boot-time
scripting.  If you get "not supported", I wouldn't even consider putting
the drive into any raid array.

Search this list's archives for details on "timeout mismatch", "scterc",
and the consequences of ignoring this.
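
A quick loop covers all the drives at once (a sketch; substitute your
own device names):

```shell
# Print the ERC read/write timeouts for each candidate drive
# (the device list here is an example; adjust it to your system)
for dev in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo "=== $dev ==="
    smartctl -l scterc "$dev"
done
```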

If you do have good error recovery timeouts, continue with Stan's advice.

Phil

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16  2:23 ` Phil Turmel
@ 2014-06-16 12:36   ` Nuno Magalhães
  2014-06-16 13:19     ` Mark Knecht
                       ` (3 more replies)
  0 siblings, 4 replies; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-16 12:36 UTC (permalink / raw)
  To: linux-raid

Hi,

On Mon, Jun 16, 2014 at 3:23 AM, Phil Turmel <philip@turmel.org> wrote:
> Before going any further, check the TLER/ERC support for these drives.

Both Seagate ST1000DM003 don't have that:
Warning: device does not support SCT Error Recovery Control command

The Toshiba DT01ACA100 seems to have, but disabled:
SCT Error Recovery Control:
           Read: Disabled
          Write: Disabled

I couldn't find any specs with specific details on this feature for
the Toshiba, and i'm not sure if it's safe to issue smartctl -l
scterc,70,70 on a drive that may not support it. If it's disabled as
an incentive to buy more expensive drives, will the drive just ignore
this command or will it decide to roast?

Are there any - recommended - consumer-grade drives out there that do
support TLER? I was going for 1TB@7200, but that can change. I started
disliking Seagate after a ST31000528AS died on me. They bought Maxtor
a long time ago and recently bought Samsung's disk division. I've had
no qualms with Toshiba. I'm not so sure on WD, they used to be less
than good; and haven't tried Hitachi. What say you?

Thanks,
Nuno


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 12:36   ` Nuno Magalhães
@ 2014-06-16 13:19     ` Mark Knecht
  2014-06-16 14:28     ` Phil Turmel
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 23+ messages in thread
From: Mark Knecht @ 2014-06-16 13:19 UTC (permalink / raw)
  To: Nuno Magalhães; +Cc: Linux-RAID

On Mon, Jun 16, 2014 at 5:36 AM, Nuno Magalhães <nunomagalhaes@eu.ipp.pt> wrote:
<SNIP>
>
> Are there any - recommended - consumer-grade drives out there that do
> support TLER? I was going for 1TB@7200, but that can change. I started
> disliking Seagate after a ST31000528AS died on me. They bought Maxtor
> a long time ago and recently bought Samsung's disk division. I've had
> no qualms with Toshiba. I'm not so sure on WD, they used to be less
> than good; and haven't tried Hitachi. What say you?
>


I've been using WD 500GB RAID Edition drives for over 4 years now. No
problems at all.

As for 'consumer grade' I don't know. I just purchased two WD Red 3TB
drives for a RAID1. The drives are in the box but not in use yet. They
do include the error timing feature Phil pointed out. I can say from
experience that some WD Green drives I had quite a while ago did not
have TLER and the RAID had problems with the drives being dropped.
Listen to Phil and get a drive that supports this feature.

c2RAID6 ~ # smartctl -l scterc /dev/sda
smartctl 6.1 2013-03-16 r3800 [x86_64-linux-3.12.22-gentoo] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

c2RAID6 ~ #

BTW: I'm the OP on the RAID6 VM slow boot thread you referenced
earlier. My plan of action since then is:

1) Move system from the 5-drive SATA2 RAID6 to the new 2-drive SATA3 RAID1
2) If performance is still lacking investigate using four of the
existing SATA2 500GB drives in a RAID10
3) If warranted investigate using my 120GB SSD with bcache (or
equivalent) in front of the RAID10.

My current power supply has 8 SATA drive power connectors all in use
this morning. (5 RAID6, 2 RAID1, 1 CDROM -> 2 RAID1, 4 RAID10, 1 SSD,
1 CDROM)

mark@c2RAID6 ~ $ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda       8:0    0   2.7T  0 disk
sdb       8:16   0   2.7T  0 disk
sdc       8:32   0 465.8G  0 disk
|-sdc1    8:33   0  54.9M  0 part
|-sdc2    8:34   0     4G  0 part  [SWAP]
`-sdc3    8:35   0 461.7G  0 part
  `-md3   9:3    0   1.4T  0 raid6 /
sdd       8:48   0 465.8G  0 disk
|-sdd1    8:49   0  54.9M  0 part
|-sdd2    8:50   0     4G  0 part  [SWAP]
`-sdd3    8:51   0 461.7G  0 part
  `-md3   9:3    0   1.4T  0 raid6 /
sde       8:64   0 465.8G  0 disk
|-sde1    8:65   0  54.9M  0 part
|-sde2    8:66   0     4G  0 part  [SWAP]
`-sde3    8:67   0 461.7G  0 part
  `-md3   9:3    0   1.4T  0 raid6 /
sdf       8:80   0 465.8G  0 disk
|-sdf1    8:81   0   4.1G  0 part
`-sdf3    8:83   0 461.7G  0 part
  `-md3   9:3    0   1.4T  0 raid6 /
sdg       8:96   0 465.8G  0 disk
|-sdg1    8:97   0   4.1G  0 part
`-sdg3    8:99   0 461.7G  0 part
  `-md3   9:3    0   1.4T  0 raid6 /
sr0      11:0    1  1024M  0 rom
mark@c2RAID6 ~ $

Cheers,
Mark

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 12:36   ` Nuno Magalhães
  2014-06-16 13:19     ` Mark Knecht
@ 2014-06-16 14:28     ` Phil Turmel
  2014-06-16 16:14       ` Nuno Magalhães
  2014-06-18  7:33     ` Wilson, Jonathan
  2014-06-18 10:19     ` Dag Nygren
  3 siblings, 1 reply; 23+ messages in thread
From: Phil Turmel @ 2014-06-16 14:28 UTC (permalink / raw)
  To: Nuno Magalhães, linux-raid

Hi Nuno,

On 06/16/2014 08:36 AM, Nuno Magalhães wrote:
> Hi,
> 
> On Mon, Jun 16, 2014 at 3:23 AM, Phil Turmel <philip@turmel.org> wrote:
>> Before going any further, check the TLER/ERC support for these drives.
> 
> Both Seagate ST1000DM003 don't have that:
> Warning: device does not support SCT Error Recovery Control command

Hunch confirmed.  Don't use these Seagates in a raid.

> The Toshiba DT01ACA100 seems to have, but disabled:
> SCT Error Recovery Control:
>            Read: Disabled
>           Write: Disabled
> 
> I couldn't find any specs with specific details on this feature for
> the Toshiba, and i'm not sure if it's safe to issue smartctl -l
> scterc,70,70 on a drive that may not support it. If it's disabled as
> an incentive to buy more expensive drives, will the drive just ignore
> this command or will it decide to roast?

No, you can issue the command to these drives, and they'll work
correctly.  They just need to be told this every time they power up.
You'll need to put the command in rc.local or wherever your distro
recommends.  Older consumer-grade drives are often like this.  The
drives have the support, but power up with it disabled.
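
Something like this sketch in rc.local would do (the device list is an
example; adjust it to your drives):

```shell
#!/bin/sh
# /etc/rc.local sketch: the ERC setting doesn't survive a power cycle,
# so re-apply the 7.0 second timeouts on every boot.
# (device names below are examples only)
for dev in /dev/sda /dev/sdb /dev/sdd; do
    smartctl -q errorsonly -l scterc,70,70 "$dev"
done
exit 0
```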

As the industry moved to monetize raid capabilities, the support in
consumer drives was phased out.  Otherwise identical, but higher-priced
drives then appeared on the market to fill the gap.

> Are there any - recommended - consumer-grade drives out there that do
> support TLER? I was going for 1TB@7200, but that can change. I started
> disliking Seagate after a ST31000528AS died on me. They bought Maxtor
> a long time ago and recently bought Samsung's disk division. I've had
> no qualms with Toshiba. I'm not so sure on WD, they used to be less
> than good; and haven't tried Hitachi. What say you?

Modern consumer-grade drives that have it are listed as "NAS" duty
drives.  The WD Red are good examples, and what I've been buying lately.
 With the NAS labeling, they also power up with the support enabled.
Typically 7.0 seconds.

I haven't seen any reports of "green" drives that are suitable for raid
service.

HTH,

Phil

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 14:28     ` Phil Turmel
@ 2014-06-16 16:14       ` Nuno Magalhães
  2014-06-16 20:13         ` Phil Turmel
  2014-06-17  0:41         ` Nuno Magalhães
  0 siblings, 2 replies; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-16 16:14 UTC (permalink / raw)
  To: linux-raid

Thanks for the tips.

On Mon, Jun 16, 2014 at 3:28 PM, Phil Turmel <philip@turmel.org> wrote:
> Modern consumer-grade drives that have it are listed as "NAS" duty
> drives.

Marketing... The one that is taking the lead is WD Red WD10EFRX. Can't
find the Toshiba DT01ACA100 at an affordable price anymore and Seagate
is everywhere.
Back to a previous question (assuming i have the drives sorted out first):

If i create a RAID10 with 3 1TB drives, with --layout=f2, would this
give me 2TB of space +1TB redundancy? Is this a 1E?
Using:
    mdadm --create /dev/md0 --verbose --chunk=128 --level=10 --layout=f2 --raid-devices=3 /dev/sd[abd]1

If so, what would be the difference to a RAID5:
    mdadm --create /dev/md0 --verbose --chunk=512 --level=5 --raid-devices=3 /dev/sd[abd]1

Besides one having redundancy and the other having parity; and that
RAID5 must compute parity (is that really so horrendously slow
nowadays?).

Other than tiobench, any other benchmarking tools you guys recommend?
(I've decided to just go slow and test every layer out before moving
to the next, i.e. 1st RAID, LVM, FSs, Xen, apps.)


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 16:14       ` Nuno Magalhães
@ 2014-06-16 20:13         ` Phil Turmel
  2014-06-16 21:02           ` Mark Knecht
  2014-06-17  0:41         ` Nuno Magalhães
  1 sibling, 1 reply; 23+ messages in thread
From: Phil Turmel @ 2014-06-16 20:13 UTC (permalink / raw)
  To: Nuno Magalhães, linux-raid

On 06/16/2014 12:14 PM, Nuno Magalhães wrote:
> Thanks for the tips.

> Back to a previous question (assuming i have the drives sorted out first):
> 
> If i create a RAID10 with 3 1TB drives, with --layout=f2, would this
> give me 2TB of space +1TB redundancy? Is this a 1E?
> Using: mdadm --create /dev/md0 --verbose --chunk=128 --level=10
> --layout=f2 --raid-devices=3 /dev/sd[abd]1

No, you'll have 1.5T of available space mirrored by halves.
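
The md raid10 arithmetic is simply total raw capacity divided by the
number of copies, regardless of the device count (a sketch using your
drive sizes):

```shell
# md raid10 usable capacity (sketch): raw space divided by copies
ndisks=3; disk_gb=1000; copies=2          # --layout=f2 keeps 2 copies of each chunk
usable_gb=$((ndisks * disk_gb / copies))  # 1500 GB, i.e. 1.5T, not 2T
echo "usable: ${usable_gb} GB"
```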

> If so, what would be the difference to a RAID5:
> mdadm --create /dev/md0 --verbose --chunk=512 --level=5
> --raid-devices=3 /dev/sd[abd]1
> 
> Besides one having redundancy and the other having parity; and that
> RAID5 must compute parity (is that really so horrendously slow
> nowadays?).

You can't fit the redundancy for 2T of data into 1T without some form of
parity raid.

> Other than tiobench, any other benchmarking tools you guys recommend?
> (I've decided to just go slow and test every layer out before moving
> to the next, i.e. 1st RAID, LVM, FSs, Xen, apps.)

I'll leave that one for others. . .  Just don't expect dd to be very useful.

Phil


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 20:13         ` Phil Turmel
@ 2014-06-16 21:02           ` Mark Knecht
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Knecht @ 2014-06-16 21:02 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Nuno Magalhães, Linux-RAID

On Mon, Jun 16, 2014 at 1:13 PM, Phil Turmel <philip@turmel.org> wrote:
> On 06/16/2014 12:14 PM, Nuno Magalhães wrote:
<SNIP>
>
>> Other than tiobench, any other benchmarking tools you guys recommend?
>> (I've decided to just go slow and test every layer out before moving
>> to the next, i.e. 1st RAID, LVM, FSs, Xen, apps.)
>
> I'll leave that one for others. . .  Just don't expect dd to be very useful.

I'd not heard of tiobench before. It seems quite out of date looking at
the website which suggests it hasn't been updated in 12 years.

Some alternatives, without recommendation, that I see others talking
about and that I've tried at least once:
fio
bonnie++
iozone

Again, without recommending anything and specifically noting that not
all of these are Linux=based, there's an interesting list at the
bottom of this link:

http://fsbench.filesystems.org/
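
If you try fio, a starting point might look something like this (an
untested sketch; tune the size, runtime and target directory to your
setup):

```shell
# Mixed 4k random read/write run against a scratch directory (sketch;
# /mnt/test and all the tuning values here are examples)
fio --name=mixed-rw --directory=/mnt/test \
    --rw=randrw --rwmixread=70 \
    --bs=4k --size=1G --ioengine=libaio --direct=1 \
    --iodepth=16 --numjobs=2 \
    --runtime=60 --time_based --group_reporting
```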

HTH,
Mark

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 16:14       ` Nuno Magalhães
  2014-06-16 20:13         ` Phil Turmel
@ 2014-06-17  0:41         ` Nuno Magalhães
  2014-06-17  0:47           ` Brad Campbell
  1 sibling, 1 reply; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-17  0:41 UTC (permalink / raw)
  To: linux-raid

On Mon, Jun 16, 2014 at 5:14 PM, Nuno Magalhães <nunomagalhaes@eu.ipp.pt> wrote:
> The one that is taking the lead is WD Red WD10EFRX. Can't
> find the Toshiba DT01ACA100 at an affordable price anymore and Seagate
> is everywhere.

The WD Red has "intellipower" meaning it can - supposedly - go from
5400 to 7200, depending on various conditions and who you ask. This
means that if i wanted to combine it with the DT01 (traditional 7200),
the overall speed of the array would be dictated by the Red, right?
Doesn't this increase the chance that a power outage would leave the
array with a piece of new data only on the DT01? The point of having
the array in the first place would be for it to recover, but that
means RAID1 in this case. If that new chunk had its parity (not yet
written) on the Red in a RAID5, would it recover? I think i've read
somewhere these drives have some sort of technology to mitigate power
issues which, at the time, i read as "watch battery inside" or
something.

I do have a UPS, a crappy one that'll hold for about 10m. The
problem's that i haven't gotten the serial connection working properly
yet (so no way to tell the system the power is out). Such an outage is
rare here, though.

As for the DT01:
# smartctl -l scterc,70,70 /dev/sda
    SCT Error Recovery Control set to:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)
...which will have to go into an init script.


On Mon, Jun 16, 2014 at 10:02 PM, Mark Knecht <markknecht@gmail.com> wrote:
> I'd not heard of tiobench before. It seems quite out of date looking at
> the website which suggests it hasn't been updated in 12 years.

Didn't know that, thanks for the pointers. :)

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-17  0:41         ` Nuno Magalhães
@ 2014-06-17  0:47           ` Brad Campbell
  2014-06-17  9:09             ` Nuno Magalhães
  0 siblings, 1 reply; 23+ messages in thread
From: Brad Campbell @ 2014-06-17  0:47 UTC (permalink / raw)
  To: Nuno Magalhães, linux-raid

On 17/06/14 08:41, Nuno Magalhães wrote:
> The WD Red has "intellipower" meaning it can - supposedly - go from
> 5400 to 7200, depending on various conditions and who you ask. This
> means that if i wanted to combine it with the DT01 (traditional 7200),
> the overall speed of the array would be dictated by the Red, right?

I don't know who keeps perpetuating this myth, but it is just that. 
"intellipower" is just another name for slow. It just means slow spindle 
speeds and clever use of caching algorithms to make it feel faster than 
it really is. It's a marketing name only. They do not, and have never 
had the ability to change speeds.



* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-17  0:47           ` Brad Campbell
@ 2014-06-17  9:09             ` Nuno Magalhães
  2014-06-17 19:50               ` Stan Hoeppner
  2014-06-18  2:55               ` Brad Campbell
  0 siblings, 2 replies; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-17  9:09 UTC (permalink / raw)
  To: linux-raid

On Tue, Jun 17, 2014 at 1:47 AM, Brad Campbell
<lists2009@fnarfbargle.com> wrote:
> They do not, and have never had the ability to change speeds.

I didn't mean changing speeds during I/O, that would be quite a feat;
what seems to be perceived is slowing speed in idle states. Most stuff
i found pointed to 5400 though. This thread [1] points to
drive-specific fixed speeds between 5400 and 5900.

The question remains: what are the implications of having 7200 drives
and 5400 drives in a RAID1, 10 or 5? It'll be as slow as the slowest
drive, correct? What about any syncing issues during a power outage?

Any takes on the Hitachis?

[1] http://marc.info/?l=linux-raid&m=136039742726982&w=2


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-17  9:09             ` Nuno Magalhães
@ 2014-06-17 19:50               ` Stan Hoeppner
  2014-06-18  2:55               ` Brad Campbell
  1 sibling, 0 replies; 23+ messages in thread
From: Stan Hoeppner @ 2014-06-17 19:50 UTC (permalink / raw)
  To: Nuno Magalhães, linux-raid

On 6/17/2014 4:09 AM, Nuno Magalhães wrote:
...
> The question remains: what are the implications of having 7200 drives
> and 5400 drives in a RAID1, 10 or 5? It'll be as slow as the slowest
> drive, correct? What about any syncing issues during a power outage?

You already know these answers.  At this point in the thread you're
seeking hand holding, not knowledge acquisition, or you simply like to
chat.  This list is not intended to be that type of resource.  That's
what web forums are for.

> Any takes on the Hitachis?

There are hundreds of drive models suitable for RAID use.  Each
manufacturer clearly identifies in their documentation which of their
drives fit the bill.  There really is no reason you should need to ask
on this list if drives from a particular manufacturer are suitable.  If
you're seeking consumer drives with ERC/TLER due to cost reasons, that's
not a technical issue, but a dollars issue, and not on topic here.

We understand this is your first RAID system, but at some point you have
to remove the training wheels from your bike.

Cheers,

Stan

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-17  9:09             ` Nuno Magalhães
  2014-06-17 19:50               ` Stan Hoeppner
@ 2014-06-18  2:55               ` Brad Campbell
  1 sibling, 0 replies; 23+ messages in thread
From: Brad Campbell @ 2014-06-18  2:55 UTC (permalink / raw)
  To: Nuno Magalhães, linux-raid

On 17/06/14 17:09, Nuno Magalhães wrote:
> On Tue, Jun 17, 2014 at 1:47 AM, Brad Campbell
> <lists2009@fnarfbargle.com> wrote:
>> They do not, and have never had the ability to change speeds.
>
> I didn't mean changing speeds during I/O, that would be quite a feat;
> what seems to be perceived is slowing speed in idle states.

Perceived by whom? The drive has 2 steady-state speeds: spun up and 
stopped. Nothing in between.

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 12:36   ` Nuno Magalhães
  2014-06-16 13:19     ` Mark Knecht
  2014-06-16 14:28     ` Phil Turmel
@ 2014-06-18  7:33     ` Wilson, Jonathan
  2014-06-18 10:19     ` Dag Nygren
  3 siblings, 0 replies; 23+ messages in thread
From: Wilson, Jonathan @ 2014-06-18  7:33 UTC (permalink / raw)
  To: Nuno Magalhães; +Cc: linux-raid

On Mon, 2014-06-16 at 13:36 +0100, Nuno Magalhães wrote:

> 
> Are there any - recommended - consumer-grade drives out there that do
> support TLER? I was going for 1TB@7200, but that can change. I started
> disliking Seagate after a ST31000528AS died on me. They bought Maxtor
> a long time ago and recently bought Samsung's disk division. I've had
> no qualms with Toshiba. I'm not so sure on WD, they used to be less
> than good; and haven't tried Hitachi. What say you?

i would say the closest thing to a "consumer" drive with TLER is the WD
reds. Looking at a UK end user supplier site the 3tb Green is currently
89.26 (was 98.50) and the Red is 90.98 (was 107.99) so even without
discount the price difference is negligible.

The one proviso is that supposedly the Reds shouldn't have more than 5
in a NAS, although I think this is more a marketing factor to sell for
use in "small NAS boxes" such as home and small businesses.

The RE's are 170.00 (was 214.27) but then again they are a whole
different ball game.


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-16 12:36   ` Nuno Magalhães
                       ` (2 preceding siblings ...)
  2014-06-18  7:33     ` Wilson, Jonathan
@ 2014-06-18 10:19     ` Dag Nygren
  2014-06-18 11:58       ` Mathias Burén
  3 siblings, 1 reply; 23+ messages in thread
From: Dag Nygren @ 2014-06-18 10:19 UTC (permalink / raw)
  To: Nuno Magalhães; +Cc: linux-raid

On Monday 16 June 2014 13:36:44 Nuno Magalhães wrote:
> Hi,
> 
> On Mon, Jun 16, 2014 at 3:23 AM, Phil Turmel <philip@turmel.org> wrote:
> > Before going any further, check the TLER/ERC support for these drives.
> 
> Both Seagate ST1000DM0003 don't have that:
> Warning: device does not support SCT Error Recovery Control command
> 
> The Toshiba DT01ACA100 seems to have, but disabled:
> SCT Error Recovery Control:
>            Read: Disabled
>           Write: Disabled
> 
> I couldn't find any specs with specific details on this feature for
> the Toshiba, and i'm not sure if it's safe to issue smartctl -l
> scterc,70,70 on a drive thay may not support it. If it's disabled as
> an incentive to buy more expensive drives, will the drive just ignore
> this command or will it decide to roast?
> 
> Are there any - recommended - consumer-grade drives out there that do
> support TLER? I was going for 1TB@7200, but that can change. I started
> disliking Seagate after a ST31000528AS died on me. They bought Maxtor
> a long time ago and recently bought Samsung's disk division. I've had
> no qualms with Toshiba. I'm not so sure on WD, they used to be less
> that good; and haven't tried Hitachi. What say you?

Just love my HGST 4TB drives. They're the first ones that haven't
developed any pending/remapped sectors during the 9 months they have
been in 24/7 use. I previously tried various Seagates, with constant
problems (and yes, I had SCT ERC turned on). I also tested a WD Red,
but saw similar symptoms developing, so I demoted it to a storage
disk for my boy...
The HGST was some €20 more expensive and hard to get here, but
definitely worth it.

YMMV

Dag


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-18 10:19     ` Dag Nygren
@ 2014-06-18 11:58       ` Mathias Burén
  2014-06-18 12:16         ` Brad Campbell
  2014-06-18 17:34         ` Roman Mamedov
  0 siblings, 2 replies; 23+ messages in thread
From: Mathias Burén @ 2014-06-18 11:58 UTC (permalink / raw)
  To: dag; +Cc: Nuno Magalhães, Linux-RAID

On 18 June 2014 11:19, Dag Nygren <dag@newtech.fi> wrote:
> On Monday 16 June 2014 13:36:44 Nuno Magalhães wrote:
>> Hi,
>>
>> On Mon, Jun 16, 2014 at 3:23 AM, Phil Turmel <philip@turmel.org> wrote:
>> > Before going any further, check the TLER/ERC support for these drives.
>>
>> Both Seagate ST1000DM0003 don't have that:
>> Warning: device does not support SCT Error Recovery Control command
>>
>> The Toshiba DT01ACA100 seems to have, but disabled:
>> SCT Error Recovery Control:
>>            Read: Disabled
>>           Write: Disabled
>>
>> I couldn't find any specs with specific details on this feature for
>> the Toshiba, and i'm not sure if it's safe to issue smartctl -l
>> scterc,70,70 on a drive thay may not support it. If it's disabled as
>> an incentive to buy more expensive drives, will the drive just ignore
>> this command or will it decide to roast?
>>
>> Are there any - recommended - consumer-grade drives out there that do
>> support TLER? I was going for 1TB@7200, but that can change. I started
>> disliking Seagate after a ST31000528AS died on me. They bought Maxtor
>> a long time ago and recently bought Samsung's disk division. I've had
>> no qualms with Toshiba. I'm not so sure on WD, they used to be less
>> that good; and haven't tried Hitachi. What say you?
>
> Just love my HGST 4 TB drives. The first ones that haven't
> developed any pending/remapped sectors during the 9 months
> they have been in 24/7 use. Previously tried different Seagates, with
> constant problems (And yes - Had the SCT turned on). Also tested a WD red,
> but saw similar symptoms developing and so degraded that to a storage
> disk for my boy...
> HGST was some 20€ more expensive and hard to get here but definitely
> worth it.
>
> YMMV
>
> Dag
>


If we're comparing drives, my 8x WD20EARS / EARX have been spinning
for years. Two drives just failed after ~3.4 years of uptime (24/7) and
over 400,000 head parkings. They had weekly SMART self-tests and
monthly RAID6 scrubs, plus a few unexpected power losses. Good value,
but YMMV.

Mathias

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-18 11:58       ` Mathias Burén
@ 2014-06-18 12:16         ` Brad Campbell
  2014-06-18 17:34         ` Roman Mamedov
  1 sibling, 0 replies; 23+ messages in thread
From: Brad Campbell @ 2014-06-18 12:16 UTC (permalink / raw)
  To: Mathias Burén, dag; +Cc: Nuno Magalhães, Linux-RAID

On 18/06/14 19:58, Mathias Burén wrote:
> On 18 June 2014 11:19, Dag Nygren <dag@newtech.fi> wrote:

>> Just love my HGST 4 TB drives. The first ones that haven't
>> developed any pending/remapped sectors during the 9 months
>> they have been in 24/7 use. Previously tried different Seagates, with
>> constant problems (And yes - Had the SCT turned on). Also tested a WD red,
>> but saw similar symptoms developing and so degraded that to a storage
>> disk for my boy...
>> HGST was some 20€ more expensive and hard to get here but definitely
>> worth it.
>>
>
>
> If we're comparing drives, my 8x WD20EARS / EARX have been spinning
> for years. 2 drives just failed after ~3.4 years uptime (24/7) and
> over 400,000 head parkings. They had weekly SMART self-test and
> monthly RAID6 scrubs, and a few unexpected power losses. Good value.
> But YMMV.

My experience has been that almost any drive, no matter how naff, can
be made to last well simply by keeping it spinning at a roughly
constant temperature. I generally replace my bulk storage drives at
about 4 years. My first lot were Maxtor 250GB cheapies, and all 12 of
them went for over 4.5 years. My current batch are WD Green 2TB and
they all have ~26,000 hours on them. One of my SAS drives has over
49,000 hours on it. Even a set of cheap Maxtor DiamondMax 22 1TB
drives lasted near enough to 25,000 hours before I relegated them to
an off-line backup machine.

The constant has been keeping them running 24/7 and doing a pretty
thorough burn-in to weed out the early-life failures.

For example, the random WD Green 2TB I picked has 26,917 hours and 60
power cycles. Most of those cycles happened over a 12-week period when
I had power issues with the UPS, and none would have been off long
enough to noticeably let the drive cool down.

Keep 'em spinning and warm.
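[A burn-in of the sort described above might be sketched as follows. This
is only an illustrative sketch, not Brad's actual procedure; /dev/sdX is a
placeholder, and the badblocks write test destroys all data on the drive.]

```shell
# Full destructive write+verify pass over the whole surface (wipes all
# data). Four patterns are written and read back, so this takes hours.
badblocks -wsv -b 4096 /dev/sdX

# Queue a long SMART self-test, then inspect the result and the
# reallocated/pending sector counts once it completes.
smartctl -t long /dev/sdX
smartctl -a /dev/sdX
```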

* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-18 11:58       ` Mathias Burén
  2014-06-18 12:16         ` Brad Campbell
@ 2014-06-18 17:34         ` Roman Mamedov
  2014-06-19  6:45           ` Wilson, Jonathan
  1 sibling, 1 reply; 23+ messages in thread
From: Roman Mamedov @ 2014-06-18 17:34 UTC (permalink / raw)
  To: Mathias Burén; +Cc: dag, Nuno Magalhães, Linux-RAID

On Wed, 18 Jun 2014 12:58:59 +0100
Mathias Burén <mathias.buren@gmail.com> wrote:

> If we're comparing drives, my 8x WD20EARS / EARX have been spinning
> for years. 2 drives just failed after ~3.4 years uptime (24/7) and
> over 400,000 head parkings. They had weekly SMART self-test and
> monthly RAID6 scrubs, and a few unexpected power losses. Good value.

Why do you hate your drives so much? Or haven't you heard of 'wdidle3'?
Or maybe I misunderstand, and this whole endeavor was actually an
experiment in how soon they would fail from that ridiculous number of
head parkings.

-- 
With respect,
Roman


* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-18 17:34         ` Roman Mamedov
@ 2014-06-19  6:45           ` Wilson, Jonathan
  2014-06-29 13:22             ` Nuno Magalhães
  0 siblings, 1 reply; 23+ messages in thread
From: Wilson, Jonathan @ 2014-06-19  6:45 UTC (permalink / raw)
  To: Roman Mamedov; +Cc: Mathias Burén, dag, Nuno Magalhães, Linux-RAID

On Wed, 2014-06-18 at 23:34 +0600, Roman Mamedov wrote:
> On Wed, 18 Jun 2014 12:58:59 +0100
> Mathias Burén <mathias.buren@gmail.com> wrote:
> 
> > If we're comparing drives, my 8x WD20EARS / EARX have been spinning
> > for years. 2 drives just failed after ~3.4 years uptime (24/7) and
> > over 400,000 head parkings. They had weekly SMART self-test and
> > monthly RAID6 scrubs, and a few unexpected power losses. Good value.
> 
> Why do you hate your drives so much, or haven't you heard of 'wdidle3'?
> Or maybe I misunderstand and this whole endeavor was actually an experiment on
> how soon will they fail from all that ridiculous amount of head parkings.

I was going to say the same, but you beat me to it. The "huge" head
parking figure is down to a firmware bug. I would also note that some
WD Reds have the same problem (it seems limited to certain batches; in
my personal experience 2 out of 6 were affected, and idle3ctl for
Linux fixes the problem).
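
[For reference, diagnosing and fixing the parking timer from Linux looks
roughly like this. The device name is a placeholder, and an idle3 change
only takes effect after the drive is power-cycled, not merely rebooted.]

```shell
# Show how often the heads have parked: SMART attribute 193.
smartctl -A /dev/sdX | grep -i load_cycle

# Read the current idle3 (head-parking) timer on a WD drive.
idle3ctl -g /dev/sdX

# Disable the timer entirely (or set a saner value with -s).
idle3ctl -d /dev/sdX

# While at it, check and enable SCT ERC (TLER) with a 7-second limit,
# which is what md RAID setups generally want.
smartctl -l scterc /dev/sdX
smartctl -l scterc,70,70 /dev/sdX
```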




* Re: RAID newbie, 1 vs 5, chunk sizes
  2014-06-19  6:45           ` Wilson, Jonathan
@ 2014-06-29 13:22             ` Nuno Magalhães
  0 siblings, 0 replies; 23+ messages in thread
From: Nuno Magalhães @ 2014-06-29 13:22 UTC (permalink / raw)
  To: Linux-RAID

Thank you all for the input.

As I now have 2x WD Red (5400rpm, depending on whom you ask) and 2x
Toshiba DT01ACA100 (7200rpm), a RAID5 array isn't really feasible
(otherwise it would have been neat to benchmark it on Xen and compare
it with Mark Knecht's results).

So I'm opting for two RAID1 arrays: one for data with the WDs, another
for VMs and whatnot. I'll let the Debian installer decide on chunk
size and throw LVM on top of them.
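
[Outside the installer, that layout would look roughly like the following.
This is a sketch only: device names, volume group names, and sizes are
placeholders. Note that RAID1 has no striping, so chunk size is a
non-issue for these arrays.]

```shell
# Two RAID1 pairs: WD Reds for data, Toshibas for VMs.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# LVM on top of each array.
pvcreate /dev/md0 /dev/md1
vgcreate vg_data /dev/md0
vgcreate vg_vms  /dev/md1
lvcreate -L 200G -n data vg_data

# Record the arrays so they assemble at boot (Debian layout).
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```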

I've also been reading about ZFS, but I dislike the apparent memory
requirements for dom0, and its support on Linux isn't native.

Cheers,
Nuno


end of thread, other threads:[~2014-06-29 13:22 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-06-14 21:01 RAID newbie, 1 vs 5, chunk sizes Nuno Magalhães
2014-06-14 21:56 ` Stan Hoeppner
2014-06-15 11:50   ` Nuno Magalhães
     [not found]     ` <pmrj5x4rq3qbl6unnu7guho9.1402849407134@email.android.com>
2014-06-15 17:40       ` Nuno Magalhães
2014-06-16  2:23 ` Phil Turmel
2014-06-16 12:36   ` Nuno Magalhães
2014-06-16 13:19     ` Mark Knecht
2014-06-16 14:28     ` Phil Turmel
2014-06-16 16:14       ` Nuno Magalhães
2014-06-16 20:13         ` Phil Turmel
2014-06-16 21:02           ` Mark Knecht
2014-06-17  0:41         ` Nuno Magalhães
2014-06-17  0:47           ` Brad Campbell
2014-06-17  9:09             ` Nuno Magalhães
2014-06-17 19:50               ` Stan Hoeppner
2014-06-18  2:55               ` Brad Campbell
2014-06-18  7:33     ` Wilson, Jonathan
2014-06-18 10:19     ` Dag Nygren
2014-06-18 11:58       ` Mathias Burén
2014-06-18 12:16         ` Brad Campbell
2014-06-18 17:34         ` Roman Mamedov
2014-06-19  6:45           ` Wilson, Jonathan
2014-06-29 13:22             ` Nuno Magalhães
