* RAID types & chunks sizes for new NAS drives
@ 2020-06-21 16:23 Ian Pilcher
  2020-06-23  1:45 ` John Stoffel
  0 siblings, 1 reply; 16+ messages in thread
From: Ian Pilcher @ 2020-06-21 16:23 UTC (permalink / raw)
  To: linux-raid

I'm replacing the drives in my 5-bay NAS, and planning how I'm going to
divide them up.  My general plan is to create a matching set of
partitions on the drives, and then create RAID devices across the sets
of partitions, for example:

   md1:  /dev/sdb1  /dev/sdc1  /dev/sdd1  /dev/sde1  /dev/sdf1
   md2:  /dev/sdb2  /dev/sdc2  /dev/sdd2  /dev/sde2  /dev/sdf2
    ⋮         ⋮          ⋮          ⋮          ⋮          ⋮
   md16: /dev/sdb16 /dev/sdc16 /dev/sdd16 /dev/sde16 /dev/sdf16

This will give me the flexibility to create RAID devices of different
types, as well as maybe(?) reducing the "blast radius" if a particular
portion of a disk goes bad.
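
In mdadm terms, a rough sketch of what I mean - the levels and chunk
sizes here are just placeholders, which is exactly what I'm asking
about:

```shell
# Hypothetical per-slice arrays: one md device per partition number,
# striped across all five drives.  Levels/chunks illustrative only.
mdadm --create /dev/md1 --level=10 --raid-devices=5 --chunk=512 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm --create /dev/md2 --level=6 --raid-devices=5 --chunk=512 \
    /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
```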

I believe that it makes sense to use at least 2 different RAID levels -
RAID-10 for "general" use and RAID-6 for media content.  Does this make
sense?

If so, does anyone have any thoughts or pointers on the chunk size,
particularly for RAID-10?  (I assume that RAID-6 will have similar
considerations to RAID-5, and so a large chunk size would make sense,
particularly for large media files.)

Any other thoughts?

Thanks!

-- 
========================================================================
                  In Soviet Russia, Google searches you!
========================================================================

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-21 16:23 RAID types & chunks sizes for new NAS drives Ian Pilcher
@ 2020-06-23  1:45 ` John Stoffel
  2020-06-23  2:31   ` o1bigtenor
                     ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: John Stoffel @ 2020-06-23  1:45 UTC (permalink / raw)
  To: Ian Pilcher; +Cc: linux-raid

>>>>> "Ian" == Ian Pilcher <arequipeno@gmail.com> writes:

Ian> I'm replacing the drives in my 5-bay NAS, and planning how I'm
Ian> going to divide them up.  My general plan is to create a matching
Ian> set of partitions on the drives, and then create RAID devices
Ian> across the sets of partitions, for example:

Ian>    md1:  /dev/sdb1  /dev/sdc1  /dev/sdd1  /dev/sde1  /dev/sdf1
Ian>    md2:  /dev/sdb2  /dev/sdc2  /dev/sdd2  /dev/sde2  /dev/sdf2
Ian>     ⋮         ⋮          ⋮          ⋮          ⋮          ⋮
Ian>    md16: /dev/sdb16 /dev/sdc16 /dev/sdd16 /dev/sde16 /dev/sdf16

Ian> This will give me the flexibility to create RAID devices of different
Ian> types, as well as maybe(?) reducing the "blast radius" if a particular
Ian> portion of a disk goes bad.

This is a terrible idea.  Just think about how there is just one head
per disk, and it takes a significant amount of time to seek from track
to track, and then add in rotational latency.  This all adds up.

So now create multiple separate RAIDs across all these disks, with
competing seek patterns, and you're just going to thrash your disks.

If you really have two types of data, I'd only set up two partitions
at most: one for your RAID10 (with one hot spare partition), and then
RAID5 or even RAID6 (three data, two parity) on the other five
drives for your bulk data that doesn't change much.  Say photos,
movies, CDs you've ripped, etc.
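
Concretely, something along these lines (device names illustrative,
not a tested recipe):

```shell
# Two partitions per drive.  Partition 1: RAID10 over four drives with
# the fifth as a hot spare.  Partition 2: RAID6 across all five.
mdadm --create /dev/md10 --level=10 --raid-devices=4 --spare-devices=1 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm --create /dev/md6 --level=6 --raid-devices=5 \
    /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
```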

Ian> I believe that it makes sense to use at least 2 different RAID
Ian> levels - RAID-10 for "general" use and RAID-6 for media content.
Ian> Does this make sense?

Sorta kinda maybe... In either case, you only get 1 drive more space
with RAID 6 vs RAID10.  RAID 6 can suffer any two-disk failure, while
RAID10 is limited to losing one half of each pair.  It's a tradeoff.
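
To put rough numbers on it - assuming 8Tb drives purely for the
arithmetic, since you haven't said how big yours are:

```shell
# Usable space: RAID6 over five drives vs. 4-drive RAID10 + hot spare.
drive_tb=8
raid6_tb=$(( (5 - 2) * drive_tb ))   # n minus 2 parity drives
raid10_tb=$(( (4 / 2) * drive_tb ))  # half of the mirrored set
diff_tb=$(( raid6_tb - raid10_tb ))
echo "RAID6 ${raid6_tb}Tb vs RAID10 ${raid10_tb}Tb: ${diff_tb}Tb more"
# prints "RAID6 24Tb vs RAID10 16Tb: 8Tb more" - i.e. one drive
```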

Look at the recent Arstechnica article on RAID levels and
performance.  It's an eye opener.

Ian> If so, does anyone have any thoughts or pointers on the chunk
Ian> size, particularly for RAID-10?  (I assume that RAID-6 will have
Ian> similar considerations to RAID-5, and so a large chunk size would
Ian> make sense, particularly for large media files.)

I don't think larger chunk sizes really make all that much difference,
especially with your plan to use multiple partitions.

You also don't say how *big* your disks will be, and if your 5 bay NAS
box can even split like that, and if it has the CPU to handle that.
Is it an NFS connection to the rest of your systems?

Honestly, I'd just set up two RAID1 mirrors with a single hot spare,
then use LVM on top to build the volumes you need.  With 8tb disks,
this only gives you 16Tb of space, but you get performance, quicker
rebuild speed if there's a problem with a disk, and simpler
management.
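
Something like this, roughly (names and sizes made up):

```shell
# Two RAID1 pairs plus a shared hot spare, pooled with LVM.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm /dev/md0 --add /dev/sdf1   # becomes a spare; a spare-group in
                                 # mdadm.conf lets md1 borrow it too
pvcreate /dev/md0 /dev/md1
vgcreate nas /dev/md0 /dev/md1
lvcreate -L 4T -n media nas
```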

With only five drives, you are limited in what you can do.  Now if you
could add a pair of mirror SSDs for caching, then I'd be more into
building a single large RAID6 backing device for media content, then
use the mirrored SSDs as a cache for a smaller block of day to day
storage.

But it all depends on what you're going to do.

In any case, make sure you get NAS rated disks, either the newest WD
RED+ (or is it Blue?).  And make sure to NOT get the SMR
(Shingled Magnetic Recording) format drives.  See previous threads in
this group, as well as the arstechnica.com discussion about it all
that they did last month.  Very informative.

Personally, with regular hard disks, I still kinda think 4gb is the
sweet spot, since you can just mirror pairs of the disks and then
stripe across on top as needed.  I like my storage simple, because
when (not if!) it all hits the fan, simple is easier to recover from.

John

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23  1:45 ` John Stoffel
@ 2020-06-23  2:31   ` o1bigtenor
  2020-06-23 17:01     ` John Stoffel
  2020-06-23 12:26   ` Nix
                     ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: o1bigtenor @ 2020-06-23  2:31 UTC (permalink / raw)
  To: John Stoffel; +Cc: Ian Pilcher, Linux-RAID

On Mon, Jun 22, 2020 at 9:06 PM John Stoffel <john@stoffel.org> wrote:
>
snip

> In any case, make sure you get NAS rated disks, either the newest WD
> RED+ (or is it Blue?)  In any case, make sure to NOT get the SMR
> (Shingled Magnetic Recording) format drives.  See previous threads in
> this group, as well as the arstechnica.com discussion about it all
> that they did last month.  Very informative.
>
> Personally, with regular hard disks, I still kinda think 4gb is the
> sweet spot, since you can just mirror pairs of the disks and then
> stripe across on top as needed.  I like my storage simple, because
> when (not if!) it all hits the fan, simple is easier to recover from.
>
Did you mean 4 TB or 4 GB as you wrote?
(Somewhat of a difference I do believe.)

Regards

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23  1:45 ` John Stoffel
  2020-06-23  2:31   ` o1bigtenor
@ 2020-06-23 12:26   ` Nix
  2020-06-23 18:50     ` John Stoffel
  2020-06-23 15:36   ` antlists
  2020-06-23 20:27   ` Ian Pilcher
  3 siblings, 1 reply; 16+ messages in thread
From: Nix @ 2020-06-23 12:26 UTC (permalink / raw)
  To: John Stoffel; +Cc: Ian Pilcher, linux-raid

On 23 Jun 2020, John Stoffel told this:

> You also don't say how *big* your disks will be, and if your 5 bay NAS
> box can even split like that, and if it has the CPU to handle that.
> Is it an NFS connection to the rest of your systems?

Side note: NFSv4 really is much much better at this stuff than v3 ever
was. With a fast enough network connection, I find NFSv4 as fast for
more or less all workloads as NFSv3 was, mostly because of the lease
support in v4 allowing client-side caching of the vast majority of files
and directories that are either not written to or only written to by one
client in a given short time window. (Obviously it also helps if your
network is fast enough: 1GbE is going to be saturated many times over by
a RAID array of any but the slowest modern HDDs. 10GbE and small
10GbE-capable switches are not very costly these days and are definitely
worth investing in on the NFS server and any clients you care about.)
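
(For anyone still exporting v3 by default: forcing v4 is just a mount
option.  Paths and addresses here are examples only.)

```shell
# Server side, /etc/exports:
#   /srv/export  192.168.1.0/24(rw,sync,no_subtree_check)
# Client side - request NFSv4.2 explicitly rather than falling back:
mount -t nfs4 -o vers=4.2 server:/srv/export /mnt/export
```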

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23  1:45 ` John Stoffel
  2020-06-23  2:31   ` o1bigtenor
  2020-06-23 12:26   ` Nix
@ 2020-06-23 15:36   ` antlists
  2020-06-23 18:55     ` John Stoffel
  2020-06-24 12:32     ` Phil Turmel
  2020-06-23 20:27   ` Ian Pilcher
  3 siblings, 2 replies; 16+ messages in thread
From: antlists @ 2020-06-23 15:36 UTC (permalink / raw)
  To: John Stoffel, Ian Pilcher; +Cc: linux-raid

On 23/06/2020 02:45, John Stoffel wrote:
> In any case, make sure you get NAS rated disks, either the newest WD
> RED+ (or is it Blue?)  In any case, make sure to NOT get the SMR
> (Shingled Magnetic Recording) format drives.  See previous threads in
> this group, as well as the arstechnica.com discussion about it all
> that they disk last month.  Very informative.

I'd just avoid WD completely. They advertise REDs as raid-capable, and 
they are (mostly) SMR and unfit for purpose. BLUEs are supposedly the 
"desktop performance" drives, so sticking them in a raid is not advised 
anyway. But you've only got to start hammering a BLUE, and performance 
will be abysmal. If you are going for WD, then RED PRO is what you need.

At least Seagate don't advertise unsuitable drives - DON'T touch 
Barracudas! They say Ironwolf or Ironwolf Pro for raid, both are fine to 
the best of my knowledge.

And nobody seems to buy Toshiba, but they do a raid range as well - the 
N300, I think.

Don't buy REDs or Barracudas or other known-problematic drives. They 
will appear to work fine until there's a problem, at which point they 
will bork the raid and lose your data. A good raid drive will tell the 
raid it's failed and let the raid recover. These problem drives DON'T 
tell the raid what's going on, and by the time the raid finds out it's 
too late.

Cheers,
Wol

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23  2:31   ` o1bigtenor
@ 2020-06-23 17:01     ` John Stoffel
  2020-06-24 22:13       ` o1bigtenor
  0 siblings, 1 reply; 16+ messages in thread
From: John Stoffel @ 2020-06-23 17:01 UTC (permalink / raw)
  To: o1bigtenor; +Cc: John Stoffel, Ian Pilcher, Linux-RAID

>>>>> "o1bigtenor" == o1bigtenor  <o1bigtenor@gmail.com> writes:

o1bigtenor> On Mon, Jun 22, 2020 at 9:06 PM John Stoffel <john@stoffel.org> wrote:
>> 
o1bigtenor> snip

>> In any case, make sure you get NAS rated disks, either the newest WD
>> RED+ (or is it Blue?)  In any case, make sure to NOT get the SMR
>> (Shingled Magnetic Recording) format drives.  See previous threads in
>> this group, as well as the arstechnica.com discussion about it all
>> that they did last month.  Very informative.
>> 
>> Personally, with regular hard disks, I still kinda think 4gb is the
>> sweet spot, since you can just mirror pairs of the disks and then
>> stripe across on top as needed.  I like my storage simple, because
>> when (not if!) it all hits the fan, simple is easier to recover from.
>> 
o1bigtenor> Did you mean 4 TB or 4 GB as you wrote?
o1bigtenor> (Somewhat of a difference I do believe.)

LOL!  I meant 4Tb of course... but I do remember when 10mb HDs were
amazing... :-)

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23 12:26   ` Nix
@ 2020-06-23 18:50     ` John Stoffel
  0 siblings, 0 replies; 16+ messages in thread
From: John Stoffel @ 2020-06-23 18:50 UTC (permalink / raw)
  To: Nix; +Cc: John Stoffel, Ian Pilcher, linux-raid

>>>>> "Nix" == Nix  <nix@esperi.org.uk> writes:

Nix> On 23 Jun 2020, John Stoffel told this:
>> You also don't say how *big* your disks will be, and if your 5 bay NAS
>> box can even split like that, and if it has the CPU to handle that.
>> Is it an NFS connection to the rest of your systems?

Nix> Side note: NFSv4 really is much much better at this stuff than v3
Nix> ever was. With a fast enough network connection, I find NFSv4 as
Nix> fast for more or less all workloads as NFSv3 was, mostly because
Nix> of the lease support in v4 allowing client-side caching of the
Nix> vast majority of files and directories that are either not
Nix> written to or only written to by one client in a given short time
Nix> window. (Obviously it also helps if your network is fast enough:
Nix> 1GbE is going to be saturated many times over by a RAID array of
Nix> any but the slowest modern HDDs. 10GbE and small 10GbE-capable
Nix> switches are not very costly these days and are definitely worth
Nix> investing in on the NFS server and any clients you care about.)

I've been thinking about moving to NFSv4 at home, since my main file
server has my main desktop as an NFS and LDAP client for
authentication.  Works quite well, and I don't care if my desktop
reboots on me, since the homedir is elsewhere.

John

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23 15:36   ` antlists
@ 2020-06-23 18:55     ` John Stoffel
  2020-06-24 12:32     ` Phil Turmel
  1 sibling, 0 replies; 16+ messages in thread
From: John Stoffel @ 2020-06-23 18:55 UTC (permalink / raw)
  To: antlists; +Cc: John Stoffel, Ian Pilcher, linux-raid

>>>>> "antlists" == antlists  <antlists@youngman.org.uk> writes:

antlists> On 23/06/2020 02:45, John Stoffel wrote:
>> In any case, make sure you get NAS rated disks, either the newest WD
>> RED+ (or is it Blue?)  In any case, make sure to NOT get the SMR
>> (Shingled Magnetic Recording) format drives.  See previous threads in
>> this group, as well as the arstechnica.com discussion about it all
>> that they did last month.  Very informative.

antlists> I'd just avoid WD completely. They advertise REDs as
antlists> raid-capable, and they are (mostly) SMR and unfit for
antlists> purpose. BLUEs are supposedly the "desktop performance"
antlists> drives, so sticking them in a raid is not advised
antlists> anyway. But you've only got to start hammering your BLUE
antlists> performance drive, and performance would be abysmal. If you
antlists> are going for WD then RED PRO is what you need.

Don't throw out all WD drives, just because they screwed up on their
low end NAS drives.  And I do recommend that people buy drives with
the longest warranty possible, so you get a drive that the
manufacturer expects to support for quite a while.  

antlists> At least Seagate don't advertise unsuitable drives - DON'T
antlists> touch Barracudas! They say Ironwolf or Ironwolf Pro for
antlists> raid, both are fine to the best of my knowledge.

Agree.  I remember when Barracudas were the best Seagate drives... no
stiction there.

antlists> And nobody seems to buy Toshiba but - I think it's the N300
antlists> - they do a raid range as well.

I have Toshiba drives.  Another good thing to do is to buy competing
vendor drives and pair them, because you're less likely to get hit by
a bad batch of drives.  

antlists> Don't buy REDs or Barracudas or other known-problematic
antlists> drives. They will appear to work fine until there's a
antlists> problem, at which point they will bork the raid and lose
antlists> your data. A good raid drive will tell the raid it's failed
antlists> and let the raid recover. These problem drives DON'T tell
antlists> the raid what's going on, and by the time the raid finds out
antlists> it's too late.

The RED+ I think are all supposed to be CMR drives no matter what.

As Wols has said in the past, getting drives with SCTERC support is
key.  They're going to be more expensive, but how much is your data
worth?

I think Wols and I are in agreement overall, just differing in
details.  He's done a great job with the RAID wiki and helping people
when they get into trouble.  

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23  1:45 ` John Stoffel
                     ` (2 preceding siblings ...)
  2020-06-23 15:36   ` antlists
@ 2020-06-23 20:27   ` Ian Pilcher
  2020-06-23 21:30     ` John Stoffel
  3 siblings, 1 reply; 16+ messages in thread
From: Ian Pilcher @ 2020-06-23 20:27 UTC (permalink / raw)
  To: John Stoffel; +Cc: linux-raid

On 6/22/20 8:45 PM, John Stoffel wrote:
> This is a terrible idea.  Just think about how there is just one head
> per disk, and it takes a significant amount of time to seek from track
> to track, and then add in rotational latency.  This all adds up.
> 
> So now create multiple separate RAIDs across all these disks, with
> competing seek patterns, and you're just going to thrash your disks.

Hmm.  Does that answer change if those partition-based RAID devices
(of the same RAID level/settings) are combined into LVM volume groups?

I think it does, as the physical layout of the data on the disks will
end up pretty much identical, so the drive heads won't go unnecessarily
skittering between partitions.

> Sorta kinda maybe... In either case, you only get 1 drive more space
> with RAID 6 vs RAID10.  You can suffer any two disk failure, while
> RAID10 is limited to one half of each pair.  It's a tradeoff.

Yeah.  For some reason I had it in my head that RAID 10 could survive a
double failure.  Not sure how I got that idea.  As you mention, the only
way to get close to that would be to do a 4-drive/partition RAID 10 with
a hot-spare.  Which would actually give me a reason for the partitioned
setup, as I would want to try to avoid a 4TB or 8TB rebuild.  (My new
drives are 8TB Seagate Ironwolfs.)

> Look at the recent Arstechnica article on RAID levels and
> performance.  It's an eye opener.

I assume that you're referring to this?

 
https://arstechnica.com/information-technology/2020/04/understanding-raid-how-performance-scales-from-one-disk-to-eight/

There's nothing really new in there.  Parity RAID sucks.  If you can't
afford 3-legged mirrors, just go home, etc., etc.

> I don't think larger chunk sizes really make all that much difference,
> especially with your plan to use multiple partitions.

From what I understand about "parity RAID" (RAID-5, RAID-6, and exotic
variants thereof), one wants a smaller stripe size if one is doing
smaller writes (to minimize RMW cycles), but larger chunks increase the
speed of multiple concurrent sequential readers.
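
Just to make the arithmetic concrete (numbers are mine, not mdadm
defaults):

```shell
# A full stripe on parity RAID is chunk size times the data disks;
# writes smaller than this force a read-modify-write cycle.
chunk_kib=512     # hypothetical chunk size
total_disks=5
parity_disks=2    # RAID-6
data_disks=$(( total_disks - parity_disks ))
stripe_kib=$(( chunk_kib * data_disks ))
echo "full stripe = ${stripe_kib} KiB"
# prints "full stripe = 1536 KiB"
```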

> You also don't say how *big* your disks will be, and if your 5 bay NAS
> box can even split like that, and if it has the CPU to handle that.
> Is it an NFS connection to the rest of your systems?

The disks are 8TB Seagate Ironwolf drives.  This is my home NAS, so it
needs to handle all sorts of different workloads - everything from media
serving to acting as an iSCSI target for test VMs.

It runs NFS, Samba, iSCSI, various media servers, Apache, etc.  The
good news is that there isn't really any performance requirement (other
than my own level of patience).  I basically just want to avoid
handicapping the performance of the NAS with a pathological setting
(such as putting VM root disks on a RAID-6 device with a large chunk
size perhaps?).

> Honestly, I'd just setup two RAID1 mirrors with a single hot spare,
> then use LVM on top to build the volumes you need.  With 8tb disks,
> this only gives you 16Tb of space, but you get performance, quicker
> rebuild speed if there's a problem with a disk, and simpler
> management.

I'm not willing to give up that much space *and* give up tolerance
against double-failures.  Having come to my senses on what RAID-10
can and can't do, I'll probably be doing RAID-6 everywhere, possibly
with a couple of different chunk sizes.

> With only five drives, you are limited in what you can do.  Now if you
> could add a pair of mirror SSDs for caching, then I'd be more into
> building a single large RAID6 backing device for media content, then
> use the mirrored SSDs as a cache for a smaller block of day to day
> storage.

No space for any SSDs unfortunately.

Thanks for the feedback!

-- 
========================================================================
                  In Soviet Russia, Google searches you!
========================================================================

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23 20:27   ` Ian Pilcher
@ 2020-06-23 21:30     ` John Stoffel
  2020-06-23 23:16       ` Ian Pilcher
  0 siblings, 1 reply; 16+ messages in thread
From: John Stoffel @ 2020-06-23 21:30 UTC (permalink / raw)
  To: Ian Pilcher; +Cc: John Stoffel, linux-raid

>>>>> "Ian" == Ian Pilcher <arequipeno@gmail.com> writes:

Ian> On 6/22/20 8:45 PM, John Stoffel wrote:
>> This is a terrible idea.  Just think about how there is just one head
>> per disk, and it takes a significant amount of time to seek from track
>> to track, and then add in rotational latency.  This all adds up.
>> 
>> So now create multiple separate RAIDs across all these disks, with
>> competing seek patterns, and you're just going to thrash your disks.

Ian> Hmm.  Does that answer change if those partition-based RAID
Ian> devices (of the same RAID level/settings) are combined into LVM
Ian> volume groups?

Yeah, it does change; as you add LVM groups, you can still end up
thrashing the heads.  As you add layers, it's harder and harder for
different filesystems to coordinate disk access.

But to me another big reason is KISS, Keep It Simple Stupid, so that
when things go wrong, it's not as hard to fix.  

Ian> I think it does, as the physical layout of the data on the disks
Ian> will end up pretty much identical, so the drive heads won't go
Ian> unnecessarily skittering between partitions.

Well, as you add LVM volumes to a VG, I don't honestly know offhand
whether the areas are pre-allocated or not - I think they are - but if
you add/remove/resize LVs, you can start to get fragmentation, which
will hurt performance.

And note, I'm talking about harddisks here, with one read/write head.
SSDs are a different beast.  

>> Sorta kinda maybe... In either case, you only get 1 drive more space
>> with RAID 6 vs RAID10.  You can suffer any two disk failure, while
>> RAID10 is limited to one half of each pair.  It's a tradeoff.

Ian> Yeah.  For some reason I had it in my head that RAID 10 could
Ian> survive a double failure.  Not sure how I got that idea.  As you
Ian> mention, the only way to get close to that would be to do a
Ian> 4-drive/partition RAID 10 with a hot-spare.  Which would actually
Ian> give me a reason for the partitioned setup, as I would want to
Ian> try to avoid a 4TB or 8TB rebuild.  (My new drives are 8TB
Ian> Seagate Ironwolfs.)

No, you still do not want the partitioned setup, because if you lose a
disk, you want to rebuild it entirely, all at once.  Personally, 5 x
8Tb disks set up in RAID10 with a hot spare sounds just fine to me.
You can survive a two-disk failure if it doesn't hit both halves of
a mirror.  But the hot spare should help protect you.

One thing I really like to do is mix vendors in my array, just so I
don't get caught by a bad batch.  And the RAID10 performance advantage
over RAID6 is big.  You'd only get 8Tb (only! :-) more space, but much
worse interactive response.  

>> Look at the recent Arstechnica article on RAID levels and
>> performance.  It's an eye opener.

Ian> I assume that you're referring to this?

 
Ian> https://arstechnica.com/information-technology/2020/04/understanding-raid-how-performance-scales-from-one-disk-to-eight/

Yup.  

Ian> There's nothing really new in there.  Parity RAID sucks.  If you can't
Ian> afford 3-legged mirrors, just go home, etc., etc.

Physics sucks, don't it?  :-)

>> I don't think larger chunk sizes really make all that much difference,
>> especially with your plan to use multiple partitions.

Ian> From what I understand about "parity RAID" (RAID-5, RAID-6, and exotic
Ian> variants thereof), one wants a smaller stripe size if one is doing
Ian> smaller writes (to minimize RMW cycles), but larger chunks increase the
Ian> speed of multiple concurrent sequential readers.

It's all about tradeoffs.  

>> You also don't say how *big* your disks will be, and if your 5 bay NAS
>> box can even split like that, and if it has the CPU to handle that.
>> Is it an NFS connection to the rest of your systems?

Ian> The disks are 8TB Seagate Ironwolf drives.  This is my home NAS, so it
Ian> need to handle all sorts of different workloads - everything from media
Ian> serving acting as an iSCSI target for test VMs.

So that to me would tend to make me want to go with RAID10 to get the
best performance.  You could even go three disk RAID5 and a single
RAID10 to mix up the workloads, but then you lose the hot spare.  

Ian> It runs NFS, Samba, iSCSI, various media servers, Apache, etc.  The
Ian> good news is that there isn't really any performance requirement (other
Ian> than my own level of patience).  I basically just want to avoid
Ian> handicapping the performance of the NAS with a pathological setting
Ian> (such as putting VM root disks on a RAID-6 device with a large chunk
Ian> size perhaps?).

What I do is have a pair of mirrored SSDs set up to cache my RAID1
arrays, to give me more performance.  Not really sure if it's helping
or hurting, really.  dm-cache isn't really great at reporting stats,
and I never bothered to test it hard.
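
FWIW the setup is roughly this, via lvmcache (names hypothetical, and
--cachevol needs a reasonably recent LVM):

```shell
# Mirror the two SSDs, add them to the VG, and attach the resulting
# LV as a cache volume in front of the HDD-backed LV.
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
vgextend vg0 /dev/md9
lvcreate -L 200G -n fast vg0 /dev/md9
lvconvert --type cache --cachevol vg0/fast vg0/data
```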

My main box is an old AMD Phenom(tm) II X4 945 Processor, which is now
something like 10 years old.  It's fast enough for what I do.  I'm
more concerned with data loss than I am performance.  

>> Honestly, I'd just setup two RAID1 mirrors with a single hot spare,
>> then use LVM on top to build the volumes you need.  With 8tb disks,
>> this only gives you 16Tb of space, but you get performance, quicker
>> rebuild speed if there's a problem with a disk, and simpler
>> management.

Ian> I'm not willing to give up that much space *and* give up tolerance
Ian> against double-failures.  Having come to my senses on what RAID-10
Ian> can and can't do, I'll probably be doing RAID-6 everywhere, possibly
Ian> with a couple of different chunk sizes.

Sure, go for it.

>> With only five drives, you are limited in what you can do.  Now if you
>> could add a pair of mirror SSDs for caching, then I'd be more into
>> building a single large RAID6 backing device for media content, then
>> use the mirrored SSDs as a cache for a smaller block of day to day
>> storage.

Ian> No space for any SSDs unfortunately.

Get a bigger case then.  :-)  

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23 21:30     ` John Stoffel
@ 2020-06-23 23:16       ` Ian Pilcher
  2020-06-24  0:34         ` John Stoffel
  0 siblings, 1 reply; 16+ messages in thread
From: Ian Pilcher @ 2020-06-23 23:16 UTC (permalink / raw)
  To: John Stoffel; +Cc: linux-raid

On 6/23/20 4:30 PM, John Stoffel wrote:
> Well, as you add LVM volumes to a VG, I don't honestly know offhand if
> the areas are pre-allocated, or not, I think they are pre-allocated,
> but if you add/remove/resize LVs, you can start to get fragmentation,
> which will hurt performance.

LVs are pre-allocated, and they definitely can become fragmented.
That's orthogonal to whether the VG is on a single RAID device or a
set of smaller adjacent RAID devices.
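
(Easy enough to check, too - the VG name is just an example:)

```shell
# seg_count > 1 for an LV means its extents are no longer contiguous.
lvs -o lv_name,seg_count vg0
```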

>> No, you still do not want the partitioned setup, because if you lose a
> disk, you want to rebuild it entirely, all at once.  Personally, 5 x
> 8Tb disks setup in RAID10 with a hot spare sounds just fine to me.
> You can survive a two disk failure if it doesn't hit both halves of
> the mirror.  But the hot spare should help protect you.

It depends on what sort of failure you're trying to protect against.  If
you lose the entire disk (because of an electronic/mechanical failure,
for example) you're doing either an 8TB rebuild/resync or (for example)
16x 512GB rebuild/resyncs, which is effectively the same thing.

OTOH, if you have a patch of sectors go bad in the partitioned case,
the RAID layer is only going to automatically rebuild/resync one of the
partition-based RAID devices.  To my thinking, this will reduce the
chance of a double-failure.

I think it's important to state that this NAS is pretty actively
monitored/managed.  So if such a failure were to occur, I would
absolutely be taking steps to retire the drive with the failed sectors.
But that's something that I'd rather do manually, rather than kicking
off (for example) an 8TB rebuild with a hot-spare.

> One thing I really like to do is mix vendors in my array, just so I
>> don't get caught by a bad batch.  And the RAID10 performance advantage
> over RAID6 is big.  You'd only get 8Tb (only! :-) more space, but much
> worse interactive response.

Mixing vendors (or at least channels) is one of those things that I
know that I should do, but I always get impatient.

But do I need the better performance?  Choices, choices ...  :-)

> Physics sucks, don't it?  :-)

LOL!  Indeed it does!
> 
> What I do is have a pair of mirrored SSDs setup to cache my RAID1
> arrays, to give me more performance.  Not really sure if it's helping
> or hurting really.  dm-cache isn't really great at reporting stats,
> and I never bothered to test it hard.

I've played with both bcache and dm-cache, although it's been a few
years.  Neither one really did much for me, but that's probably because
I was using write-through caching, as I didn't trust "newfangled" SSDs
at the time.

> My main box is an old AMD Phenom(tm) II X4 945 Processor, which is now
> something like 10 years old.  It's fast enough for what I do.  I'm
> more concerned with data loss than I am performance.

Same here.  I mainly want to feel comfortable that I haven't crippled my
performance by doing something stupid, but as long as the NAS can stream
a movie to the media room it's good enough.

My NAS has an Atom D2550, so it's almost certainly slower than your
Phenom.

> Get a bigger case then.  :-)

-- 
========================================================================
                  In Soviet Russia, Google searches you!
========================================================================

* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23 23:16       ` Ian Pilcher
@ 2020-06-24  0:34         ` John Stoffel
  0 siblings, 0 replies; 16+ messages in thread
From: John Stoffel @ 2020-06-24  0:34 UTC (permalink / raw)
  To: Ian Pilcher; +Cc: John Stoffel, linux-raid

>>>>> "Ian" == Ian Pilcher <arequipeno@gmail.com> writes:

Ian> On 6/23/20 4:30 PM, John Stoffel wrote:
>> Well, as you add LVM volumes to a VG, I don't honestly know offhand if
>> the areas are pre-allocated, or not, I think they are pre-allocated,
>> but if you add/remove/resize LVs, you can start to get fragmentation,
>> which will hurt performance.

Ian> LVs are pre-allocated, and they definitely can become fragmented.
Ian> That's orthogonal to whether the VG is on a single RAID device or a
Ian> set of smaller adjacent RAID devices.

>> No, you still do not want the partitioned setup, because if you lose a
>> disk, you want to rebuild it entirely, all at once.  Personally, 5 x
>> 8Tb disks setup in RAID10 with a hot spare sounds just fine to me.
>> You can survive a two disk failure if it doesn't hit both halves of
>> the mirror.  But the hot spare should help protect you.

Ian> It depends on what sort of failure you're trying to protect against.  If
Ian> you lose the entire disk (because of an electronic/mechanical failure,
Ian> for example) you're doing either an 8TB rebuild/resync or (for example)
Ian> 16x 512GB rebuild/resyncs, which is effectively the same thing.

Ian> OTOH, if you have a patch of sectors go bad in the partitioned case,
Ian> the RAID layer is only going to automatically rebuild/resync one of the
Ian> partition-based RAID devices.  To my thinking, this will reduce the
Ian> chance of a double-failure.
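(As a concrete sketch of the per-partition approach, using the device names from the example layout at the start of the thread — md's RAID-10 near layout handles an odd number of members:)

```shell
# One array per matching partition set
mdadm --create /dev/md1 --level=10 --raid-devices=5 /dev/sd[bcdef]1

# A bad patch on sdd degrades only the array whose partition it hits;
# after repair, re-adding resyncs just that slice -- and with a
# write-intent bitmap (the default on large arrays) only dirty
# regions are copied
mdadm /dev/md1 --re-add /dev/sdd1
cat /proc/mdstat
```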

Once a disk starts throwing errors like this, it's toast.  Get rid of
it now.  

Ian> I think it's important to state that this NAS is pretty actively
Ian> monitored/managed.  So if such a failure were to occur, I would
Ian> absolutely be taking steps to retire the drive with the failed sectors.
Ian> But that's something that I'd rather do manually, rather than kicking
Ian> off (for example) an 8TB rebuild with a hot-spare.
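(For the record, the manual retirement Ian prefers can be done without ever dropping redundancy, via mdadm's replace operation; the replacement device name here is hypothetical:)

```shell
# Add the new disk's partition as a spare, then copy onto it while the
# flaky member stays active; the old member is failed out only after
# the copy completes, so the array never runs degraded
mdadm /dev/md1 --add /dev/sdg1
mdadm /dev/md1 --replace /dev/sdd1 --with /dev/sdg1
```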

Sure, if you think that's going to happen when you're on vacation and
out of town and the disk starts flaking out... :-)

>> One thing I really like to do is mix vendors in my array, just so I
>> don't get caught by a bad batch.  And the RAID10 performance advantage
>> over RAID6 is big.  You'd only get 8Tb (only! :-) more space, but much
>> worse interactive response.

Ian> Mixing vendors (or at least channels) is one of those things that I
Ian> know that I should do, but I always get impatient.

Ian> But do I need the better performance?  Choices, choices ...  :-)

>> Physics sucks, don't it?  :-)

Ian> LOL!  Indeed it does!

>> What I do is have a pair of mirrored SSDs setup to cache my RAID1
>> arrays, to give me more performance.  Not really sure if it's helping
>> or hurting really.  dm-cache isn't really great at reporting stats,
>> and I never bothered to test it hard.

Ian> I've played with both bcache and dm-cache, although it's been a few
Ian> years.  Neither one really did much for me, but that's probably because
Ian> I was using write-through caching, as I didn't trust "newfangled" SSDs
Ian> at the time.
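(For anyone trying the same thing today, a minimal lvmcache/dm-cache sketch in writethrough mode might look like this — the VG, LV, and device names are assumptions:)

```shell
# Carve a cache pool out of the SSD-backed PV and attach it to a slow LV
lvcreate --type cache-pool -L 100G -n cpool vg0 /dev/md_ssd
lvconvert --type cache --cachemode writethrough \
          --cachepool vg0/cpool vg0/slow_lv

# dm-cache reporting is thin, but hit/miss counters are exposed here
lvs -o +cache_read_hits,cache_read_misses vg0/slow_lv
```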

Sure, I understand that.  It makes a difference for me when doing
kernel builds... not that I regularly upgrade.  

>> My main box is an old AMD Phenom(tm) II X4 945 Processor, which is now
>> something like 10 years old.  It's fast enough for what I do.  I'm
>> more concerned with data loss than I am performance.

Ian> Same here.  I mainly want to feel comfortable that I haven't crippled my
Ian> performance by doing something stupid, but as long as the NAS can stream
Ian> a movie to the media room it's good enough.

Ian> My NAS has an Atom D2550, so it's almost certainly slower than your
Ian> Phenom.

Yeah, so that's another strike (possibly) against RAID6, since it will
be more CPU overhead, esp if you're running VMs at the same time on
there.
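(The kernel benchmarks its RAID-6 syndrome algorithms at boot, so the CPU cost John mentions is easy to eyeball on the Atom; exact output varies by kernel and CPU:)

```shell
# Pick out the boot-time RAID-6 benchmark results from the kernel log
dmesg | grep -i 'raid6'
# Typical lines look like "raid6: sse2x2 gen() 5123 MB/s" followed by
# "raid6: using algorithm ..."; RAID-10 avoids this parity math entirely
```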


* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23 15:36   ` antlists
  2020-06-23 18:55     ` John Stoffel
@ 2020-06-24 12:32     ` Phil Turmel
  2020-06-24 14:49       ` John Stoffel
  1 sibling, 1 reply; 16+ messages in thread
From: Phil Turmel @ 2020-06-24 12:32 UTC (permalink / raw)
  To: antlists, John Stoffel, Ian Pilcher; +Cc: linux-raid

On 6/23/20 11:36 AM, antlists wrote:

> And nobody seems to buy Toshiba but - I think it's the N300 - they do a 
> raid range as well.

I do.  I have a couple N300s and they are holding up well.  One has 22k 
hours.

Phil


* Re: RAID types & chunks sizes for new NAS drives
  2020-06-24 12:32     ` Phil Turmel
@ 2020-06-24 14:49       ` John Stoffel
  2020-06-24 18:41         ` Wols Lists
  0 siblings, 1 reply; 16+ messages in thread
From: John Stoffel @ 2020-06-24 14:49 UTC (permalink / raw)
  To: Phil Turmel; +Cc: antlists, John Stoffel, Ian Pilcher, linux-raid

>>>>> "Phil" == Phil Turmel <philip@turmel.org> writes:

Phil> On 6/23/20 11:36 AM, antlists wrote:
>> And nobody seems to buy Toshiba but - I think it's the N300 - they do a 
>> raid range as well.

Phil> I do.  I have a couple N300s and they are holding up well.  One has 22k 
Phil> hours.

Same here... they make good drives.


* Re: RAID types & chunks sizes for new NAS drives
  2020-06-24 14:49       ` John Stoffel
@ 2020-06-24 18:41         ` Wols Lists
  0 siblings, 0 replies; 16+ messages in thread
From: Wols Lists @ 2020-06-24 18:41 UTC (permalink / raw)
  To: John Stoffel, Phil Turmel; +Cc: Ian Pilcher, linux-raid

On 24/06/20 15:49, John Stoffel wrote:
>>>>>> "Phil" == Phil Turmel <philip@turmel.org> writes:
> 
> Phil> On 6/23/20 11:36 AM, antlists wrote:
>>> And nobody seems to buy Toshiba but - I think it's the N300 - they do a 
>>> raid range as well.
> 
> Phil> I do.  I have a couple N300s and they are holding up well.  One has 22k 
> Phil> hours.
> 
> Same here... they make good drives.
> 
Good to know. I just never see them come up - it's always been WD Reds,
and now it's me talking about Ironwolves.

Cheers,
Wol


* Re: RAID types & chunks sizes for new NAS drives
  2020-06-23 17:01     ` John Stoffel
@ 2020-06-24 22:13       ` o1bigtenor
  0 siblings, 0 replies; 16+ messages in thread
From: o1bigtenor @ 2020-06-24 22:13 UTC (permalink / raw)
  To: John Stoffel; +Cc: Ian Pilcher, Linux-RAID

On Tue, Jun 23, 2020 at 12:01 PM John Stoffel <john@stoffel.org> wrote:


>
> >>>>> "o1bigtenor" == o1bigtenor  <o1bigtenor@gmail.com> writes:
>
> o1bigtenor> On Mon, Jun 22, 2020 at 9:06 PM John Stoffel <john@stoffel.org> wrote:
> >>
> o1bigtenor> snip
>
> >> In any case, make sure you get NAS rated disks, either the newest WD
> >> RED+ (or is it Blue?)  In any case, make sure to NOT get the SMR
> >> (Shingled Magnetic Recording) format drives.  See previous threads in
> >> this group, as well as the arstechnica.com discussion about it all
> >> that they did last month.  Very informative.
> >>
> >> Personally, with regular hard disks, I still kinda think 4gb is the
> >> sweet spot, since you can just mirror pairs of the disks and then
> >> stripe across on top as needed.  I like my storage simple, because
> >> when (not if!) it all hits the fan, simple is easier to recover from.
> >>
> o1bigtenor> Did you mean 4 TB or 4 GB as you wrote?
> o1bigtenor> (Somewhat of a difference I do believe.)
>
> LOL!  I meant 4Tb of course... but I do remember when 10mb HDs were
> amazing... :-)

I can remember buying a 40 MB drive where the serial number and drive
information was hand-lettered - - - - that was when 5 MB drives were
sorta common and 10 MB was thought to be BIG. Oh well - - - I thought
it was just a typo but prefer to make sure of the details!!!

Thanks


end of thread, other threads:[~2020-06-24 22:13 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-06-21 16:23 RAID types & chunks sizes for new NAS drives Ian Pilcher
2020-06-23  1:45 ` John Stoffel
2020-06-23  2:31   ` o1bigtenor
2020-06-23 17:01     ` John Stoffel
2020-06-24 22:13       ` o1bigtenor
2020-06-23 12:26   ` Nix
2020-06-23 18:50     ` John Stoffel
2020-06-23 15:36   ` antlists
2020-06-23 18:55     ` John Stoffel
2020-06-24 12:32     ` Phil Turmel
2020-06-24 14:49       ` John Stoffel
2020-06-24 18:41         ` Wols Lists
2020-06-23 20:27   ` Ian Pilcher
2020-06-23 21:30     ` John Stoffel
2020-06-23 23:16       ` Ian Pilcher
2020-06-24  0:34         ` John Stoffel
