* RAID10, 3 copies, 3 disks
@ 2020-01-11  7:06 Gandalf Corvotempesta
From: Gandalf Corvotempesta @ 2020-01-11  7:06 UTC (permalink / raw)
  To: Linux RAID Mailing List

Hi to all,
I've read that with md it's possible to create non-standard RAID layouts,
like RAID10 but without being forced to use an even number of disks to
create the mirrors (as a standard RAID10 requires).

So, would it be possible to create a 4-disk RAID10 with the number of
copies set to 3? (I need to survive ANY 2 disk failures, like with
RAID6.)

I think it would be something like the following:

A1 A1 A1 A2
A2 A2 A3 A3
A3 A4 A4 A4
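The layout sketched above is what mdadm calls a "near" layout with 3
copies. A minimal creation sketch, assuming four spare disks (all device
names here are placeholders):

```shell
# RAID10 with 3 "near" copies of every chunk, spread across 4 disks.
# Any 2 disks can fail and each chunk still has at least one live copy.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      --layout=n3 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

Usable capacity is disks/copies, i.e. about 1.33 times one disk here.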

Currently, when I need something similar, I use a 3-way RAID1 with LVM
on top of it to aggregate multiple mirrors into one bigger volume, but
this requires 3 drives for each mirror.
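That setup might look roughly like this sketch (device and volume names
are hypothetical):

```shell
# Two independent 3-way mirrors...
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg

# ...aggregated into one bigger volume with LVM.
pvcreate /dev/md1 /dev/md2
vgcreate vg_mirrors /dev/md1 /dev/md2
lvcreate -l 100%FREE -n data vg_mirrors
```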

Other solutions? I don't want to use any parity RAID this time.
(Wasting space with a 3-way mirror and LVM on top would be OK, if
nothing better is available.)

The RAID *must* be scalable; I need to grow it on-the-fly by adding
one or more disks when needed.
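With md RAID10, that growth step could look like the sketch below
(assuming an existing array such as /dev/md0; RAID10 reshape needs a
reasonably recent kernel and mdadm):

```shell
# Add a new disk as a spare, then reshape the array to use it.
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5
```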


* Re: RAID10, 3 copies, 3 disks
From: Gandalf Corvotempesta @ 2020-01-11 20:55 UTC (permalink / raw)
  To: Wol, Linux RAID Mailing List

On Sat, 11 Jan 2020 at 20:11, Wol
<antlists@youngman.org.uk> wrote:
> The "standard" as you call it is actually RAID1+0. This is *not* "linux
> raid10", which is as you describe it - the number of disks can be any
> number greater than the number of mirrors.

Actually, what I need is simple: a scalable array with at least
3-way mirrors.

I've thought of using multiple 3-way mirrors (RAID1) merged together
with LVM, or just a single RAID10 (with 3-disk mirrors) and LVM on top
of it as the volume manager.

I don't know which one is better; the result is similar.


* Re: RAID10, 3 copies, 3 disks
From: Wols Lists @ 2020-01-11 21:25 UTC (permalink / raw)
  To: Gandalf Corvotempesta, Linux RAID Mailing List

On 11/01/20 20:55, Gandalf Corvotempesta wrote:
> On Sat, 11 Jan 2020 at 20:11, Wol
> <antlists@youngman.org.uk> wrote:
>> The "standard" as you call it is actually RAID1+0. This is *not* "linux
>> raid10", which is as you describe it - the number of disks can be any
>> number greater than the number of mirrors.
> 
> Actually, what I need is simple: a scalable array with at least
> 3-way mirrors.
> 
> I've thought of using multiple 3-way mirrors (RAID1) merged together
> with LVM, or just a single RAID10 (with 3-disk mirrors) and LVM on top
> of it as the volume manager.
> 
> I don't know which one is better; the result is similar.
> 
Multiple 3-way mirrors (1+0) require disks in multiples of 3. Raid10
simply requires "4 or more" disks. If you expect/want to expand your
storage in small increments, then 10 is clearly better. BUT.

Depending on your filesystem - for example XFS - changing the disk
layout underneath it can severely impact performance: when the
filesystem is created it queries the layout and optimises for it. When I
discussed this with one of the XFS developers, he said "use 1+0 and add
a fresh *set* of disks (or completely recreate the filesystem), because
XFS optimises its layout based on what disks it thinks it's got."
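For reference, that XFS geometry can also be pinned by hand at mkfs time
rather than probed; a sketch with assumed values (512 KiB md chunk, 2
data-bearing disks, placeholder device):

```shell
# su = stripe unit (the md chunk size), sw = number of data-bearing disks.
mkfs.xfs -d su=512k,sw=2 /dev/md0
```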

Cheers,
Wol


* Re: RAID10, 3 copies, 3 disks
From: Gandalf Corvotempesta @ 2020-01-11 21:36 UTC (permalink / raw)
  To: Wols Lists; +Cc: Linux RAID Mailing List

On Sat, 11 Jan 2020 at 22:25, Wols Lists
<antlists@youngman.org.uk> wrote:
> Multiple 3-way mirrors (1+0) require disks in multiples of 3. Raid10
> simply requires "4 or more" disks. If you expect/want to expand your
> storage in small increments, then 10 is clearly better. BUT.

I'll start with 8TB usable (more than enough for me atm), which will be
OK for at least a year, so saving space is not a problem. Next year, if
needed, I'll add 3 more disks (or I'll grow the existing ones).

> Depending on your filesystem - for example XFS - changing the disk
> layout underneath it can severely impact performance: when the
> filesystem is created it queries the layout and optimises for it. When I
> discussed this with one of the XFS developers, he said "use 1+0 and add
> a fresh *set* of disks (or completely recreate the filesystem), because
> XFS optimises its layout based on what disks it thinks it's got."

No XFS; I'll use ext4.
I had *TONS* of issues with XFS.


* Re: RAID10, 3 copies, 3 disks
From: Nix @ 2020-01-14 12:28 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: Wols Lists, Linux RAID Mailing List

On 11 Jan 2020, Gandalf Corvotempesta said:

> On Sat, 11 Jan 2020 at 22:25, Wols Lists
> <antlists@youngman.org.uk> wrote:
>> Multiple 3-way mirrors (1+0) require disks in multiples of 3. Raid10
>> simply requires "4 or more" disks. If you expect/want to expand your
>> storage in small increments, then 10 is clearly better. BUT.
>
> I'll start with 8TB usable (more than enough for me atm), which will be
> OK for at least a year, so saving space is not a problem. Next year, if
> needed, I'll add 3 more disks (or I'll grow the existing ones).
>
>> Depending on your filesystem - for example XFS - changing the disk
>> layout underneath it can severely impact performance: when the
>> filesystem is created it queries the layout and optimises for it. When I
>> discussed this with one of the XFS developers, he said "use 1+0 and add
>> a fresh *set* of disks (or completely recreate the filesystem), because
>> XFS optimises its layout based on what disks it thinks it's got."
>
> No XFS; I'll use ext4.

ext4 does the same thing. In both cases you can specify the layout by
hand, and sometimes you have to, because not all block-device layers pass
the layout up: e.g. my layering of md->lvm->bcache->cryptsetup->fs loses
the layout at (at least) the bcache level.

What I did when I knew I had a reshape coming up (because I was buying
another disk a few months after buying a machine, and reshaping onto it)
was to create the original array with the filesystem told about the
*intended final* shape, and to verify after reshaping that everything was
fine (as with alignment, you can check this with blktrace's btrace tool:
do some I/O and see whether most changes come a whole stripe at a time,
or whether they are all misaligned and cross stripes). That way the fs
starts off less than optimal and improves after the reshape -- if you got
everything right, which sometimes feels like tightrope-walking.
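A rough sketch of that check (device and mount point are placeholders;
btrace ships with the blktrace package and needs root):

```shell
# Trace I/O on the array while generating some direct writes, then
# inspect the captured offsets/sizes for whole-stripe alignment.
btrace /dev/md0 > /tmp/md0.trace &
dd if=/dev/zero of=/mnt/array/testfile bs=1M count=64 oflag=direct
sync
kill %1
less /tmp/md0.trace
```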

For ext4, the mkfs options to look for are -E stride= and
-E stripe-width=, usually used together. For XFS, the options to look
for are sunit and swidth (and often agcount is useful too).
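The usual arithmetic: stride is the md chunk size divided by the
filesystem block size, and stripe-width is stride times the number of
data-bearing disks (for RAID10, roughly disks divided by copies). A
worked sketch with assumed values:

```shell
# Assumed geometry: 512 KiB chunk, 4 KiB fs block, 6 disks, 3 copies.
chunk_kib=512
block_kib=4
disks=6
copies=3

stride=$((chunk_kib / block_kib))      # fs blocks per chunk   -> 128
data_disks=$((disks / copies))         # data-bearing disks    -> 2
stripe_width=$((stride * data_disks))  # fs blocks per stripe  -> 256

echo "mkfs.ext4 -E stride=$stride,stripe-width=$stripe_width /dev/md0"
```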

-- 
NULL && (void)

