* New setup: partitions or raw devices
@ 2017-11-29 16:22 Gandalf Corvotempesta
  2017-11-29 16:44 ` Reindl Harald
  0 siblings, 1 reply; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 16:22 UTC (permalink / raw)
  To: Linux RAID Mailing List

Hi to all,
I have to set up some new servers; what is the current best practice?
Should I use raw devices or partitions? (one huge partition for mdadm,
and then LVM on top of it)

Any advantages/drawbacks of either configuration?


* Re: New setup: partitions or raw devices
  2017-11-29 16:22 New setup: partitions or raw devices Gandalf Corvotempesta
@ 2017-11-29 16:44 ` Reindl Harald
  2017-11-29 16:52   ` Phil Turmel
  2017-11-29 17:38   ` Gandalf Corvotempesta
  0 siblings, 2 replies; 35+ messages in thread
From: Reindl Harald @ 2017-11-29 16:44 UTC (permalink / raw)
  To: Gandalf Corvotempesta, Linux RAID Mailing List



On 29.11.2017 at 17:22, Gandalf Corvotempesta wrote:
> Hi to all,
> I have to set up some new servers; what is the current best practice?
> Should I use raw devices or partitions? (one huge partition for mdadm,
> and then LVM on top of it)
> 
> Any advantages/drawbacks of either configuration?

I would make partitions, for two reasons:

* leave some space free in case a replacement disk has
   a slightly different usable size - SSDs in particular
   vary here because of overprovisioning for wear leveling

* whenever you decide to replace the disks with SSDs,
   the free space will enhance their overall lifetime
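
For example, with sgdisk (a sketch - the device name and the amount of
headroom are illustrative; pick your own margin):

  # GPT with one Linux-RAID partition, leaving ~100 GiB unpartitioned
  sgdisk --zap-all /dev/sdX
  sgdisk -n 1:0:-100G -t 1:FD00 -c 1:"md raid member" /dev/sdX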


* Re: New setup: partitions or raw devices
  2017-11-29 16:44 ` Reindl Harald
@ 2017-11-29 16:52   ` Phil Turmel
  2017-11-29 17:42     ` Gandalf Corvotempesta
  2017-11-29 22:10     ` Chris Murphy
  2017-11-29 17:38   ` Gandalf Corvotempesta
  1 sibling, 2 replies; 35+ messages in thread
From: Phil Turmel @ 2017-11-29 16:52 UTC (permalink / raw)
  To: Reindl Harald, Gandalf Corvotempesta, Linux RAID Mailing List

Hi Gandalf,

On 11/29/2017 11:44 AM, Reindl Harald wrote:

> On 29.11.2017 at 17:22, Gandalf Corvotempesta wrote:
>> Hi to all,
>> I have to set up some new servers; what is the current best practice?
>> Should I use raw devices or partitions? (one huge partition for mdadm,
>> and then LVM on top of it)
>>
>> Any advantages/drawbacks of either configuration?
> 
> I would make partitions, for two reasons:
> 
> * leave some space free in case a replacement disk has
>   a slightly different usable size - SSDs in particular
>   vary here because of overprovisioning for wear leveling
> 
> * whenever you decide to replace the disks with SSDs,
>   the free space will enhance their overall lifetime

These are good reasons.  I have also seen reports of NAS devices/distros
unconditionally partitioning devices that don't have a partition table.
Although I have in the past used raw devices with mdadm, I don't plan to
do so with any future systems.

Phil


* Re: New setup: partitions or raw devices
  2017-11-29 16:44 ` Reindl Harald
  2017-11-29 16:52   ` Phil Turmel
@ 2017-11-29 17:38   ` Gandalf Corvotempesta
  2017-11-29 18:28     ` Reindl Harald
  1 sibling, 1 reply; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 17:38 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Linux RAID Mailing List

2017-11-29 17:44 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
> * leave some space free in case a replacement disk has
>   a slightly different usable size - SSDs in particular
>   vary here because of overprovisioning for wear leveling

Good reason.
How much space would you reserve? 100 MB? 1 GB?


* Re: New setup: partitions or raw devices
  2017-11-29 16:52   ` Phil Turmel
@ 2017-11-29 17:42     ` Gandalf Corvotempesta
  2017-11-29 17:49       ` Phil Turmel
  2017-11-29 22:10     ` Chris Murphy
  1 sibling, 1 reply; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 17:42 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Reindl Harald, Linux RAID Mailing List

2017-11-29 17:52 GMT+01:00 Phil Turmel <philip@turmel.org>:
> These are good reasons.  I have also seen reports of NAS devices/distros
> unconditionally partitioning devices that don't have a partition table.
> Although I have in the past used raw devices with mdadm, I don't plan to
> do so with any future systems.

I have always used partitioning.
But the last time, I tried to configure mdadm to automatically replace a
disk when one is swapped in:
https://linux.die.net/man/5/mdadm.conf
See the "POLICY" section; I tried them all, but the best candidate seems
to be "spare-same-slot".

I was never able to make that work. I tried everything: inserting a
brand-new disk, inserting an already-partitioned disk, and so on.
I always thought the main issue was the disk partitioning: mdadm can't
re-add a brand-new disk if the RAID is built on partitions (the new
disk doesn't have any partitions).

Any ideas?


* Re: New setup: partitions or raw devices
  2017-11-29 17:42     ` Gandalf Corvotempesta
@ 2017-11-29 17:49       ` Phil Turmel
       [not found]         ` <CAJH6TXjFoUOCySnq2ErjTT9rb10XSc2saY=Q3RDheT7thOOFPg@mail.gmail.com>
  0 siblings, 1 reply; 35+ messages in thread
From: Phil Turmel @ 2017-11-29 17:49 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: Reindl Harald, Linux RAID Mailing List

On 11/29/2017 12:42 PM, Gandalf Corvotempesta wrote:
> 2017-11-29 17:52 GMT+01:00 Phil Turmel <philip@turmel.org>:
>> These are good reasons.  I have also seen reports of NAS devices/distros
>> unconditionally partitioning devices that don't have a partition table.
>> Although I have in the past used raw devices with mdadm, I don't plan to
>> do so with any future systems.
> 
> I have always used partitioning.
> But the last time, I tried to configure mdadm to automatically replace a
> disk when one is swapped in:
> https://linux.die.net/man/5/mdadm.conf
> See the "POLICY" section; I tried them all, but the best candidate seems
> to be "spare-same-slot".
> 
> I was never able to make that work. I tried everything: inserting a
> brand-new disk, inserting an already-partitioned disk, and so on.
> I always thought the main issue was the disk partitioning: mdadm can't
> re-add a brand-new disk if the RAID is built on partitions (the new
> disk doesn't have any partitions).
> 
> Any ideas?

I don't let mdadm do any such automatic allocation of new devices.  If
you have to manually put the device in the server, it's only a few
seconds more to note which device it is in dmesg and add it to the
array.  (And partition it, if not already.)
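
For the record, a sketch of that manual sequence (device names are
illustrative; assumes GPT and that /dev/sda is a healthy array member):

  sgdisk -R /dev/sdd /dev/sda   # replicate sda's partition table onto the new sdd
  sgdisk -G /dev/sdd            # randomize GUIDs so the two disks don't collide
  mdadm /dev/md0 --add /dev/sdd1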

Phil


* Re: New setup: partitions or raw devices
  2017-11-29 17:38   ` Gandalf Corvotempesta
@ 2017-11-29 18:28     ` Reindl Harald
  2017-11-29 19:51       ` Gandalf Corvotempesta
  0 siblings, 1 reply; 35+ messages in thread
From: Reindl Harald @ 2017-11-29 18:28 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: Linux RAID Mailing List



On 29.11.2017 at 18:38, Gandalf Corvotempesta wrote:
> 2017-11-29 17:44 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
>> * leave some space free in case a replacement disk has
>>    a slightly different usable size - SSDs in particular
>>    vary here because of overprovisioning for wear leveling
> 
> Good reason.
> How much space would you reserve? 100 MB? 1 GB?

If I were in the position of creating arrays from scratch these days, I
would leave 2-5% of each disk for overprovisioning instead of only the
20 MB I left in 2011, because I am now about to replace those disks with
SSDs, which I didn't anticipate back then.


* Re: New setup: partitions or raw devices
  2017-11-29 18:28     ` Reindl Harald
@ 2017-11-29 19:51       ` Gandalf Corvotempesta
  2017-11-29 20:02         ` Reindl Harald
  0 siblings, 1 reply; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 19:51 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Linux RAID Mailing List

2-5%? With current HDD sizes (e.g. 8 TB), 2% is about 160 GB.
I don't have disks that size, only 2 or 4 TB ones. Are hundreds of
GB really needed? Why not 100 MB?

2017-11-29 19:28 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
>
>
> On 29.11.2017 at 18:38, Gandalf Corvotempesta wrote:
>>
>> 2017-11-29 17:44 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
>>>
>>> * leave some space free in case a replacement disk has
>>>    a slightly different usable size - SSDs in particular
>>>    vary here because of overprovisioning for wear leveling
>>
>>
>> Good reason.
>> How much space would you reserve? 100 MB? 1 GB?
>
>
> If I were in the position of creating arrays from scratch these days, I
> would leave 2-5% of each disk for overprovisioning instead of only the
> 20 MB I left in 2011, because I am now about to replace those disks with
> SSDs, which I didn't anticipate back then.


* Fwd: New setup: partitions or raw devices
       [not found]             ` <CAJH6TXgbfgg_dk9oasVExn=RPVZqQDKN2AWAmPi1U2=PiACAHA@mail.gmail.com>
@ 2017-11-29 19:54               ` Gandalf Corvotempesta
  0 siblings, 0 replies; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 19:54 UTC (permalink / raw)
  To: Linux RAID Mailing List

Repost due to previous HTML mail.


---------- Forwarded message ----------
From: Gandalf Corvotempesta <gandalf.corvotempesta@gmail.com>
Date: 2017-11-29 19:09 GMT+01:00
Subject: Re: New setup: partitions or raw devices
To: Phil Turmel <philip@turmel.org>
Cc: Reindl Harald <h.reindl@thelounge.net>, Linux RAID Mailing List
<linux-raid@vger.kernel.org>


Sure, but this requires a sysadmin logged in on the system.

If I'm far away, nobody else can change a disk, because I'm the only
one with root privileges.


* Re: New setup: partitions or raw devices
  2017-11-29 19:51       ` Gandalf Corvotempesta
@ 2017-11-29 20:02         ` Reindl Harald
  2017-11-29 22:02           ` Gandalf Corvotempesta
  2017-11-29 22:20           ` Wol's lists
  0 siblings, 2 replies; 35+ messages in thread
From: Reindl Harald @ 2017-11-29 20:02 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: Linux RAID Mailing List



On 29.11.2017 at 20:51, Gandalf Corvotempesta wrote:
> 2-5%? With current HDD sizes (e.g. 8 TB), 2% is about 160 GB.
> I don't have disks that size, only 2 or 4 TB ones. Are hundreds of
> GB really needed? Why not 100 MB?

In the case of 2 TB disks, it's 40 GB per disk.

100 MB IMHO doesn't help much for SSD overprovisioning, given how much
overprovisioning can extend the lifetime relative to the price - and
your disks are typically not completely full anyway.

In a 4-drive RAID10 that's 80 GB of usable disk space given up to
overprovisioning, out of around 3.6 TB (1000 versus 1024), while you
get a lightning-fast array which should last as long as possible
without replacing terribly expensive drives.

Why not RAID5/6? Besides
https://www.askdbmgt.com/why-raid5-should-be-avoided-at-all-costs.html,
the parity data means additional writes wearing out the drives.

> 2017-11-29 19:28 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
>>
>>
> On 29.11.2017 at 18:38, Gandalf Corvotempesta wrote:
>>
>> 2017-11-29 17:44 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
>>>
>>> * leave some space free in case a replacement disk has
>>>    a slightly different usable size - SSDs in particular
>>>    vary here because of overprovisioning for wear leveling
>>
>>
>> Good reason.
>> How much space would you reserve? 100 MB? 1 GB?
>
>
> If I were in the position of creating arrays from scratch these days, I
> would leave 2-5% of each disk for overprovisioning instead of only the
> 20 MB I left in 2011, because I am now about to replace those disks with
> SSDs, which I didn't anticipate back then.


* Re: New setup: partitions or raw devices
  2017-11-29 20:02         ` Reindl Harald
@ 2017-11-29 22:02           ` Gandalf Corvotempesta
  2017-11-29 22:10             ` Reindl Harald
  2017-11-29 22:20           ` Wol's lists
  1 sibling, 1 reply; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 22:02 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Linux RAID Mailing List

2017-11-29 21:02 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
> In the case of 2 TB disks, it's 40 GB per disk.

Yes

> 100 MB IMHO doesn't help much for SSD overprovisioning, given how much
> overprovisioning can extend the lifetime relative to the price - and
> your disks are typically not completely full anyway.

I'm talking about multi-terabyte disks; I don't think I'll replace them
with SSDs any time soon...

> In a 4-drive RAID10 that's 80 GB of usable disk space given up to
> overprovisioning, out of around 3.6 TB (1000 versus 1024), while you
> get a lightning-fast array which should last as long as possible
> without replacing terribly expensive drives.

Yes, but you are talking about SSDs...
Here I have many HDDs...

> Why not RAID5/6? Besides
> https://www.askdbmgt.com/why-raid5-should-be-avoided-at-all-costs.html,
> the parity data means additional writes wearing out the drives.

I think exactly the opposite.
I will never run anything with a redundancy level lower than 2.
I'm totally against RAID10, after losing (4 times!) a whole RAID to a
double failure on the same mirror.

Now I only use RAID-6, on very small arrays (6 disks maximum).
If I need more space, I'll create a RAID-60.
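
(md has no native RAID-60 level; a rough sketch of stacking it by hand,
device names illustrative, is a RAID-0 across two RAID-6 arrays:)

  mdadm --create /dev/md10 --level=6 --raid-devices=6 /dev/sd[b-g]1
  mdadm --create /dev/md11 --level=6 --raid-devices=6 /dev/sd[h-m]1
  mdadm --create /dev/md20 --level=0 --raid-devices=2 /dev/md10 /dev/md11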

My newer servers with SSDs are running 4 SSDs in RAID-6 (or RAID-Z2,
depending).


* Re: New setup: partitions or raw devices
  2017-11-29 16:52   ` Phil Turmel
  2017-11-29 17:42     ` Gandalf Corvotempesta
@ 2017-11-29 22:10     ` Chris Murphy
  2017-11-29 22:14       ` Gandalf Corvotempesta
  2017-11-29 22:14       ` Chris Murphy
  1 sibling, 2 replies; 35+ messages in thread
From: Chris Murphy @ 2017-11-29 22:10 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Reindl Harald, Gandalf Corvotempesta, Linux RAID Mailing List

On Wed, Nov 29, 2017 at 9:52 AM, Phil Turmel <philip@turmel.org> wrote:

> These are good reasons.  I have also seen reports of NAS devices/distros
> unconditionally partitioning devices that don't have a partition table.
> Although I have in the past used raw devices with mdadm, I don't plan to
> do so with any future systems.

I'm in the same boat. Historically I've used raw devices, no
partitioning, but I'm going to GPT-partition everything from here on
out, even if it's just a single partition.

a. It makes it unambiguous that the drive has had some purpose; it's
not a blank slate.
b. GPT is unambiguous, checksummed, has redundancy, and has a
user-definable partition name (72 bytes of UTF-16LE) to make it even
more unambiguous.
c. The MBR had too few type codes, leading to a lot of ambiguity about
a partition's contents, so on Linux the idea was for libblkid to have a
thorough understanding of (almost) every conceivable volume-format
signature. GPT solves this in a more standard way, and the explosion of
undefined volume formats and binary blobs installed on partitions has
made keeping up with their identification in blkid increasingly
challenging.
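
Concretely, a sketch of the single whole-disk partition (device name
and label text are illustrative - the label is whatever makes the
drive's purpose unambiguous to you):

  sgdisk -n 1:0:0 -t 1:8300 -c 1:"nas bay 3, md member 2017-11" /dev/sdX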

So anyway... GPT is good. And when LUKS2 gets a little farther along
the torture testing I'll move to that as well, as it too has redundant
metadata.

-- 
Chris Murphy


* Re: New setup: partitions or raw devices
  2017-11-29 22:02           ` Gandalf Corvotempesta
@ 2017-11-29 22:10             ` Reindl Harald
  2017-11-29 22:25               ` Gandalf Corvotempesta
  0 siblings, 1 reply; 35+ messages in thread
From: Reindl Harald @ 2017-11-29 22:10 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: Linux RAID Mailing List



On 29.11.2017 at 23:02, Gandalf Corvotempesta wrote:
> 2017-11-29 21:02 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
>> In the case of 2 TB disks, it's 40 GB per disk.
> 
> Yes
> 
>> 100 MB IMHO doesn't help much for SSD overprovisioning, given how much
>> overprovisioning can extend the lifetime relative to the price - and
>> your disks are typically not completely full anyway.
> 
> I'm talking about multi-terabyte disks; I don't think I'll replace them
> with SSDs any time soon...

I thought the same in 2011 and now I regret it, *because* I am changing
the 2 TB disks for SSDs (I would sell my soul for RAID10 supporting
RAID1's write-mostly feature, so that writes still hit the 2 remaining
HDDs but reads come de facto from the SSD stripe part).

[harry@srv-rhsoft:~]$ df
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md1       ext4   29G  6.8G   22G  24% /
/dev/md0       ext4  485M   35M  446M   8% /boot
/dev/md2       ext4  3.6T  1.9T  1.8T  53% /mnt/data

[0:0:0:0]    disk    ATA      Samsung SSD 850  2B6Q  /dev/sda
[1:0:0:0]    disk    ATA      Samsung SSD 850  2B6Q  /dev/sdb
[2:0:0:0]    disk    ATA      WDC WD2003FYYS-0 1D01  /dev/sdc
[3:0:0:0]    disk    ATA      ST2000DX002-2DV1 CC41  /dev/sdd
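
(On plain RAID1 the write-mostly flag is set like this - a sketch,
device names illustrative:)

  mdadm --create /dev/md9 --level=1 --raid-devices=2 \
        /dev/sda3 --write-mostly /dev/sdc3
  # or flip it at runtime on an existing member via sysfs:
  echo writemostly > /sys/block/md9/md/dev-sdc3/state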


* Re: New setup: partitions or raw devices
  2017-11-29 22:10     ` Chris Murphy
@ 2017-11-29 22:14       ` Gandalf Corvotempesta
  2017-11-29 22:27         ` Chris Murphy
  2017-11-29 22:14       ` Chris Murphy
  1 sibling, 1 reply; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 22:14 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Phil Turmel, Reindl Harald, Linux RAID Mailing List

2017-11-29 23:10 GMT+01:00 Chris Murphy <lists@colorremedies.com>:
> a. It makes it unambiguous that the drive has had some purpose; it's
> not a blank slate.
> b. GPT is unambiguous, checksummed, has redundancy, and has a
> user-definable partition name (72 bytes of UTF-16LE) to make it even
> more unambiguous.
> c. The MBR had too few type codes, leading to a lot of ambiguity about
> a partition's contents, so on Linux the idea was for libblkid to have a
> thorough understanding of (almost) every conceivable volume-format
> signature. GPT solves this in a more standard way, and the explosion of
> undefined volume formats and binary blobs installed on partitions has
> made keeping up with their identification in blkid increasingly
> challenging.

So, if you were me, how many GPT partitions, and which types, would
you make?


* Re: New setup: partitions or raw devices
  2017-11-29 22:10     ` Chris Murphy
  2017-11-29 22:14       ` Gandalf Corvotempesta
@ 2017-11-29 22:14       ` Chris Murphy
  1 sibling, 0 replies; 35+ messages in thread
From: Chris Murphy @ 2017-11-29 22:14 UTC (permalink / raw)
  To: Linux RAID Mailing List

An exception to the single-partition "rule" is UDF. To get a fully
cross-platform-supported UDF volume, it needs to own the entire
physical block device. It's quite a bit saner as a cross-platform
volume format than FAT32 or exFAT, for sticks or drives.


Chris Murphy


* Re: New setup: partitions or raw devices
  2017-11-29 20:02         ` Reindl Harald
  2017-11-29 22:02           ` Gandalf Corvotempesta
@ 2017-11-29 22:20           ` Wol's lists
  2017-11-29 22:27             ` Reindl Harald
  1 sibling, 1 reply; 35+ messages in thread
From: Wol's lists @ 2017-11-29 22:20 UTC (permalink / raw)
  To: Reindl Harald, Gandalf Corvotempesta; +Cc: Linux RAID Mailing List

On 29/11/17 20:02, Reindl Harald wrote:
> Why not RAID5/6? Besides
> https://www.askdbmgt.com/why-raid5-should-be-avoided-at-all-costs.html,
> the parity data means additional writes wearing out the drives.

So, if I have a four-drive raid 5, for every 3 blocks of data I write I 
write 1 parity block. But with raid 1 or 10, for every 3 blocks of data 
I write, I write *3* "parity" blocks!

What was that about "the additional writes wearing out the drives" then?

(Yes, I get the write amplification thing - but if you are writing a lot 
of data, then raid 5 needs far *fewer* writes.)

Cheers,
Wol


* Re: New setup: partitions or raw devices
  2017-11-29 22:10             ` Reindl Harald
@ 2017-11-29 22:25               ` Gandalf Corvotempesta
  2017-11-29 22:34                 ` Reindl Harald
  0 siblings, 1 reply; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-11-29 22:25 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Linux RAID Mailing List

2017-11-29 23:10 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
> [0:0:0:0]    disk    ATA      Samsung SSD 850  2B6Q  /dev/sda
> [1:0:0:0]    disk    ATA      Samsung SSD 850  2B6Q  /dev/sdb

These are consumer SSDs.
Even the "PRO" is consumer-grade, and a 2 TB one is priced at about 800 USD.

Look at Intel's DC SSDs: the S3610 1.6 TB is about 1500 USD. Twice the
price, 20% less capacity.


* Re: New setup: partitions or raw devices
  2017-11-29 22:14       ` Gandalf Corvotempesta
@ 2017-11-29 22:27         ` Chris Murphy
  0 siblings, 0 replies; 35+ messages in thread
From: Chris Murphy @ 2017-11-29 22:27 UTC (permalink / raw)
  To: Gandalf Corvotempesta
  Cc: Chris Murphy, Phil Turmel, Reindl Harald, Linux RAID Mailing List

On Wed, Nov 29, 2017 at 3:14 PM, Gandalf Corvotempesta
<gandalf.corvotempesta@gmail.com> wrote:

>
> So, if you were me, how many GPT partitions, and which types, would
> you make?

What's the complete use case scenario? I can't really tell from the first post.

What I do is use gdisk type code 8300 (generic Linux partition), then
format that partition as LUKS1. Then I open the LUKS device, make it a
PV, create a VG - and do that on every physical device.

Now you can make however many LVs you want, any size you want, in
arbitrary order, with whatever raid level you want. LVM's
raid0/1/10/5/6 use the md driver in the kernel, same as mdadm, but with
LVM metadata, which means everything is managed with LVM (disk
replacement and so forth). And there are still features mdadm has that
LVM raid does not implement. So it's not a question of which is better
overall; it's a question of which one fits your use case best, and
which tools you're most familiar with.
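
A rough sketch of that stack per device (all names are placeholders,
and the raid6 LV assumes enough PVs in the VG):

  sgdisk -n 1:0:0 -t 1:8300 /dev/sdX
  cryptsetup luksFormat --type luks1 /dev/sdX1
  cryptsetup open /dev/sdX1 luks-sdX1
  pvcreate /dev/mapper/luks-sdX1
  vgcreate bigvg /dev/mapper/luks-sdX1   # vgextend bigvg ... for the rest
  lvcreate --type raid10 -i 2 -m 1 -L 200G -n fastlv bigvg
  lvcreate --type raid6  -i 4 -L 2T -n bulklv bigvg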

But it's pretty neat to be able to have one big VG and arbitrarily
create LVs that themselves carry the raid type. It's a lot easier to
manage when you need different levels of redundancy but aren't
completely certain what the utilization of those volumes will be: you
can leave some unallocated space in the VG "pool" so you can increase
the size of any LV on demand whenever you want.

If you're more comfortable with a conventional approach, GPT > LUKS >
mdadm > LVM > file system is just fine. I pretty much always start out
with LUKS for data drives, because it's the only way to be certain
you're not leaking data should the drive need to be repurposed, or
returned under warranty when it won't spin up. It's also far easier to
just luksErase or luksFormat to obliterate everything on the drive than
to do a complete teardown of each layer's signature - and you really
shouldn't leave stale storage-stack layers behind when repurposing
drives, as that can cause confusion later.

-- 
Chris Murphy


* Re: New setup: partitions or raw devices
  2017-11-29 22:20           ` Wol's lists
@ 2017-11-29 22:27             ` Reindl Harald
  2017-12-01 16:19               ` Nix
  0 siblings, 1 reply; 35+ messages in thread
From: Reindl Harald @ 2017-11-29 22:27 UTC (permalink / raw)
  To: Wol's lists, Gandalf Corvotempesta; +Cc: Linux RAID Mailing List



On 29.11.2017 at 23:20, Wol's lists wrote:
> On 29/11/17 20:02, Reindl Harald wrote:
>> Why not RAID5/6? Besides
>> https://www.askdbmgt.com/why-raid5-should-be-avoided-at-all-costs.html,
>> the parity data means additional writes wearing out the drives.
> 
> So, if I have a four-drive raid 5, for every 3 blocks of data I write I 
> write 1 parity block. But with raid 1 or 10, for every 3 blocks of data 
> I write, I write *3* "parity" blocks!
> 
> What was that about "the additional writes wearing out the drives" then?
> 
> (Yes, I get the write amplification thing - but if you are writing a lot 
> of data, then raid 5 needs far *fewer* writes.)

RAID10 has a lot of other benefits:

* better performance overall
* lower CPU usage
* only 1 disk is hit with reads during a rebuild
* you can clone a whole physical machine easily by just moving
   half of the drives to a new machine and starting a rebuild on
   both - in the past years I have cloned multiple machines that
   way, which are then kept in sync with "synch-machine.sh push/get"

Especially the cloning saves a lot of time when you have collected,
over the years, machines with all sorts of use cases and storage
sizes, and you just need to consider which one has the most similar
setup to the new one.

Before I drive home from the office: "synch-machine.sh push &&
poweroff"; when I arrive: CTRL+ALT+F2 -> log in as root ->
"synch-machine.sh get", then CTRL+ALT+F1 -> graphical login.

That way of working is, BTW, a great backup against human mistakes :-)



* Re: New setup: partitions or raw devices
  2017-11-29 22:25               ` Gandalf Corvotempesta
@ 2017-11-29 22:34                 ` Reindl Harald
  2017-12-01 16:18                   ` Nix
  0 siblings, 1 reply; 35+ messages in thread
From: Reindl Harald @ 2017-11-29 22:34 UTC (permalink / raw)
  To: Gandalf Corvotempesta; +Cc: Linux RAID Mailing List



On 29.11.2017 at 23:25, Gandalf Corvotempesta wrote:
> 2017-11-29 23:10 GMT+01:00 Reindl Harald <h.reindl@thelounge.net>:
>> [0:0:0:0]    disk    ATA      Samsung SSD 850  2B6Q  /dev/sda
>> [1:0:0:0]    disk    ATA      Samsung SSD 850  2B6Q  /dev/sdb
> 
> These are consumer SSDs.

So what?

> Even the "PRO" is consumer-grade, and a 2 TB one is priced at about 800 USD.

In doubt - so what, too - that's nothing you buy every day, and in the
case of a RAID10 I pull the drives holding the OS and data, move them
to the next machine, change the MAC of the internal NIC in the config,
and I am done (the other network cards and the wireless card are moved
too).

> Look at Intel's DC SSDs: the S3610 1.6 TB is about 1500 USD. Twice the
> price, 20% less capacity.

And do you really need them?
For what use case?

Performance?

Well, a RAID10 versus a RAID1 of two nearly double-priced drives
achieves the same - or at least nothing different that you could
notice in real-world usage - and if one dies, that's what RAID is for
(the I is for inexpensive). A few years later you get a "consumer SSD"
of the same size, probably much cheaper, while a ton of tests prove
that consumer SSDs can absorb a lot of writes over a long time too.


* Re: New setup: partitions or raw devices
  2017-11-29 22:34                 ` Reindl Harald
@ 2017-12-01 16:18                   ` Nix
  2017-12-02 13:01                     ` Gandalf Corvotempesta
  0 siblings, 1 reply; 35+ messages in thread
From: Nix @ 2017-12-01 16:18 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Gandalf Corvotempesta, Linux RAID Mailing List

On 29 Nov 2017, Reindl Harald said:

> On 29.11.2017 at 23:25, Gandalf Corvotempesta wrote:
>> Look at Intel's DC SSDs: the S3610 1.6 TB is about 1500 USD. Twice the
>> price, 20% less capacity.
>
> And do you really need them?
> For what use case?

Not bricking or corrupting themselves when the power goes out.

Intel DC SSDs are the only SSDs I have *ever* heard of surviving such
tests.

(btw, if you really are overprovisioning SSDs by hand, 2--5% is not
remotely enough. 10% is not really enough. I'd go for 15--20%.)

-- 
NULL && (void)


* Re: New setup: partitions or raw devices
  2017-11-29 22:27             ` Reindl Harald
@ 2017-12-01 16:19               ` Nix
  2017-12-01 16:27                 ` Reindl Harald
  0 siblings, 1 reply; 35+ messages in thread
From: Nix @ 2017-12-01 16:19 UTC (permalink / raw)
  To: Reindl Harald
  Cc: Wol's lists, Gandalf Corvotempesta, Linux RAID Mailing List

On 29 Nov 2017, Reindl Harald said:

> On 29.11.2017 at 23:20, Wol's lists wrote:
>> On 29/11/17 20:02, Reindl Harald wrote:
>>> Why not RAID5/6? Besides
>>> https://www.askdbmgt.com/why-raid5-should-be-avoided-at-all-costs.html,
>>> the parity data means additional writes wearing out the drives.
>>
>> So, if I have a four-drive raid 5, for every 3 blocks of data I write I write 1 parity block. But with raid 1 or 10, for every 3
>> blocks of data I write, I write *3* "parity" blocks!
>>
>> What was that about "the additional writes wearing out the drives" then?
>>
>> (Yes, I get the write amplification thing - but if you are writing a lot of data, then raid 5 needs far *fewer* writes.)
>
> RAID10 has a lot of other benefits:

That's not actually answering the question that was asked, y'know. If
you're against RAID 5 because the parity writes wear the drives out, you
should be much more strongly against RAID 10 for the same reason.

-- 
NULL && (void)


* Re: New setup: partitions or raw devices
  2017-12-01 16:19               ` Nix
@ 2017-12-01 16:27                 ` Reindl Harald
  2017-12-01 17:18                   ` Wols Lists
  0 siblings, 1 reply; 35+ messages in thread
From: Reindl Harald @ 2017-12-01 16:27 UTC (permalink / raw)
  To: Nix; +Cc: Wol's lists, Gandalf Corvotempesta, Linux RAID Mailing List



On 01.12.2017 at 17:19, Nix wrote:
> That's not actually answering the question that was asked, y'know. If
> you're against RAID 5 because the parity writes wear the drives out, you
> should be much more strongly against RAID 10 for the same reason

RAID10 is simple mirroring of stripes




* Re: New setup: partitions or raw devices
  2017-12-01 16:27                 ` Reindl Harald
@ 2017-12-01 17:18                   ` Wols Lists
  2017-12-01 22:22                     ` Nix
  2017-12-01 23:44                     ` Reindl Harald
  0 siblings, 2 replies; 35+ messages in thread
From: Wols Lists @ 2017-12-01 17:18 UTC (permalink / raw)
  To: Reindl Harald, Nix; +Cc: Gandalf Corvotempesta, Linux RAID Mailing List

On 01/12/17 16:27, Reindl Harald wrote:
> 
> 
>> On 01.12.2017 at 17:19, Nix wrote:
>> That's not actually answering the question that was asked, y'know. If
>> you're against RAID 5 because the parity writes wear the drives out, you
>> should be much more strongly against RAID 10 for the same reason
> 
> RAID10 is simple mirroring of stripes
> 
And?

The more I think about it, the more I come to the conclusion that
raid-10 is a bad idea for (a) minimising writes (and wear), and (b)
safeguarding your data.

Yes, it does have advantages, and yes, I plan to put a raid-10 array on
my new system; but if reducing wear or protecting data are your
priorities, raid-10 is the wrong choice.

Cheers,
Wol



* Re: New setup: partitions or raw devices
  2017-12-01 17:18                   ` Wols Lists
@ 2017-12-01 22:22                     ` Nix
  2017-12-01 23:44                     ` Reindl Harald
  1 sibling, 0 replies; 35+ messages in thread
From: Nix @ 2017-12-01 22:22 UTC (permalink / raw)
  To: Wols Lists; +Cc: Reindl Harald, Gandalf Corvotempesta, Linux RAID Mailing List

On 1 Dec 2017, Wols Lists stated:
> Yes, it does have advantages, and yes, I plan to put a raid-10 array on
> my new system; but if reducing wear or protecting data are your
> priorities, raid-10 is the wrong choice.

It depends on the access patterns. It's no worse at safeguarding data
than RAID-5 and can be better (some, but not all, combinations of
multiple disk failures will lead to no data loss, something which would
otherwise require RAID-6). It's faster at reads and much faster at
random writes. The cost: possibly quite a lot more spinning rust for the
amount of available storage. (For me, power-consumption considerations
led me to stick with parity RAID, though I've gone to RAID-6.)

-- 
NULL && (void)


* Re: New setup: partitions or raw devices
  2017-12-01 17:18                   ` Wols Lists
  2017-12-01 22:22                     ` Nix
@ 2017-12-01 23:44                     ` Reindl Harald
  2017-12-02 13:14                       ` Gandalf Corvotempesta
  2017-12-02 13:19                       ` Nix
  1 sibling, 2 replies; 35+ messages in thread
From: Reindl Harald @ 2017-12-01 23:44 UTC (permalink / raw)
  To: Wols Lists, Nix; +Cc: Gandalf Corvotempesta, Linux RAID Mailing List



On 01.12.2017 at 18:18, Wols Lists wrote:
> On 01/12/17 16:27, Reindl Harald wrote:
>>
>>
>>> On 01.12.2017 at 17:19, Nix wrote:
>>> That's not actually answering the question that was asked, y'know. If
>>> you're against RAID 5 because the parity writes wear the drives out, you
>>> should be much more strongly against RAID 10 for the same reason
>>
>> RAID10 is simple mirroring of stripes
>>
> And?

Why should mirroring onto a different disk wear more?

> The more I think about it, the more I come to the conclusion that
> raid-10 is a bad idea for (a) minimising writes (and wear), and (b)
> safeguarding your data.

Why?

> Yes, it does have advantages, and yes, I plan to put a raid-10 array on
> my new system; but if reducing wear or protecting data are your
> priorities, raid-10 is the wrong choice.

Wrong!

Rebuilds are much faster than with RAID5 - or RAID6, which is worst of
all - so the degraded window is smaller; and a RAID10 *can* survive two
failed disks, while a RAID5 is dead for sure.

RAID10 is practically a RAID1 with the performance enhancement of
RAID0, and if the Linux kernel supported "write-mostly" on RAID10 as it
does on RAID1, it would be the perfect choice for a hybrid RAID for
workloads that are mostly read-bound, combining all worlds (RAID0 speed
for reads served from just the two SSD stripes, redundancy like RAID1,
and only 2 SSDs and 2 HDDs needed for redundancy).

RAID5 has *zero* benefits over RAID10 except cost, while RAID6 is
terrible for performance and disk wear.

* Re: New setup: partitions or raw devices
  2017-12-01 16:18                   ` Nix
@ 2017-12-02 13:01                     ` Gandalf Corvotempesta
  0 siblings, 0 replies; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-12-02 13:01 UTC (permalink / raw)
  To: Nix, Reindl Harald; +Cc: Linux RAID Mailing List

On 01/12/2017 17:18, Nix wrote:
>
> Not bricking or corrupting themselves when the power goes out.
>
> Intel DC SSDs are the only SSDs I have *ever* heard of surviving such
> tests.

Exactly.
Additionally, enterprise SSDs are far more reliable than consumer SSDs.
In a RAID-1, where both disks get the same identical write pattern, the
risk of a double failure at the same time is high.

And if you have to proactively replace an SSD before it fails,
then you are not using "inexpensive" disks anymore.


* Re: New setup: partitions or raw devices
  2017-12-01 23:44                     ` Reindl Harald
@ 2017-12-02 13:14                       ` Gandalf Corvotempesta
  2017-12-02 13:56                         ` Reindl Harald
  2017-12-02 17:12                         ` Phil Turmel
  2017-12-02 13:19                       ` Nix
  1 sibling, 2 replies; 35+ messages in thread
From: Gandalf Corvotempesta @ 2017-12-02 13:14 UTC (permalink / raw)
  To: Reindl Harald, Wols Lists, Nix; +Cc: Linux RAID Mailing List

On 02/12/2017 00:44, Reindl Harald wrote:
> a RAID10 *can* survive two failed disks, while a RAID5 is dead for sure
Absolutely not.
This is a common misconception.
RAID-10 can survive two failed disks IF AND ONLY IF those disks are in
different mirrors.

I have had multiple (more than 4) RAID-10s totally lost due to a double
failure in the same mirror.
Each disk in a mirror gets the same write pattern, so you risk a double
failure from any firmware bug or similar. With SSDs this risk is
higher, because you wear out both members in the same way at the same
time.
If the controller kicks out two adjacent disks, you lose everything.
(This happened to me: a fully synced RAID10; disk0 was kicked out by
the controller, the RAID survived, and after a couple of minutes disk0
was automatically reactivated and started a rebuild. During the
rebuild, disk1 was kicked out. RAID lost: disk0 out of sync and disk1
kicked out.)

RAID-6 is *much* safer than RAID-1/RAID-10, as it can survive the
failure of ANY TWO disks; you only lose data on a third failure.

* Re: New setup: partitions or raw devices
  2017-12-01 23:44                     ` Reindl Harald
  2017-12-02 13:14                       ` Gandalf Corvotempesta
@ 2017-12-02 13:19                       ` Nix
  2017-12-02 14:01                         ` Reindl Harald
  1 sibling, 1 reply; 35+ messages in thread
From: Nix @ 2017-12-02 13:19 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Wols Lists, Gandalf Corvotempesta, Linux RAID Mailing List

On 1 Dec 2017, Reindl Harald said:

> RAID5 has *zero* benefits over RAID10 except cost, while RAID6 is
> terrible for performance and disk wear.

You still haven't explained why RAID-6 wears out disks more than RAID-10
does.

RAID-5 has one huge benefit over RAID-10: more accessible storage for a
given number of disks, and, given that power is not free, that means
lower running costs as well, as well as lower noise, lower vibration,
lower maintenance costs (since having more disks does mean you have to
replace disks more often, even if the chance of losing data when one
fails is reduced by RAID).

If RAID-5 and RAID-6 had no benefits at all over RAID-10 it is unlikely
they would still be in wide use. They are, even for new installations,
because they truly do offer benefits for some use cases. They may not
for yours, but that doesn't mean they don't for anyone.


* Re: New setup: partitions or raw devices
  2017-12-02 13:14                       ` Gandalf Corvotempesta
@ 2017-12-02 13:56                         ` Reindl Harald
  2017-12-02 17:12                         ` Phil Turmel
  1 sibling, 0 replies; 35+ messages in thread
From: Reindl Harald @ 2017-12-02 13:56 UTC (permalink / raw)
  To: Gandalf Corvotempesta, Wols Lists, Nix; +Cc: Linux RAID Mailing List



On 02.12.2017 at 14:14, Gandalf Corvotempesta wrote:
> On 02/12/2017 00:44, Reindl Harald wrote:
>> a RAID10 *can* survive two failed disks, while a RAID5 is dead for sure
> Absolutely not.
> This is a common misconception.
> RAID-10 can survive two failed disks IF AND ONLY IF those disks are in
> different mirrors.

I know that - that's the difference between "will" and "can".

> I have had multiple (more than 4) RAID-10s totally lost due to a double
> failure in the same mirror.
> Each disk in a mirror gets the same write pattern, so you risk a double
> failure from any firmware bug or similar. With SSDs this risk is
> higher, because you wear out both members in the same way at the same
> time.

If you are not an idiot, the disks in a mirror are not the same model
or the same age - guess why I replaced the first two of the HDDs with
SSDs and plan to replace the remaining two sometime next year.

And for a 4-disk RAID10 with HDDs I typically use 4 entirely different
disks, from as many vendors as possible (the machine in the office is
from a time when you had 4 real vendors).


* Re: New setup: partitions or raw devices
  2017-12-02 13:19                       ` Nix
@ 2017-12-02 14:01                         ` Reindl Harald
  2017-12-02 20:35                           ` Nix
  0 siblings, 1 reply; 35+ messages in thread
From: Reindl Harald @ 2017-12-02 14:01 UTC (permalink / raw)
  To: Nix; +Cc: Wols Lists, Gandalf Corvotempesta, Linux RAID Mailing List



On 02.12.2017 at 14:19, Nix wrote:
> On 1 Dec 2017, Reindl Harald said:
> 
>> RAID5 has *zero* benefits over RAID10 except cost, while RAID6 is
>> terrible for performance and disk wear.
> 
> You still haven't explained why RAID-6 wears out disks more than RAID-10
> does.

What about common sense?
Double parity, anyone?

A write goes to at least 3 disks of the array, instead of two in the
case of a 4-member RAID10.

> RAID-5 has one huge benefit over RAID-10: more accessible storage for a
> given number of disks, and, given that power is not free, that means
> lower running costs as well, as well as lower noise, lower vibration,
> lower maintenance costs (since having more disks does mean you have to
> replace disks more often, even if the chance of losing data when one
> fails is reduced by RAID).

The more accessible storage is the only real one.

> If RAID-5 and RAID-6 had no benefits at all over RAID-10 it is unlikely
> they would still be in wide use. They are, even for new installations,
> because they truly do offer benefits for some use cases. They may not
> for yours, but that doesn't mean they don't for anyone

Yes, when you need far more than 8 TB...


* Re: New setup: partitions or raw devices
  2017-12-02 13:14                       ` Gandalf Corvotempesta
  2017-12-02 13:56                         ` Reindl Harald
@ 2017-12-02 17:12                         ` Phil Turmel
  2017-12-02 18:39                           ` Reindl Harald
  1 sibling, 1 reply; 35+ messages in thread
From: Phil Turmel @ 2017-12-02 17:12 UTC (permalink / raw)
  To: Gandalf Corvotempesta, Reindl Harald, Wols Lists, Nix
  Cc: Linux RAID Mailing List

Hi Gandalf,

You and Reindl are both wrong. (-:

On 12/02/2017 08:14 AM, Gandalf Corvotempesta wrote:
> On 02/12/2017 00:44, Reindl Harald wrote:
>> a RAID10 *can* survive two failed disks, while a RAID5 is dead for sure
> Absolutely not.
> This is a common misconception.
> RAID-10 can survive two failed disks IF AND ONLY IF those disks are in
> different mirrors.

You (Gandalf) are ignoring the ability of raid10 to use multiple copies
across device counts that aren't multiples of the number of copies.  A
raid10,n3 array on five devices provides a total capacity of 1-2/3
(i.e. 5/3) times the capacity of one member, but can survive the
failure of any two members, just like a raid6.

It has blistering fast read performance, especially for parallel loads.
It has the same write performance as a similar-capacity raid0.  It has
terrible write amplification on SSDs, but very low CPU utilization (no
parity computation and never any read-modify-write cycles).
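
(Creating such an array is a one-liner - a sketch, device names
illustrative:)

  mdadm --create /dev/md0 --level=10 --layout=n3 --raid-devices=5 /dev/sd[b-f]1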

Raid10 has a valuable role to play in high-performance systems, and I
use LVM on top of raid10,n3 for my own home directories,
general-purpose user partitions, and the root LVs for
performance-critical VMs.  I use LVM on top of raid6 for media files,
large-capacity databases, and not-so-critical VMs.

> RAID-6 is *much* safer than RAID-1/RAID-10, as it can survive the
> failure of ANY TWO disks; you only lose data on a third failure.

You (Reindl) are crazy to argue about can-vs-will on a raid10,n2
layout.  You're crazy to rely on any single-mirror layout for data you
care about.  It is a disaster waiting to happen (during rebuild) in
exactly the same way raid5 is vulnerable during rebuild.  But raid10,n3
is just as safe as raid6.

Phil


* Re: New setup: partitions or raw devices
  2017-12-02 17:12                         ` Phil Turmel
@ 2017-12-02 18:39                           ` Reindl Harald
  0 siblings, 0 replies; 35+ messages in thread
From: Reindl Harald @ 2017-12-02 18:39 UTC (permalink / raw)
  To: Phil Turmel, Gandalf Corvotempesta, Wols Lists, Nix
  Cc: Linux RAID Mailing List



On 02.12.2017 at 18:12, Phil Turmel wrote:
>> RAID-6 is *much* safer than RAID-1/RAID-10, as it can survive the
>> failure of ANY TWO disks; you only lose data on a third failure.
> 
> You (Reindl) are crazy to argue about can-vs-will on a raid10,n2
> layout.  You're crazy to rely on any single-mirror layout for data you
> care about.  It is a disaster waiting to happen (during rebuild) in
> exactly the same way raid5 is vulnerable during rebuild.  But raid10,n3
> is just as safe as raid6.

Besides the fact that the above is not a quote of mine - nor does the
quote make sense:

No, I take all the benefits RAID10 has *anyway*, and when you have the
mirrors on different drives... but that's only *one* minor point for
RAID10.

Probability is always a valid point when you have to make technical
decisions - be it how long my UPS typically needs to supply power based
on typical power outages; and yes, the first outage could be longer
than any you have ever seen, and then you still lose.

I explained a lot of other benefits of RAID10, so I have no idea why
you hang on one nuance - the most important is that I can clone a
fucking machine by just putting half of the drives in a different one
and starting a rebuild on both machines afterwards.

The whole problem of this thread is in fact the dumb "but you are
talking about SSD" - yes, damn it - I am talking about how I would set
up a machine *now*, and how I would have done it many years ago with
the knowledge of today; the 40 GB of overprovisioning won't matter for
storage size but will be good for the disks (and no, I don't need
another person whining that it's not enough, because that's nonsense -
there is no definitive "enough", and zero overprovisioning won't make
things better at all).



* Re: New setup: partitions or raw devices
  2017-12-02 14:01                         ` Reindl Harald
@ 2017-12-02 20:35                           ` Nix
  2017-12-02 21:41                             ` Reindl Harald
  0 siblings, 1 reply; 35+ messages in thread
From: Nix @ 2017-12-02 20:35 UTC (permalink / raw)
  To: Reindl Harald; +Cc: Wols Lists, Gandalf Corvotempesta, Linux RAID Mailing List

On 2 Dec 2017, Reindl Harald outgrape:

> On 02.12.2017 at 14:19, Nix wrote:
>> If RAID-5 and RAID-6 had no benefits at all over RAID-10 it is unlikely
>> they would still be in wide use. They are, even for new installations,
>> because they truly do offer benefits for some use cases. They may not
>> for yours, but that doesn't mean they don't for anyone
>
> Yes, when you need far more than 8 TB...

I can't figure out whether you're castigating people who need more than
8TiB, castigating people who need less, applauding people who need more
than 8TiB, applauding people who need less, saying that people who need
more than 8TiB should use RAID 10, saying that people who need more
should use RAID 6, or *what*. All I can tell is that you want to be as
unpleasant as possible while saying it. (I suspect this ambiguity is
intentional, so that no matter what people assume you meant, you can
flame them for not interpreting it the other way.)

But then that's not surprising because you've morphed again to try to
escape from people's killfiles. Back you go into mine, *yet again*.

-- 
NULL && (void)


* Re: New setup: partitions or raw devices
  2017-12-02 20:35                           ` Nix
@ 2017-12-02 21:41                             ` Reindl Harald
  0 siblings, 0 replies; 35+ messages in thread
From: Reindl Harald @ 2017-12-02 21:41 UTC (permalink / raw)
  To: Nix; +Cc: Wols Lists, Gandalf Corvotempesta, Linux RAID Mailing List



On 02.12.2017 at 21:35, Nix wrote:
> On 2 Dec 2017, Reindl Harald outgrape:
> 
>> On 02.12.2017 at 14:19, Nix wrote:
>>> If RAID-5 and RAID-6 had no benefits at all over RAID-10 it is unlikely
>>> they would still be in wide use. They are, even for new installations,
>>> because they truly do offer benefits for some use cases. They may not
>>> for yours, but that doesn't mean they don't for anyone
>>
>> Yes, when you need far more than 8 TB...
> 
> I can't figure out whether you're castigating people who need more than
> 8TiB, castigating people who need less, applauding people who need more
> than 8TiB, applauding people who need less, saying that people who need
> more than 8TiB should use RAID 10, saying that people who need more
> should use RAID 6, or *what*. All I can tell is that you want to be as
> unpleasant as possible while saying it. (I suspect this ambiguity is
> intentional, so that no matter what people assume you meant, you can
> flame them for not interpreting it the other way.)

I am not castigating anybody - it's simply always about price, size,
performance, and redundancy level, and since disk sizes and their
prices, as well as the technology, keep changing, there is no definite
answer - but *currently*, up to 8 TB of storage is doable with *any*
raid level without spending many thousands.

For storage sizes up to 8 TB *these days* you get away with 4 disks,
which are not that expensive; if you need 100 TB, the overhead of
needed disks will simply kill you on price.

And for performance you have practically the same "problems" to solve:
small-to-middle arrays with SSDs and RAID10 are possible, and for
middle-to-large arrays you can reach similar performance with RAID10
and HDDs by just using more drives.

> But then that's not surprising because you've morphed again to try to
> escape from people's killfiles. Back you go into mine, *yet again*

I could not care less about you - seriously.



Thread overview: 35+ messages
2017-11-29 16:22 New setup: partitions or raw devices Gandalf Corvotempesta
2017-11-29 16:44 ` Reindl Harald
2017-11-29 16:52   ` Phil Turmel
2017-11-29 17:42     ` Gandalf Corvotempesta
2017-11-29 17:49       ` Phil Turmel
     [not found]         ` <CAJH6TXjFoUOCySnq2ErjTT9rb10XSc2saY=Q3RDheT7thOOFPg@mail.gmail.com>
     [not found]           ` <CAJH6TXhK5XgY-1v49oHcRXBugDMZ6QagKSa-deCA-Q7tPPLRyA@mail.gmail.com>
     [not found]             ` <CAJH6TXgbfgg_dk9oasVExn=RPVZqQDKN2AWAmPi1U2=PiACAHA@mail.gmail.com>
2017-11-29 19:54               ` Fwd: " Gandalf Corvotempesta
2017-11-29 22:10     ` Chris Murphy
2017-11-29 22:14       ` Gandalf Corvotempesta
2017-11-29 22:27         ` Chris Murphy
2017-11-29 22:14       ` Chris Murphy
2017-11-29 17:38   ` Gandalf Corvotempesta
2017-11-29 18:28     ` Reindl Harald
2017-11-29 19:51       ` Gandalf Corvotempesta
2017-11-29 20:02         ` Reindl Harald
2017-11-29 22:02           ` Gandalf Corvotempesta
2017-11-29 22:10             ` Reindl Harald
2017-11-29 22:25               ` Gandalf Corvotempesta
2017-11-29 22:34                 ` Reindl Harald
2017-12-01 16:18                   ` Nix
2017-12-02 13:01                     ` Gandalf Corvotempesta
2017-11-29 22:20           ` Wol's lists
2017-11-29 22:27             ` Reindl Harald
2017-12-01 16:19               ` Nix
2017-12-01 16:27                 ` Reindl Harald
2017-12-01 17:18                   ` Wols Lists
2017-12-01 22:22                     ` Nix
2017-12-01 23:44                     ` Reindl Harald
2017-12-02 13:14                       ` Gandalf Corvotempesta
2017-12-02 13:56                         ` Reindl Harald
2017-12-02 17:12                         ` Phil Turmel
2017-12-02 18:39                           ` Reindl Harald
2017-12-02 13:19                       ` Nix
2017-12-02 14:01                         ` Reindl Harald
2017-12-02 20:35                           ` Nix
2017-12-02 21:41                             ` Reindl Harald
