All of lore.kernel.org
* put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
@ 2021-02-03 19:04  
  2021-02-03 19:33 `  
  2021-02-03 20:23 ` Andrew Luke Nesbit
  0 siblings, 2 replies; 8+ messages in thread
From:   @ 2021-02-03 19:04 UTC (permalink / raw)
  To: , linux-btrfs

Hi All,

I am looking for a way to make a raid 1 of two SSDs, and to be able to detect corrupted blocks, much like btrfs does. I recall being told about a month ago to use a specific piece of software for that, but I forgot to make a note of it, and I can't find it anymore.

What's that called?

Cheers,
Cedric

---

Take your mailboxes with you. Free, fast and secure Mail & Cloud: https://www.eclipso.eu - Time to change!



^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-03 19:04 put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?  
@ 2021-02-03 19:33 `  
  2021-02-04 10:54   ` Andy Smith
  2021-02-03 20:23 ` Andrew Luke Nesbit
  1 sibling, 1 reply; 8+ messages in thread
From:   @ 2021-02-03 19:33 UTC (permalink / raw)
  To: Cedric.dewijs; +Cc: linux-btrfs


--- Original Message ---
From: " " <Cedric.dewijs@eclipso.eu>
Date: 03.02.2021 20:04:32
To: ", linux-btrfs" <linux-btrfs@vger.kernel.org>
Subject: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?

Hi All,

I am looking for a way to make a raid 1 of two SSDs, and to be able to detect
corrupted blocks, much like btrfs does. I recall being told about a
month ago to use a specific piece of software for that, but I forgot to make
a note of it, and I can't find it anymore.

What's that called?

Cheers,
Cedric


Hi All,

it's called "dm-integrity", as mentioned in this e-mail:
https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg93037.html
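
For reference, that stack can be assembled roughly like this (a sketch, untested here; the device names are placeholders, and integritysetup ships with cryptsetup):

```shell
# Format each RAID member with a standalone dm-integrity layer (no encryption).
# WARNING: this destroys existing data on the devices.
integritysetup format /dev/sdX1
integritysetup format /dev/sdY1

# Open them as mapped devices; a checksum mismatch on read becomes an I/O error:
integritysetup open /dev/sdX1 int0
integritysetup open /dev/sdY1 int1

# Build the RAID1 on top; md then repairs a failed read from the good mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/mapper/int0 /dev/mapper/int1
```

This is how dm-integrity gives an mdadm raid 1 the bitrot detection the subject line asks about: the integrity layer turns silent corruption into a read error that md can act on.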

Apologies for the noise,

Cheers,
Cedric






^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-03 19:04 put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?  
  2021-02-03 19:33 `  
@ 2021-02-03 20:23 ` Andrew Luke Nesbit
  2021-02-04  6:57   ` Forza
  1 sibling, 1 reply; 8+ messages in thread
From: Andrew Luke Nesbit @ 2021-02-03 20:23 UTC (permalink / raw)
  To: Cedric.dewijs, linux-btrfs

On 03/02/2021 19:04, Cedric.dewijs@eclipso.eu wrote:
> I am looking for a way to make a raid 1 of two SSD's, and to be able to detect corrupted blocks, much like btrfs does that. I recall being told about a month ago to use a specific piece of software for that, but i forgot to make a note of it, and I can't find it anymore.

Running SSDs in RAID1 has been contentious from the perspective from 
which I have been researching storage technology.

Is there any serious, properly researched, and learned information 
available about this?

The reason I ask is that, in a related situation, I have 4x high-quality 
HGST SLC SAS SSDs, and I was seriously thinking that RAID0 might be the 
appropriate way to configure them.  This assumes a well-designed backup 
strategy, of course.

Is this foolhardy?

Andrew

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-03 20:23 ` Andrew Luke Nesbit
@ 2021-02-04  6:57   ` Forza
  0 siblings, 0 replies; 8+ messages in thread
From: Forza @ 2021-02-04  6:57 UTC (permalink / raw)
  To: Andrew Luke Nesbit, Cedric.dewijs, linux-btrfs



On 2021-02-03 21:23, Andrew Luke Nesbit wrote:
> On 03/02/2021 19:04, Cedric.dewijs@eclipso.eu wrote:
>> I am looking for a way to make a raid 1 of two SSDs, and to be able 
>> to detect corrupted blocks, much like btrfs does. I recall being 
>> told about a month ago to use a specific piece of software for that, 
>> but I forgot to make a note of it, and I can't find it anymore.
> 
> Running SSDs in RAID1 has been contentious from the perspective from 
> which I have been researching storage technology.
> 
> Is there any serious, properly researched, and learned information 
> available about this?
> 
> The reason I ask is that, in a related situation, I have 4x high-quality 
> HGST SLC SAS SSDs, and I was seriously thinking that RAID0 might be the 
> appropriate way to configure them.  This assumes a well-designed backup 
> strategy, of course.
> 
> Is this foolhardy?
> 
> Andrew

Is there a reason why you are not considering Btrfs RAID1? It provides 
redundancy and checksums to protect against bit errors on either mirror. 
Remember that Btrfs RAID1 does not work the same way as mdadm RAID1: it 
mirrors chunks across any two devices rather than mirroring whole block 
devices.
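
The self-healing that checksums buy can be sketched with a toy demo using two plain files as "mirrors" (a conceptual illustration only, not the btrfs implementation):

```shell
set -e
dir=$(mktemp -d)

# Write the same block to both "mirrors" and record its checksum:
printf 'payload' > "$dir/m0"
printf 'payload' > "$dir/m1"
good_sum=$(cksum < "$dir/m0")

# Silently corrupt mirror 0, as bitrot would:
printf 'bitrotX' > "$dir/m0"

# A verified read returns the first copy whose checksum still matches:
read_verified() {
  for m in "$dir/m0" "$dir/m1"; do
    if [ "$(cksum < "$m")" = "$good_sum" ]; then
      cat "$m"
      return 0
    fi
  done
  return 1    # both copies corrupt: report an error instead of bad data
}

[ "$(read_verified)" = "payload" ] && echo "served intact copy"
```

Plain mdadm RAID1 has no stored checksum, so it cannot tell which mirror holds the intact copy; that is the gap dm-integrity (or Btrfs RAID1) fills.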

RAID0 provides no fault tolerance at all. Is there any added performance 
you need from RAID0 in your application?

Forza

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-03 19:33 `  
@ 2021-02-04 10:54   ` Andy Smith
  2021-02-04 17:43     `  
  0 siblings, 1 reply; 8+ messages in thread
From: Andy Smith @ 2021-02-04 10:54 UTC (permalink / raw)
  To: linux-btrfs; +Cc: linux-raid

Hi Cedric,

On Wed, Feb 03, 2021 at 08:33:18PM +0100,   wrote:
> it's called "dm-integrity", as mentioned in this e-mail:
> https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg93037.html

If you do this it would be very interesting to see performance
figures for the following setups:

- btrfs with raid1 meta and data allocation
- mdadm raid1 on raw devices
- mdadm raid1 on dm-integrity (no encryption) on raw devices
- mdadm raid1 on dm-integrity (encryption) on raw devices

just to see what kind of performance loss dm-integrity and
encryption is going to impose.
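
One way to collect comparable figures would be to run the same fio job against each layer in turn (a sketch; the target device is a placeholder, and writing to it destroys its contents):

```shell
# Repeat against each setup: raw device, dm-integrity mapping, /dev/md0,
# or a file on the mounted btrfs. Compare the reported IOPS and latency.
fio --name=randwrite --filename=/dev/md0 --direct=1 \
    --ioengine=libaio --iodepth=16 --rw=randwrite --bs=4k \
    --size=1G --runtime=60 --time_based --group_reporting
```

Keeping every parameter identical across runs is what makes the dm-integrity and encryption overheads show up cleanly in the comparison.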

After doing it, it would find a nice home on the Linux RAID wiki:

    https://raid.wiki.kernel.org/index.php/Dm-integrity

Cheers,
Andy

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-04 10:54   ` Andy Smith
@ 2021-02-04 17:43     `  
  2021-02-04 18:13       ` Goffredo Baroncelli
  0 siblings, 1 reply; 8+ messages in thread
From:   @ 2021-02-04 17:43 UTC (permalink / raw)
  To: Andy Smith; +Cc: linux-btrfs, linux-raid


--- Original Message ---
From: Andy Smith <andy@strugglers.net>
Date: 04.02.2021 11:54:57
To: linux-btrfs@vger.kernel.org
Subject: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?

Hi Cedric,

On Wed, Feb 03, 2021 at 08:33:18PM +0100,   wrote:
> it's called "dm-integrity", as mentioned in this e-mail:
> https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg93037.html


If you do this it would be very interesting to see performance
figures for the following setups:

- btrfs with raid1 meta and data allocation
- mdadm raid1 on raw devices
- mdadm raid1 on dm-integrity (no encryption) on raw devices
- mdadm raid1 on dm-integrity (encryption) on raw devices

just to see what kind of performance loss dm-integrity and
encryption is going to impose.

After doing it, it would find a nice home on the Linux RAID wiki:

    https://raid.wiki.kernel.org/index.php/Dm-integrity

Cheers,
Andy

Hey Andy,

I would rather see performance figures for these setups:
A) btrfs with 2 (or more) hard drives and one SSD in writeback bcache configuration (unsafe against failure of the SSD):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/sdk1         |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

B) btrfs with 2 (or more) hard drives and two SSDs in mdadm raid 1 writeback bcache configuration (unsafe against corruption of any of the SSDs):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/dm0          |
+--------------+--------------+
| 2x SSD in mdadm raid 1      |
| /dev/sdk1       /dev/sdl1   |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

C) Full stack: btrfs with 2 (or more) hard drives and two identical SSDs in mdadm raid 1 with dm-integrity writeback bcache configuration (safe against any failed drive):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/dm0          |
+--------------+--------------+
| 2 x dm-integrity devices    |
| in mdadm raid 1             |
+--------------+--------------+
| SSD hosting  | SSD hosting  |
| dm-integrity | dm-integrity |
| /dev/sdk1    | /dev/sdl1    |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

D) Full stack: btrfs with 2 (or more) hard drives and two SSDs (one slow and one very fast) in mdadm raid 1 with dm-integrity writeback bcache configuration (safe against any failed drive):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/dm0          |
+--------------+--------------+
| 2 x dm-integrity devices    |
| in mdadm raid 1             |
+--------------+--------------+
| SSD hosting  | SSD hosting  |
| dm-integrity | dm-integrity |
| /dev/sdk1    | /dev/sdl1    |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

In all these setups, the performance of the hard drives is largely irrelevant, because the speed of the setups comes from the bcache SSD.
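
For concreteness, setup A could be assembled roughly like this (a sketch; device names and the cache-set UUID are placeholders, and make-bcache ships with bcache-tools):

```shell
# Register the two hard drives as backing devices and the SSD as the cache:
make-bcache -B /dev/sdv1
make-bcache -B /dev/sdw1
make-bcache -C /dev/sdk1

# Attach both backing devices to the cache set
# (substitute the cset UUID printed by: bcache-super-show /dev/sdk1):
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo <cset-uuid> > /sys/block/bcache1/bcache/attach

# Switch from the default writethrough to writeback caching:
echo writeback > /sys/block/bcache0/bcache/cache_mode
echo writeback > /sys/block/bcache1/bcache/cache_mode

# Finally, put btrfs raid 1 on top of the two bcache devices:
mkfs.btrfs -d raid1 -m raid1 /dev/bcache0 /dev/bcache1
```

Setups B through D would differ only in what sits underneath the cache device: an mdadm raid 1 of SSDs, optionally with dm-integrity under each leg.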

Cheers,
Cedric




^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-04 17:43     `  
@ 2021-02-04 18:13       ` Goffredo Baroncelli
  2021-02-04 19:58         `  
  0 siblings, 1 reply; 8+ messages in thread
From: Goffredo Baroncelli @ 2021-02-04 18:13 UTC (permalink / raw)
  To: Cedric.dewijs, Andy Smith; +Cc: linux-btrfs, linux-raid

[...]
> Hey Andy,
> 
> I would rather see performance figures for these setups:
> A) btrfs with 2 (or more) hard drives and one SSD in writeback bcache configuration (unsafe against failure of the ssd):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/sdk1         |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+

Doing that, you lose the protection of raid1 redundancy: now there is a single point of failure, /dev/sdk1. Writeback is even more dangerous...

> 
> B) btrfs with 2 (or more) hard drives and two SSD's in dm-raid 1 writeback bcache configuration (unsafe against corruption of any of the ssd's):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/dm0          |
> +--------------+--------------+
> | 2x SSD in mdadm raid 1      |
> | /dev/sdk1       /dev/sdl1   |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+
> 
> C) Full stack: btrfs with 2 (or more) hard drives and two identical SSD's in dm-raid 1 with dm-integrity writeback bcache configuration (safe against any failed drive):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/dm0          |
> +--------------+--------------+
> | 2 x dm-integrity devices    |
> | in mdadm raid 1             |
> +--------------+--------------+
> | SSD hosting  | SSD hosting  |
> | dm-integrity | dm-integrity |
> | /dev/sdk1    | /dev/sdl1    |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+
> 
> D) Full stack: btrfs with 2 (or more) hard drives and two SSD's (one slow, and one very fast) in dm-raid 1 with dm-integrity writeback bcache configuration (safe against any failed drive):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/dm0          |
> +--------------+--------------+
> | 2 x dm-integrity devices    |
> | in mdadm raid 1             |
> +--------------+--------------+
> | SSD hosting  | SSD hosting  |
> | dm-integrity | dm-integrity |
> | /dev/sdk1    | /dev/sdl1    |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+
> 
> In all these setups, the performance of the hard drives is irrelevant, because the speed of the setups comes from the bcache SSD.
> 
> Cheers,
> Cedric
> 
> 


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-04 18:13       ` Goffredo Baroncelli
@ 2021-02-04 19:58         `  
  0 siblings, 0 replies; 8+ messages in thread
From:   @ 2021-02-04 19:58 UTC (permalink / raw)
  To: kreijack; +Cc: andy, linux-btrfs, linux-raid


--- Original Message ---
From: Goffredo Baroncelli <kreijack@libero.it>
Date: 04.02.2021 19:13:50
To: Cedric.dewijs@eclipso.eu, Andy Smith <andy@strugglers.net>
Subject: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?

[...]
> Hey Andy,
> 
> I would rather see performance figures for these setups:
> A) btrfs with 2 (or more) hard drives and one SSD in writeback bcache
> configuration (unsafe against failure of the ssd):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/sdk1         |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+

Doing that, you lose the protection of raid1 redundancy: now there is a
single point of failure /dev/sdk1. Writeback is even more dangerous...


Not really. If bcache is set to read cache, the SSD can die at any moment without btrfs losing any data, because all written data has gone straight to the hard drives. I have not tried this scenario, but I would be very surprised if reading the data from /mnt were interrupted for longer than a few seconds if the data cable from the SSD were pulled while data is written by another process.

You are correct about the writeback cache: if /dev/sdk1 dies, all dirty data is lost, and even worse, both copies of the btrfs data are side by side on only the SSD. (But I already mentioned this in my previous mail: "unsafe against failure of the ssd".)
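
The difference between the two cache modes can be sketched with a toy model (illustrative only; "a" and "b" stand for written blocks):

```shell
# run <mode>: simulate writes through a cache, then kill the cache
# before any flush, and print what survives on the backing device.
run() {
  mode=$1
  cache=""
  backing=""
  for block in a b; do
    cache="$cache$block"
    if [ "$mode" = writethrough ]; then
      backing="$backing$block"   # writethrough persists each write immediately
    fi                            # writeback leaves the block dirty in cache only
  done
  cache=""                        # the SSD dies before any flush
  echo "$backing"
}

[ "$(run writethrough)" = "ab" ] && echo "writethrough: nothing lost"
[ "$(run writeback)" = "" ]      && echo "writeback: all dirty data gone"
```

This is the whole trade-off: writeback buys write latency at the cost of making the cache device a holder of the only copy of recent data.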

Cheers,
Cedric




^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-02-04 20:03 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-03 19:04 put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?  
2021-02-03 19:33 `  
2021-02-04 10:54   ` Andy Smith
2021-02-04 17:43     `  
2021-02-04 18:13       ` Goffredo Baroncelli
2021-02-04 19:58         `  
2021-02-03 20:23 ` Andrew Luke Nesbit
2021-02-04  6:57   ` Forza

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.