Linux-Raid Archives on lore.kernel.org
* Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
       [not found] ` <a2cd87208a74fb36224539fa10727066@mail.eclipso.de>
@ 2021-02-04 10:54   ` Andy Smith
  2021-02-04 17:43     `  
  0 siblings, 1 reply; 4+ messages in thread
From: Andy Smith @ 2021-02-04 10:54 UTC (permalink / raw)
  To: linux-btrfs; +Cc: linux-raid

Hi Cedric,

On Wed, Feb 03, 2021 at 08:33:18PM +0100,   wrote:
> it's called "dm-integrity", as mentioned in this e-mail:
> https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg93037.html

If you do this it would be very interesting to see performance
figures for the following setups:

- btrfs with raid1 meta and data allocation
- mdadm raid1 on raw devices
- mdadm raid1 on dm-integrity (no encryption) on raw devices
- mdadm raid1 on dm-integrity (encryption) on raw devices

just to see what kind of performance loss dm-integrity and
encryption is going to impose.

After doing it, it would find a nice home on the Linux RAID wiki:

    https://raid.wiki.kernel.org/index.php/Dm-integrity
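A benchmark along those lines could be set up roughly like this (a sketch only; the device names /dev/sda1 and /dev/sdb1, the mapper names, and the fio job are placeholders, not a tested recipe):

```shell
# Format each member partition with standalone dm-integrity (no encryption).
integritysetup format /dev/sda1
integritysetup format /dev/sdb1
integritysetup open /dev/sda1 int-a
integritysetup open /dev/sdb1 int-b

# mdadm raid1 on top of the two integrity devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/mapper/int-a /dev/mapper/int-b

# For the baseline numbers, a comparable array on the raw devices:
#   mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# One possible measurement with fio (sequential write, direct I/O):
fio --name=seqwrite --filename=/dev/md0 --rw=write --bs=1M \
    --size=4G --direct=1 --ioengine=libaio --iodepth=16
```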

Cheers,
Andy


* Re: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-04 10:54   ` put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called? Andy Smith
@ 2021-02-04 17:43     `  
  2021-02-04 18:13       ` Goffredo Baroncelli
  0 siblings, 1 reply; 4+ messages in thread
From:   @ 2021-02-04 17:43 UTC (permalink / raw)
  To: Andy Smith; +Cc: linux-btrfs, linux-raid


--- Original message ---
From: Andy Smith <andy@strugglers.net>
Date: 04.02.2021 11:54:57
To: linux-btrfs@vger.kernel.org
Subject: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs  does, what's that called?

Hi Cedric,

On Wed, Feb 03, 2021 at 08:33:18PM +0100,   wrote:
> it's called "dm-integrity", as mentioned in this e-mail:
> https://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg93037.html


If you do this it would be very interesting to see performance
figures for the following setups:

- btrfs with raid1 meta and data allocation
- mdadm raid1 on raw devices
- mdadm raid1 on dm-integrity (no encryption) on raw devices
- mdadm raid1 on dm-integrity (encryption) on raw devices

just to see what kind of performance loss dm-integrity and
encryption is going to impose.

After doing it, it would find a nice home on the Linux RAID wiki:

    https://raid.wiki.kernel.org/index.php/Dm-integrity

Cheers,
Andy

Hey Andy,

I would rather see performance figures for these setups:
A) btrfs with 2 (or more) hard drives and one SSD in writeback bcache configuration (unsafe against failure of the SSD):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/sdk1         |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

B) btrfs with 2 (or more) hard drives and two SSDs in mdadm raid 1 writeback bcache configuration (unsafe against corruption of either SSD):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/dm0          |
+--------------+--------------+
| 2x SSD in mdadm raid 1      |
| /dev/sdk1       /dev/sdl1   |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

C) Full stack: btrfs with 2 (or more) hard drives and two identical SSDs in mdadm raid 1 with dm-integrity, in writeback bcache configuration (safe against any single failed drive):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/dm0          |
+--------------+--------------+
| 2 x dm-integrity devices    |
| in mdadm raid 1             |
+--------------+--------------+
| SSD hosting  | SSD hosting  |
| dm-integrity | dm-integrity |
| /dev/sdk1    | /dev/sdl1    |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

D) Full stack: btrfs with 2 (or more) hard drives and two SSDs (one slow, one very fast) in mdadm raid 1 with dm-integrity, in writeback bcache configuration (safe against any single failed drive):
+-----------------------------+
|      btrfs raid 1 /mnt      |
+--------------+--------------+
| /dev/Bcache0 | /dev/Bcache1 |
+--------------+--------------+
|   bcache writeback Cache    |
|           /dev/dm0          |
+--------------+--------------+
| 2 x dm-integrity devices    |
| in mdadm raid 1             |
+--------------+--------------+
| SSD hosting  | SSD hosting  |
| dm-integrity | dm-integrity |
| /dev/sdk1    | /dev/sdl1    |
+--------------+--------------+
| Data         | Data         |
| /dev/sdv1    | /dev/sdw1    |
+--------------+--------------+

In all these setups, the performance of the hard drives is largely irrelevant, because the speed comes from the bcache SSD.
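For reference, stack C could be assembled roughly like this (a sketch only; all device names are placeholders matching the diagrams, <cset-uuid> must be filled in by hand, and the commands are untested):

```shell
# dm-integrity on each SSD partition (standalone mode, no encryption).
integritysetup format /dev/sdk1
integritysetup format /dev/sdl1
integritysetup open /dev/sdk1 int-k
integritysetup open /dev/sdl1 int-l

# mdadm raid1 across the two integrity devices -> the cache device.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/mapper/int-k /dev/mapper/int-l

# Register the md array as a bcache cache set, and the two
# hard drive partitions as backing devices.
make-bcache -C /dev/md0
make-bcache -B /dev/sdv1
make-bcache -B /dev/sdw1

# Attach both backing devices to the cache set (get the cache set
# UUID from bcache-super-show), then switch them to writeback.
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo <cset-uuid> > /sys/block/bcache1/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
echo writeback > /sys/block/bcache1/bcache/cache_mode

# Finally, btrfs raid1 (data and metadata) on the two bcache devices.
mkfs.btrfs -d raid1 -m raid1 /dev/bcache0 /dev/bcache1
mount /dev/bcache0 /mnt
```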

Cheers,
Cedric

---

Take your mailboxes with you. Free, fast and secure Mail & Cloud: https://www.eclipso.eu - Time to change!




* Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-04 17:43     `  
@ 2021-02-04 18:13       ` Goffredo Baroncelli
  2021-02-04 19:58         `  
  0 siblings, 1 reply; 4+ messages in thread
From: Goffredo Baroncelli @ 2021-02-04 18:13 UTC (permalink / raw)
  To: Cedric.dewijs, Andy Smith; +Cc: linux-btrfs, linux-raid

[...]
> Hey Andy,
> 
> I would rather see performance figures for these setups:
> A) btrfs with 2 (or more) hard drives and one SSD in writeback bcache configuration (unsafe against failure of the ssd):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/sdk1         |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+

Doing that, you lose the protection of raid1 redundancy: now there is a single point of failure, /dev/sdk1. Writeback is even more dangerous...

> 
> B) btrfs with 2 (or more) hard drives and two SSD's in dm-raid 1 writeback bcache configuration (unsafe against corruption of any of the ssd's):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/dm0          |
> +--------------+--------------+
> | 2x SSD in mdadm raid 1      |
> | /dev/sdk1       /dev/sdl1   |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+
> 
> C) Full stack: btrfs with 2 (or more) hard drives and two identical SSD's in dm-raid 1 with dm-integrity writeback bcache configuration (safe against any failed drive):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/dm0          |
> +--------------+--------------+
> | 2 x dm-integrity devices    |
> | in mdadm raid 1             |
> +--------------+--------------+
> | SSD hosting  | SSD hosting  |
> | dm-integrity | dm-integrity |
> | /dev/sdk1    | /dev/sdl1    |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+
> 
> D) Full stack: btrfs with 2 (or more) hard drives and two SSD's (one slow, and one very fast) in dm-raid 1 with dm-integrity writeback bcache configuration (safe against any failed drive):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/dm0          |
> +--------------+--------------+
> | 2 x dm-integrity devices    |
> | in mdadm raid 1             |
> +--------------+--------------+
> | SSD hosting  | SSD hosting  |
> | dm-integrity | dm-integrity |
> | /dev/sdk1    | /dev/sdl1    |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+
> 
> In all these setups, the performance of the hard drives is irrelevant, because the speed of the setups comes from the bcache SSD.
> 
> Cheers,
> Cedric
> 


-- 
gpg @keyserver.linux.it: Goffredo Baroncelli <kreijackATinwind.it>
Key fingerprint BBF5 1610 0B64 DAC6 5F7D  17B2 0EDA 9B37 8B82 E0B5


* Re: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs does, what's that called?
  2021-02-04 18:13       ` Goffredo Baroncelli
@ 2021-02-04 19:58         `  
  0 siblings, 0 replies; 4+ messages in thread
From:   @ 2021-02-04 19:58 UTC (permalink / raw)
  To: kreijack; +Cc: andy, linux-btrfs, linux-raid


--- Original message ---
From: Goffredo Baroncelli <kreijack@libero.it>
Date: 04.02.2021 19:13:50
To: Cedric.dewijs@eclipso.eu, Andy Smith <andy@strugglers.net>
Subject: Re: put 2 hard drives in mdadm raid 1 and detect bitrot like btrfs  does, what's that called?

[...]
> Hey Andy,
> 
> I would rather see performance figures for these setups:
> A) btrfs with 2 (or more) hard drives and one SSD in writeback bcache
configuration (unsafe against failure of the ssd):
> +-----------------------------+
> |      btrfs raid 1 /mnt      |
> +--------------+--------------+
> | /dev/Bcache0 | /dev/Bcache1 |
> +--------------+--------------+
> |   bcache writeback Cache    |
> |           /dev/sdk1         |
> +--------------+--------------+
> | Data         | Data         |
> | /dev/sdv1    | /dev/sdw1    |
> +--------------+--------------+

Doing that, you lose the protection of raid1 redundancy: now there is a
single point of failure, /dev/sdk1. Writeback is even more dangerous...


Not really. If bcache is set to read caching, the SSD can die at any moment without btrfs losing any data, because all written data has gone straight to the hard drives. I have not tried this scenario, but I would be very surprised if reads from /mnt were interrupted for more than a few seconds if the SSD's data cable were pulled while another process was writing.

You are correct about the writeback cache: if /dev/sdk1 dies, all dirty data is lost and, even worse, both copies of the btrfs data sit side by side on that single SSD. (But I already mentioned this in my previous mail: "unsafe against failure of the SSD".)
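For completeness, the cache mode that determines this trade-off can be inspected and changed at runtime through bcache's sysfs interface (a sketch; it assumes the backing device shows up as bcache0):

```shell
# Show the available modes; the active one is shown in [brackets].
cat /sys/block/bcache0/bcache/cache_mode

# Writethrough: a write completes only after it reaches both the SSD
# and the hard drive, so a dead cache SSD cannot lose acknowledged data.
echo writethrough > /sys/block/bcache0/bcache/cache_mode

# Writeback: a write completes once it hits the SSD; faster, but dirty
# data is lost if the SSD dies before it is flushed to the hard drive.
echo writeback > /sys/block/bcache0/bcache/cache_mode
```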

Cheers,
Cedric




