linux-btrfs.vger.kernel.org archive mirror
From: "Stéphane Lesimple" <stephane_btrfs2@lesimple.fr>
To: Cedric.dewijs@eclipso.eu, "Qu Wenruo" <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Raid1 of a slow hdd and a fast(er) SSD, howto to prioritize the SSD?
Date: Tue, 05 Jan 2021 19:19:38 +0000	[thread overview]
Message-ID: <ee715f644c3024a97efccbda2c8b22d2@lesimple.fr> (raw)
In-Reply-To: <8af92960a09701579b6bcbb9b51489cc@mail.eclipso.de>

January 5, 2021 7:20 PM, Cedric.dewijs@eclipso.eu wrote:

>>> I was expecting btrfs to do almost all reads from the fast SSD, as both
>>> the data and the metadata are on that drive, so the slow hdd is only really
>>> needed when there's a bitflip on the SSD and the data has to be reconstructed.
> 
>> IIRC there will be some read policy feature to do that, but it's not yet
>> merged, and even once merged, you'd still need to manually specify the
>> priority, as there is no way for btrfs to know which drive is faster
>> (except the non-rotational bit, which is not reliable at all).
> 
> Manually specifying the priority drive would be a big step in the right direction. Maybe btrfs
> could get a routine that benchmarks the sequential and random read and write speeds of the drives
> at (for instance) mount time, or when triggered by an administrator? This could give misleading
> results if btrfs doesn't have the whole drive to itself.
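
A rough stand-in for such a benchmark can already be scripted by hand. This is only a sketch: it measures buffered sequential reads (a real benchmark would use O_DIRECT or drop the page cache between runs), and the paths are illustrative:

```shell
# Crude per-filesystem sequential-read probe. Writes a 16 MiB test file,
# reads it back, and prints dd's throughput line.
bench_read() {
    dir="$1"
    dd if=/dev/zero of="$dir/probe.$$" bs=1M count=16 conv=fsync 2>/dev/null
    dd if="$dir/probe.$$" of=/dev/null bs=1M 2>&1 | tail -n 1
    rm -f "$dir/probe.$$"
}
# e.g.:  bench_read /mnt/ssd ; bench_read /mnt/hdd
```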
> 
>>> Writing has to be done to both drives of course, but I don't expect
>>> slowdowns from that, as the system RAM should cache that.
> 
>> Writes can still slow down the system even if you have tons of memory.
>> Operations like fsync() or sync() will still wait for writeback;
>> thus in your case, they will also be slowed by the HDD no matter what.
>> 
>> In fact, on a real-world desktop, most writes come from sometimes
>> unnecessary fsync() calls.
>> 
>> To get rid of such slowdowns, you have to do something dangerous and
>> disable barriers, which is never a safe idea.
> 
> I suggest a middle ground, where btrfs returns from fsync() when one of the copies (instead of all
> the copies) of the data has been written completely to disk. This poses a small data risk, as it
> creates moments when there's only one copy of the data on disk, while the software above btrfs
> thinks all data has been written to two disks. One problem I see: if the server is told to shut
> down while there's a big backlog of data to be written to the slow drive, while the fast drive is
> already done, then the server could cut the power while the slow drive is still being written.
> 
> I think this setting should be left to the system administrator; it's not a good idea to just
> blindly enable this behavior.
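
The fsync() cost is easy to observe with dd: conv=fsync makes dd call fsync() once after writing, so the reported time includes waiting for writeback on every raid1 member. A quick probe (run it on the raid1 mount; the file name is illustrative):

```shell
# Same 4 KiB write, buffered vs. followed by fsync(); on a HDD+SSD
# raid1 the second timing is dominated by the HDD.
dd if=/dev/zero of=probe.tmp bs=4k count=1 2>&1 | tail -n 1
dd if=/dev/zero of=probe.tmp bs=4k count=1 conv=fsync 2>&1 | tail -n 1
rm -f probe.tmp
```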
> 
>>> Is there a way to tell btrfs to leave the slow hdd alone, and to prioritize
>>> the SSD?
> 
>> Not in upstream kernel for now.


I happen to have written a custom patch for my own use for a similar use case:
I have a bunch of slow drives constituting a raid1 FS of dozens of terabytes,
and just one SSD, reserved only for metadata.

My patch adds an entry under sysfs for each FS so that the admin can select a
"metadata_only" devid. This is optional: if it's not set, the usual btrfs behavior
applies. When set, this device is:
- never considered for new data chunks allocation
- preferred for new metadata chunk allocations
- preferred for metadata reads

This way I still have raid1, but the metadata chunks on slow drives are only
there for redundancy and never accessed for reads as long as the SSD metadata
is valid.

This *drastically* improved my snapshot rotation, and even made qgroups usable
again. I think I've been running this for 1-2 years, but obviously I'd love to see
such an option in the vanilla kernel so that I can get rid of my hacky patch :)
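
For the curious, the knob looks roughly like this. The attribute name "metadata_only_devid" is from my out-of-tree patch and does not exist upstream, so treat it as illustrative; the sysfs root is a parameter only so the sketch can be exercised against a fake tree:

```shell
# Write the chosen devid into a per-filesystem sysfs attribute.
# "metadata_only_devid" is a hypothetical (out-of-tree) file name.
set_metadata_only() {
    sysfs_root="$1"; fsid="$2"; devid="$3"
    echo "$devid" > "$sysfs_root/fs/btrfs/$fsid/metadata_only_devid"
}
# Real use (with the patch applied, as root) would be something like:
#   set_metadata_only /sys <filesystem-uuid> 2
```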

>> 
>> Thus I guess you need something like bcache to do this.
> 
> Agreed. However, one of the problems of bcache is that it can't use two SSDs in mirrored mode to
> form a writeback cache in front of many spindles, so this structure is impossible:
> +--------------+--------------+--------------+--------------+--------------+--------------+
> |                              btrfs raid 1 (2 copies) /mnt                               |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> | /dev/bcache0 | /dev/bcache1 | /dev/bcache2 | /dev/bcache3 | /dev/bcache4 | /dev/bcache5 |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> |                         Write Cache (2xSSD in raid 1, mirrored)                         |
> |                                /dev/sda2 and /dev/sda3                                  |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> |     Data     |     Data     |     Data     |     Data     |     Data     |     Data     |
> |  /dev/sda9   |  /dev/sda10  |  /dev/sda11  |  /dev/sda12  |  /dev/sda13  |  /dev/sda14  |
> +--------------+--------------+--------------+--------------+--------------+--------------+
> 
> In order to get a system that loses no data if a drive fails, the user either has to live with
> only a read cache, or has to put a separate writeback cache in front of each spindle, like
> this:
> +--------------+--------------+--------------+--------------+
> |                btrfs raid 1 (2 copies) /mnt               |
> +--------------+--------------+--------------+--------------+
> | /dev/bcache0 | /dev/bcache1 | /dev/bcache2 | /dev/bcache3 |
> +--------------+--------------+--------------+--------------+
> | Write Cache  | Write Cache  | Write Cache  | Write Cache  |
> |(Flash Drive) |(Flash Drive) |(Flash Drive) |(Flash Drive) |
> |  /dev/sda5   |  /dev/sda6   |  /dev/sda7   |  /dev/sda8   |
> +--------------+--------------+--------------+--------------+
> |     Data     |     Data     |     Data     |     Data     |
> |  /dev/sda9   |  /dev/sda10  |  /dev/sda11  |  /dev/sda12  |
> +--------------+--------------+--------------+--------------+
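
The per-spindle layout above can at least be scripted: make-bcache -C creates a cache device and -B a backing device. Device names are illustrative, and since the commands need root and real disks, this sketch only prints them unless DRY_RUN=0:

```shell
# Print (or, with DRY_RUN=0, execute) the commands for one writeback
# cache per spindle, matching the diagram above.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }
for i in 0 1 2 3; do
    run make-bcache -C "/dev/sda$((5 + i))"   # cache partition
    run make-bcache -B "/dev/sda$((9 + i))"   # backing spindle
done
# After creation, each bcacheN is attached to its cache set and switched
# to writeback mode via sysfs, e.g.:
#   echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
#   echo writeback > /sys/block/bcache0/bcache/cache_mode
```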
> 
> In the mainline kernel it's impossible to put a bcache on top of another bcache, so a user does
> not have the option of four small write caches below one big, fast read cache, like this:
> +--------------+--------------+--------------+--------------+
> |                btrfs raid 1 (2 copies) /mnt               |
> +--------------+--------------+--------------+--------------+
> | /dev/bcache4 | /dev/bcache5 | /dev/bcache6 | /dev/bcache7 |
> +--------------+--------------+--------------+--------------+
> |                      Read Cache (SSD)                     |
> |                         /dev/sda4                         |
> +--------------+--------------+--------------+--------------+
> | /dev/bcache0 | /dev/bcache1 | /dev/bcache2 | /dev/bcache3 |
> +--------------+--------------+--------------+--------------+
> | Write Cache  | Write Cache  | Write Cache  | Write Cache  |
> |(Flash Drive) |(Flash Drive) |(Flash Drive) |(Flash Drive) |
> |  /dev/sda5   |  /dev/sda6   |  /dev/sda7   |  /dev/sda8   |
> +--------------+--------------+--------------+--------------+
> |     Data     |     Data     |     Data     |     Data     |
> |  /dev/sda9   |  /dev/sda10  |  /dev/sda11  |  /dev/sda12  |
> +--------------+--------------+--------------+--------------+
> 
>> Thanks,
>> Qu
> 
> Thank you,
> Cedric
> 

