From: Eric Wong <e@80x24.org>
To: Chris Murphy <lists@colorremedies.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: raid1 with several old drives and a big new one
Date: Fri, 31 Jul 2020 03:22:12 +0000
Message-ID: <20200731032212.GA21797@dcvr>
In-Reply-To: <CAJCQCtS6fHYGBiHpqAJPu+-EoSzEKZ5YEaj4QjNxqPvO+JTACw@mail.gmail.com>

Chris Murphy <lists@colorremedies.com> wrote:
> On Thu, Jul 30, 2020 at 6:16 PM Eric Wong <e@80x24.org> wrote:
> >
> > Say I have three ancient 2TB HDDs and one new 6TB HDD, is there
> > a way I can ensure one raid1 copy of the data stays on the new
> > 6TB HDD?
> 
> Yes. Use mdadm --level=linear --raid-devices=2 to concatenate the two
> 2TB drives. Or use LVM (linear by default). Leave the 6TB out of this
> regime. And now you have two block devices (one is the concat virtual
> device) to do a raid1 with btrfs, and the 6TB will always get one of
> the raid1 chunks.
> 
> There isn't a way to do this with btrfs alone.

Thanks for the response(s). I was hoping to simplify my stack
by using btrfs alone.
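
For the archives, though, a rough sketch of the layout you
describe, as I understand it (untested, and the device names
are made up):

  # concatenate the old 2TB drives into one linear md device
  # (I have three; adjust --raid-devices to match)
  mdadm --create /dev/md0 --level=linear --raid-devices=3 \
        /dev/sdb /dev/sdc /dev/sdd

  # btrfs raid1 across the concat and the new 6TB drive; with
  # only two block devices, each raid1 chunk puts one copy on
  # each, so one copy always lands on the 6TB
  mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/sde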

> When one of the 2TB fails, there's some likelihood that it'll behave
> like a partially failing device. Some reads and writes will succeed,
> others won't. So you'll need to be prepared strategy wise what to do.
> Ideal scenario is a new 4+TB drive, and use 'btrfs replace' to replace
> the md concat device. Due to the large number of errors possible with
> the 'btrfs replace' you might want to use -r option.

If I went ahead with btrfs alone and were prepared to lose some
(not "all") files, could part of the FS remain usable (and the
rest be restored from slow backups) without involving LVM?
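
(If I did keep the md concat, I take it the replacement step
you describe would look something like this; the devid and the
paths are placeholders:)

  # replace the failing concat (devid 2 here) with a new 4+TB
  # drive; -r reads from the source device only when no other
  # good mirror exists
  btrfs replace start -r 2 /dev/sdf /mnt
  btrfs replace status /mnt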

I could make metadata (and maybe system chunks?) raid1c3 or even
raid1c4, since those chunks seem small enough, and important
enough, to justify extra copies with ancient HW in play.
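
Something like this on an existing FS, if I'm reading the docs
right (raid1c3/raid1c4 need kernel 5.5+; the mount point is a
placeholder):

  # convert metadata and system chunks to three copies;
  # -f is required when converting the system chunk profile
  btrfs balance start -mconvert=raid1c3 -sconvert=raid1c3 -f /mnt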

I mainly wanted raid1 because restoring from backups is slow,
and btrfs would let me grow a single FS without much planning
or having to find identical (or even similar) drives.
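
e.g. growing later ought to be just (with /dev/sdg standing in
for whatever new drive I buy):

  # any size drive works
  btrfs device add /dev/sdg /mnt

  # a full balance restripes existing chunks across all devices
  btrfs balance start /mnt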

> And on second thought...
> 
> You might do some rudimentary read/write benchmarks on all three

<snip>
Not performance critical at all; all of that is on SSD :)
