From: Chris Murphy <lists@colorremedies.com>
To: Juan Alberto Cirez <jacirez@rdcsafety.com>
Cc: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Add device while rebalancing
Date: Tue, 26 Apr 2016 18:58:06 -0600	[thread overview]
Message-ID: <CAJCQCtQbCbR9V7z4jZCejbKLJyhBbtrZJmcQBkX=VnxReBf46g@mail.gmail.com> (raw)
In-Reply-To: <CAHaPQf39H-JRhQmCssmgJ98RCxL_36poE_kObAmgmH6nkn4xoA@mail.gmail.com>

On Tue, Apr 26, 2016 at 5:44 AM, Juan Alberto Cirez
<jacirez@rdcsafety.com> wrote:
> Well,
> RAID1 offers no parity, striping, or spanning of disk space across
> multiple disks.

Btrfs raid1 does span, although the whole thing is typically called
the "volume", or a "pool" in ZFS-like terminology. E.g. ten 2TiB disks
will get you a single volume on which you can store about 10TiB of
data, with two copies of everything (each copy is called a stripe in
Btrfs). In effect, the way chunk replication works makes it a
concat+raid1.
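
To make the concat+raid1 behavior concrete, here's a rough Python
model of my own (a simplification, not the actual allocator): each
raid1 chunk puts one copy on each of the two devices with the most
unallocated space, so usable space comes out to roughly half the raw
total, as long as no single device is bigger than all the others
combined.

# Rough model of Btrfs raid1 chunk allocation: every chunk is stored
# twice ("two stripes"), each copy on one of the two devices that
# currently have the most unallocated space.  Sizes in GiB.
def raid1_usable(device_sizes, chunk=1):
    free = list(device_sizes)
    usable = 0
    while True:
        # the two devices with the most free space
        a, b = sorted(range(len(free)), key=lambda i: free[i])[-2:]
        if free[a] < chunk or free[b] < chunk:
            break
        free[a] -= chunk
        free[b] -= chunk
        usable += chunk      # one chunk of data, stored twice
    return usable

# Ten 2TiB (2048 GiB) disks -> roughly 10TiB usable:
print(raid1_usable([2048] * 10))   # 10240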


> RAID10 configuration, on the other hand, requires a minimum of four
> HDD, but it stripes data across mirrored pairs. As long as one disk in
> each mirrored pair is functional, data can be retrieved.

Not Btrfs raid10. It's not the devices that are mirrored pairs, but
rather the chunks, and there's no way to control or determine which
devices the pairs end up on. If you lose more than one device, you're
certain to get at least a partial failure of the volume (data for
sure, and likely metadata too if it's also using the raid10 profile),
so planning-wise you have to assume you lose the entire array.
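
To illustrate why, here's a toy model I made up (not the real
allocator): if every chunk picks its own pairing of devices, then
after enough chunks have been written, nearly every possible pair of
devices is the mirror pair for some stripe, so losing any two devices
has to be treated as losing data.

# Toy model: each chunk pairs up the devices on its own, so the
# "mirrored pairs" differ from chunk to chunk.  Not the real
# allocator, just an illustration.
import itertools, random

def mirror_pairs_seen(n_devices, n_chunks, seed=0):
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_chunks):
        devs = list(range(n_devices))
        rng.shuffle(devs)
        # consecutive devices hold the two copies of one stripe
        for i in range(0, n_devices - 1, 2):
            seen.add(frozenset(devs[i:i + 2]))
    return seen

pairs = mirror_pairs_seen(n_devices=6, n_chunks=200)
total = len(list(itertools.combinations(range(6), 2)))
print("%d of %d possible device pairs hold both copies of a stripe"
      % (len(pairs), total))
# With enough chunks that approaches all of them, so any 2-device
# failure should be assumed fatal.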



>
> With GlusterFS as a distributed volume, the files are already spread
> among the servers causing file I/O to be spread fairly evenly among
> them as well, thus probably providing the benefit one might expect
> with stripe (RAID10).

Yes, the raid1 in Btrfs is just so you don't have to rebuild volumes
if you lose a drive. But since Btrfs raid1 is not n-way copies and
only ever means two copies, you don't really want the file systems
getting that big, or you increase the chances of a double failure.
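
A rough, made-up-numbers illustration of that: treat the volume as
lost whenever any two devices fail in the same window, and watch how
fast the odds climb as the device count grows.

# Back-of-the-envelope only: assume a two-copy volume is lost whenever
# 2 or more of its n devices fail in the same window, with a guessed
# per-device failure probability p for that window.
def p_two_or_more(n, p):
    # 1 - P(no failures) - P(exactly one failure)
    return 1 - (1 - p)**n - n * p * (1 - p)**(n - 1)

for n in (4, 10, 20, 40):
    print(n, "devices:", round(p_two_or_more(n, p=0.03), 4))
# For small p this grows roughly as n^2, which is the argument for
# keeping a two-copy filesystem from getting too big.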

I've always thought it'd be neat, in a Btrfs + GlusterFS setup, if
Btrfs could inform GlusterFS of missing or corrupt files and then drop
its references to those files, instead of either rebuilding or
remaining degraded, and let GlusterFS deal with re-replicating those
files to maintain redundancy. I.e. the Btrfs volumes would use the
single profile for data and raid1 for metadata. Once there's n-way
raid1, each drive could hold a copy of the file system metadata, so
the volume would tolerate in effect n-1 drive failures: it could still
inform Gluster (or Ceph) of the missing data, the file system would
remain valid and only briefly degraded, and it could still be expanded
when new drives become available.
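
None of those hooks exist today, but as a sketch of the flow I mean
(every name below is invented purely for illustration):

# Purely hypothetical: neither Btrfs nor GlusterFS has these hooks,
# and every function name here is made up for illustration.
def handle_device_loss(fs, cluster):
    # With data=single and metadata=raid1, the filesystem survives the
    # device loss, but some files lose their only data copy.
    lost = fs.list_files_with_missing_extents()    # hypothetical

    # Tell the distributed layer which replicas are gone and let it
    # re-replicate them on other bricks/nodes.
    cluster.mark_replicas_lost(lost)               # hypothetical
    cluster.schedule_reheal(lost)                  # hypothetical

    # Drop the dangling references so the volume is clean again,
    # rather than rebuilding locally or staying degraded.
    fs.drop_references(lost)                       # hypothetical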

I'm not a big fan of hot (or cold) spares. They contribute nothing,
but take up physical space and power.




-- 
Chris Murphy

Thread overview: 21+ messages
2016-04-22 20:36 Add device while rebalancing Juan Alberto Cirez
2016-04-23  5:38 ` Duncan
2016-04-25 11:18   ` Austin S. Hemmelgarn
2016-04-25 12:43     ` Duncan
2016-04-25 13:02       ` Austin S. Hemmelgarn
2016-04-26 10:50         ` Juan Alberto Cirez
2016-04-26 11:11           ` Austin S. Hemmelgarn
2016-04-26 11:44             ` Juan Alberto Cirez
2016-04-26 12:04               ` Austin S. Hemmelgarn
2016-04-26 12:14                 ` Juan Alberto Cirez
2016-04-26 12:44                   ` Austin S. Hemmelgarn
2016-04-27  0:58               ` Chris Murphy [this message]
2016-04-27 10:37                 ` Duncan
2016-04-27 11:22                 ` Austin S. Hemmelgarn
2016-04-27 15:58                   ` Juan Alberto Cirez
2016-04-27 16:29                     ` Holger Hoffstätte
2016-04-27 16:38                       ` Juan Alberto Cirez
2016-04-27 16:40                         ` Juan Alberto Cirez
2016-04-27 17:23                           ` Holger Hoffstätte
2016-04-27 23:19                   ` Chris Murphy
2016-04-28 11:21                     ` Austin S. Hemmelgarn
