From: GWB <gwb@2realms.com>
To: ct@flyingcircus.io, Linux fs Btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Shrinking a device - performance?
Date: Thu, 30 Mar 2017 20:00:22 -0500	[thread overview]
Message-ID: <CAP8EXU2ufgLWL+kgKtaLjqHDBGV1OB8GWy+efCJf4Knb0DaEPQ@mail.gmail.com> (raw)
In-Reply-To: <67132222-17c3-b198-70c1-c3ae0c1cb8e7@siedziba.pl>

Hello, Christian,

I very much enjoyed the discussion you sparked with your original
post.  My experience with btrfs is very limited, much less than that
of the others who have replied here, so this may not be much help.

Let us assume that you have been able to shrink the device to the size
you need, and you are now merrily on your way to moving the data to
XFS.  If so, ignore this email, delete, whatever, and read no further.

If that is not the case, perhaps try something like the following.

Could you first try deduplicating the btrfs volume?  The page below is
probably out of date, but you could try one of the tools it lists:

https://btrfs.wiki.kernel.org/index.php/Deduplication
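
For example, with one of the out-of-band tools listed there, such as
duperemove, a rough (untested) invocation might look like this, with
/mnt/data standing in for wherever the volume is mounted and the
hashfile path just an example:

  # scan recursively, keep the hash database on disk rather than in RAM,
  # and submit duplicate extents to the kernel for deduplication
  duperemove -d -r -h --hashfile=/var/tmp/dedupe.hash /mnt/data

On 20TB the scan alone will take a long time, so running it one
directory tree at a time might be more practical.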

If that does not work, a longer shot would be to add an intermediate
step: create yet another btrfs volume on the underlying lvm2 device
mapper, turn on compression, run dedup, and do whatever else can
squeeze some extra space out of the data on the current btrfs volume.
You could then copy the files over and see whether you get the results
you need (or try sending the current btrfs volume as a snapshot, but
I'm guessing 20TB is too much for that).
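
Roughly, that intermediate step might look like this; the volume
group, LV names, sizes, and mount points below are all made up, and
compress=zlib is just one possible choice:

  # carve out a new LV and make a compressed btrfs filesystem on it
  lvcreate -L 10T -n btrfs-new yourvg
  mkfs.btrfs /dev/yourvg/btrfs-new
  mount -o compress=zlib /dev/yourvg/btrfs-new /mnt/new

  # copy the data across, preserving hard links, ACLs and xattrs
  rsync -aHAX /mnt/old/ /mnt/new/

You could then dedup the copy (as above) rather than the live volume.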

Once the new btrfs volume on top of lvm2 is complete, you could just
delete the old one, and then transfer the (hopefully compressed and
deduped) data to XFS.
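
That last hop would be more of the same, again with placeholder names:

  # drop the old LV, reuse the space for XFS, and move the data once more
  umount /mnt/old
  lvremove yourvg/btrfs-old
  lvcreate -L 15T -n xfs-data yourvg
  mkfs.xfs /dev/yourvg/xfs-data
  mount /dev/yourvg/xfs-data /mnt/xfs
  rsync -aHAX /mnt/new/ /mnt/xfs/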

Yep, that's probably a lot of work.

I use both btrfs (on root on Ubuntu) and zfs (for data, home), and I
try to do as little as possible with live mounted file systems other
than snapshots.  I avoid sending and receiving snapshots from the live
system (mostly zfs, but sometimes btrfs); instead I write incremental
snapshots as files on the backup disks, and then import the
incremental snaps into a backup pool at night.
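
In zfs terms that nightly routine is roughly the following (pool,
dataset, and snapshot names are placeholders), and btrfs can do the
same with a parent snapshot:

  # write an incremental stream to a file on the backup disk...
  zfs send -i tank/home@monday tank/home@tuesday > /backup/home-mon-tue.zfs
  # ...and import it into the backup pool later
  zfs receive backup/home < /backup/home-mon-tue.zfs

  # btrfs equivalent, using the previous snapshot as the parent
  btrfs send -p /snaps/home-monday /snaps/home-tuesday > /backup/home-mon-tue.btrfs
  btrfs receive /backup/snaps < /backup/home-mon-tue.btrfs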

My recollection is that btrfs handles deduplication differently from
zfs (out-of-band tools rather than inline dedup at write time), but
both can be very, very slow from a human perspective (call that what
you will: a suboptimal relationship between performance and speed).

The advantage you have is that with lvm you can create a number of
different file systems, and lvm can also create snapshots.  I think
both zfs and btrfs have a more "elegant" way of dealing with
snapshots, but lvm allows a file system without that feature to have
it.  Others on the list can tell you about the disadvantages.
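
A classic LVM snapshot of an XFS LV would look roughly like this
(sizes and names are invented; note that the snapshot needs its own
copy-on-write space, which is one of the disadvantages people will
point out):

  # reserve 200G of COW space for a snapshot of the XFS LV
  lvcreate -s -L 200G -n xfs-data-snap yourvg/xfs-data
  # mount it read-only for backup (XFS needs nouuid for a duplicate UUID)
  mount -o ro,nouuid /dev/yourvg/xfs-data-snap /mnt/snap
  # drop it when done
  umount /mnt/snap
  lvremove yourvg/xfs-data-snap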

I would be curious how it turns out for you.  If you are able to move
the data to XFS running on top of lvm, what is your plan for snapshots
in lvm?

Again, I'm not an expert in btrfs, but in most cases a full balance
and scrub takes care of any problems on my root partition; that is a
relatively small partition, though.  A full balance (without any
filters to limit it) and a scrub on 20 TiB must take a very long time
even with robust hardware, would it not?
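
For reference, the commands themselves are simple enough; the usage
filters below are arbitrary examples that keep a balance from
rewriting every chunk, which can make a large difference in runtime:

  # rewrite only data/metadata block groups that are less than half full
  btrfs balance start -dusage=50 -musage=50 /
  # verify checksums across the whole filesystem (-B stays in the foreground)
  btrfs scrub start -B /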

CentOS, Red Hat, and Oracle seem to take the position that very large
data subvolumes using btrfs should work fine.  But I would be curious
what the rest of the list thinks about 20 TiB in one volume/subvolume.

Gordon



On Thu, Mar 30, 2017 at 5:13 PM, Piotr Pawłow <pp@siedziba.pl> wrote:
>> The proposed "move whole chunks" implementation helps only if
>> there are enough unallocated chunks "below the line". If regular
>> 'balance' is done on the filesystem there will be some, but that
>> just spreads the cost of the 'balance' across time, it does not
>> by itself make a «risky, difficult, slow operation» any less so,
>> just spreads the risk, difficulty, slowness across time.
>
> Isn't that too pessimistic? Most of my filesystems have 90+% of free
> space unallocated, even those I never run balance on. For me it wouldn't
> just spread the cost, it would reduce it considerably.


Thread overview: 42+ messages
2017-03-27 11:17 Shrinking a device - performance? Christian Theune
2017-03-27 13:07 ` Hugo Mills
2017-03-27 13:20   ` Christian Theune
2017-03-27 13:24     ` Hugo Mills
2017-03-27 13:46       ` Austin S. Hemmelgarn
2017-03-27 13:50         ` Christian Theune
2017-03-27 13:54           ` Christian Theune
2017-03-27 14:17             ` Austin S. Hemmelgarn
2017-03-27 14:49               ` Christian Theune
2017-03-27 15:06                 ` Roman Mamedov
2017-04-01  9:05                   ` Kai Krakow
2017-03-27 14:14           ` Austin S. Hemmelgarn
2017-03-27 14:48     ` Roman Mamedov
2017-03-27 14:53       ` Christian Theune
2017-03-28 14:43         ` Peter Grandi
2017-03-28 14:50           ` Tomasz Kusmierz
2017-03-28 15:06             ` Peter Grandi
2017-03-28 15:35               ` Tomasz Kusmierz
2017-03-28 16:20                 ` Peter Grandi
2017-03-28 14:59           ` Peter Grandi
2017-03-28 15:20             ` Peter Grandi
2017-03-28 15:56           ` Austin S. Hemmelgarn
2017-03-30 15:55             ` Peter Grandi
2017-03-31 12:41               ` Austin S. Hemmelgarn
2017-03-31 17:25                 ` Peter Grandi
2017-03-31 19:38                   ` GWB
2017-03-31 20:27                     ` Peter Grandi
2017-04-01  0:02                       ` GWB
2017-04-01  2:42                         ` Duncan
2017-04-01  4:26                           ` GWB
2017-04-01 11:30                             ` Peter Grandi
2017-03-30 15:00           ` Piotr Pawłow
2017-03-30 16:13             ` Peter Grandi
2017-03-30 22:13               ` Piotr Pawłow
2017-03-31  1:00                 ` GWB [this message]
2017-03-31  5:26                   ` Duncan
2017-03-31  5:38                     ` Duncan
2017-03-31 12:37                       ` Peter Grandi
2017-03-31 11:37                   ` Peter Grandi
2017-03-31 10:51                 ` Peter Grandi
2017-03-27 11:51 Christian Theune
2017-03-27 12:55 ` Christian Theune
