From: Gian-Carlo Pascutto <gcp@sjeng.org>
To: linux-btrfs@vger.kernel.org
Subject: Re: Big disk space usage difference, even after defrag, on identical data
Date: Mon, 13 Apr 2015 16:06:39 +0200
Message-ID: <552BCD6F.6080509@sjeng.org>
In-Reply-To: <pan$f11f0$7ca65f9$58551aba$5ec77500@cox.net>

On 13-04-15 07:06, Duncan wrote:

>> So what can explain this? Where did the 66G go?
> 
> Out of curiosity, does a balance on the actively used btrfs help?
> 
> You mentioned defrag -v -r -clzo, but didn't use the -f (flush) or -t 
> (minimum size file) options.  Does adding -f -t1 help?

Unfortunately I can no longer try this; see the other reply for why. But
the problem turned out to be some 1G-sized files, written in 3-5 extents,
which for whatever reason defrag was not touching.
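
For what it's worth, the way to spot such files is just to count their
extents; filefrag(8) does that, and underneath it's the FIEMAP ioctl. A
minimal sketch of my own (untested, and not what I actually ran):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
        struct fiemap fm;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* With fm_extent_count == 0 the kernel only fills in
         * fm_mapped_extents, i.e. how many extents the file has. */
        memset(&fm, 0, sizeof(fm));
        fm.fm_length = ~0ULL;
        fm.fm_flags = FIEMAP_FLAG_SYNC;
        if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
                perror("FS_IOC_FIEMAP");
                return 1;
        }
        printf("%s: %u extents\n", argv[1], fm.fm_mapped_extents);
        close(fd);
        return 0;
}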

> You aren't doing btrfs snapshots of either subvolume, are you?

No :-) I should've mentioned that.

> Defrag should force the rewrite of entire files and take care of this, 
> but obviously it's not returning to "clean" state.  I forgot what the 
> default minimum file size is if -t isn't set, maybe 128 MiB?  But a -t1 
> will force it to defrag even small files, and I recall at least one 
> thread here where the poster said it made all the difference for him, so 
> try that.  And the -f should force a filesystem sync afterward, so you 
> know the numbers from any report you run afterward match the final state.

Reading the corresponding manual, the -t explanation says that "any
extent bigger than this size will be considered already defragged". So I
guess setting -t1 might've fixed the problem too...but after checking
the source, I'm not so sure.
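
For context: as far as I can tell, -t is passed straight through as the
extent_thresh field of the defrag range ioctl, so -t1 amounts to roughly
this from userspace (an untested sketch of mine, not the actual
btrfs-progs code):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>

int main(int argc, char **argv)
{
        struct btrfs_ioctl_defrag_range_args range;
        int fd;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        memset(&range, 0, sizeof(range));
        range.len = (__u64)-1;                      /* whole file */
        range.extent_thresh = 1;                    /* the -t1 part */
        range.flags = BTRFS_DEFRAG_RANGE_START_IO;  /* start writeback */

        if (ioctl(fd, BTRFS_IOC_DEFRAG_RANGE, &range) < 0)
                perror("BTRFS_IOC_DEFRAG_RANGE");
        close(fd);
        return 0;
}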

I couldn't find the default for -t in the manpages; after browsing through
the source, it turns out the default is set in the kernel:
https://github.com/torvalds/linux/blob/4f671fe2f9523a1ea206f63fe60a7c7b3a56d5c7/fs/btrfs/ioctl.c#L1268
(Not sure what units those are.)
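
If I'm reading btrfs_defrag_file() around that line correctly, it boils
down to something like this (quoting from memory, so don't trust the
exact form):

        u32 extent_thresh = range->extent_thresh;
        ...
        if (extent_thresh == 0)
                extent_thresh = 256 * 1024;

which would make it a byte count, with a default threshold of 256KiB when
-t isn't given.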

I wonder if this is relevant:
https://github.com/torvalds/linux/blob/4f671fe2f9523a1ea206f63fe60a7c7b3a56d5c7/fs/btrfs/ioctl.c#L2572

This seems to reset the -t threshold whenever compress (-c) is set? That
looks a bit fishy to me.
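
Specifically, the bit in btrfs_ioctl_defrag() looks roughly like this
(again from memory):

        if ((range->flags & BTRFS_DEFRAG_RANGE_COMPRESS)) {
                /* with -c, force writeback and ignore the threshold */
                range->flags |= BTRFS_DEFRAG_RANGE_START_IO;
                range->extent_thresh = (u32)-1;
        }

i.e. as soon as -c is given, whatever threshold came in via -t (or the
default) is overwritten with (u32)-1, so the two options interact in a
non-obvious way.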

> Meanwhile, you may consider using the nocow attribute on those database 
> files.  It will disable compression on them,

I'm using btrfs specifically to get compression, so this isn't an option.

> While initial usage will  be higher due to the lack of compression,
> as you've discovered, over time, on an actively updated database,
> compression isn't all that effective anyway.

I don't see why. If you're referring to the additional overhead of
continuously compressing and decompressing everything - yes, of course.
But in my case I have a mostly-append workload to a huge amount of
fairly compressible data that's on magnetic storage, so compression is a
win in disk space and perhaps even in performance.

I'm well aware of the many caveats in using btrfs for databases -
they're well documented and although I much appreciate your extended
explanation, it wasn't new to me.

It turns out that if your dataset isn't update-heavy (so it doesn't
fragment to begin with), or has to be queried via indexed access (i.e.
mostly via random seeks anyway), the fragmentation doesn't matter much.
On the other hand, btrfs appears to have better sync performance with
multiple threads, and it lets you disable part of the partial-page-write
protection logic in the database (full_page_writes=off for PostgreSQL),
because btrfs's own COW already ensures such torn writes can't actually
happen [1].

The net result is a *boost* from about 40 tps (ext4) to 55 tps (btrfs),
which certainly is contrary to popular wisdom. Maybe btrfs would fall
off eventually as fragmentation gradually sets in, but given that
there's an offline defragmentation tool that can run in the background,
I don't care.

[1] I wouldn't be too surprised if database COW, which consists of
journal-writing a copy of the data out of band, then rewriting it again
in the original place, is actually functionally equivalent to disabling
COW in the database and running btrfs + defrag. Obviously you shouldn't
keep COW enabled in btrfs *AND* the DB, requiring all data to be copied
around at least 3 times...which I'm afraid almost everyone does because
it's the default...

-- 
GCP
