From: Tomasz Pala <gotar@polanet.pl>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: exclusive subvolume space missing
Date: Sat, 2 Dec 2017 10:33:01 +0100	[thread overview]
Message-ID: <20171202093301.GA28256@polanet.pl> (raw)
In-Reply-To: <af4ba078-8dc9-3f5d-b28c-7d72e0390294@gmx.com>

OK, I seriously need to address this, as I lost another
3 GB during the night:

On Sat, Dec 02, 2017 at 10:35:12 +0800, Qu Wenruo wrote:

>> #  btrfs fi sh /
>> Label: none  uuid: 17a3de25-6e26-4b0b-9665-ac267f6f6c4a
>>         Total devices 2 FS bytes used 44.10GiB
           Total devices 2 FS bytes used 47.28GiB

>> #  btrfs fi usage /
>> Overall:
>>     Used:                         88.19GiB
       Used:                         94.58GiB
>>     Free (estimated):             18.75GiB      (min: 18.75GiB)
       Free (estimated):             15.56GiB      (min: 15.56GiB)
>> 
>> #  btrfs dev usage /
(output unchanged)

>> #  btrfs fi df /    
>> Data, RAID1: total=51.97GiB, used=43.22GiB
   Data, RAID1: total=51.97GiB, used=46.42GiB
>> System, RAID1: total=32.00MiB, used=16.00KiB
>> Metadata, RAID1: total=2.00GiB, used=895.69MiB
>> GlobalReserve, single: total=131.14MiB, used=0.00B
   GlobalReserve, single: total=135.50MiB, used=0.00B
>> 
>> # df
>> /dev/sda2        64G   45G   19G  71% /
   /dev/sda2        64G   48G   16G  76% /
>> However the difference is on active root fs:
>> 
>> -0/291        24.29GiB      9.77GiB
>> +0/291        15.99GiB     76.00MiB
    0/291        19.19GiB      3.28GiB
> 
> Since you have already showed the size of the snapshots, which hardly
> goes beyond 1G, it may be possible that extent booking is the cause.
> 
> And considering it's all exclusive, defrag may help in this case.

I'm going to try defrag here, but I have a bunch of questions first;
since defrag breaks CoW, I don't want to defragment files that span
multiple snapshots unless they carry huge overhead:
1. Is there any switch meaning 'defrag only exclusive data'?
2. Is there any switch meaning 'defrag only extents fragmented more than X'
   or 'defrag only fragments whose space could actually be freed'?
3. I guess there isn't, so how could I accomplish my goal, i.e.
   reclaiming the space lost to fragmentation, without breaking
   snapshotted CoW where doing so would be not only pointless, but actually harmful?
4. How can I prevent this from happening again? All the files that are
   written constantly (a stats collector here; PostgreSQL databases and
   logs on other machines) are marked nocow (+C); maybe some new
   attribute to mark a file for autodefrag? +t?

For example, the largest file from the stats collector:
     Total   Exclusive  Set shared  Filename
 432.00KiB   176.00KiB   256.00KiB  load/load.rrd

but most of them have 'Set shared'==0.
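There doesn't seem to be such a switch today, but as a workaround one
could script the 'defrag only exclusive data' idea; a minimal sketch,
not a built-in btrfs feature (the path and the 32M target extent size
are examples, and it assumes filenames without whitespace):

```shell
#!/bin/sh
# Sketch: defragment only files whose "Set shared" column in
# `btrfs fi du` output is zero, i.e. files holding no data shared
# with any snapshot, so defrag cannot break CoW sharing for them.

select_exclusive() {
    # Read `btrfs fi du` output on stdin and print filenames whose
    # "Set shared" (3rd column) is 0.00B; header lines never match.
    awk '$3 == "0.00B" { print $4 }'
}

SUBVOL=/path/to/subvol    # example path; adjust before use
if [ -d "$SUBVOL" ]; then
    find "$SUBVOL" -type f -exec btrfs fi du {} + 2>/dev/null |
        select_exclusive |
        while IFS= read -r f; do
            # -t 32M: target extent size; small extents get merged
            btrfs filesystem defragment -t 32M "$f"
        done
fi
```

This only approximates 'exclusive': a file with any shared bytes is
skipped entirely, so partially-shared files keep their fragmentation.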

5. The stats collector has been running since the beginning and,
according to the quota output, was not the issue until something happened.
If the problem was triggered by (I'm guessing) a low-space condition,
and it results in even more space being lost, then there is a dangerous
positive feedback loop that makes any filesystem unstable
("once you run out of space, you won't recover").
Does this mean btrfs is simply not (yet?) suitable for frequent-update
usage patterns, like RRD files?
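For what it's worth, the RRD-style pattern is easy to reproduce; a
hedged sketch (assuming a btrfs mount at /mnt/btrfs and filefrag(8)
from e2fsprogs) that emulates small in-place rewrites on a CoW file and
then counts the resulting extents:

```shell
#!/bin/sh
# Sketch: on btrfs, each rewritten 4k block of a CoW file is written to
# a new location, so repeated small in-place updates (the RRD pattern)
# steadily grow the extent count of an otherwise fixed-size file.
F=/mnt/btrfs/demo.rrd        # assumed: must live on a btrfs mount
if [ -d "$(dirname "$F")" ]; then
    dd if=/dev/zero of="$F" bs=1M count=1 status=none   # 1 MiB = 256 4k blocks
    sync
    i=0
    while [ "$i" -lt 100 ]; do
        off=$((i % 256))     # pick a 4k block inside the file
        dd if=/dev/urandom of="$F" bs=4k count=1 seek="$off" \
           conv=notrunc status=none
        i=$((i + 1))
    done
    sync
    filefrag "$F"            # extent count grows with every CoW'd write
fi
```

With chattr +C set before the first write, the same loop rewrites
blocks in place and the extent count stays flat, which is why nocow is
the usual advice for such files.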

6. Or maybe some extra steps should be taken just before creating a snapshot?
I guess a 'defrag exclusive' would be perfect here, reclaiming space
before it gets locked inside the snapshot.
The rationale is obvious: since snapshot-aware defrag was removed,
allow defragmenting only the data exclusive to the subvolume.
This would of course result in only partial file defragmentation, but that
should be enough for pathological cases like mine.
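Such a pre-snapshot step can be scripted today, with the caveat that a
plain recursive defrag also unshares files still shared with *older*
snapshots; a sketch under assumed paths:

```shell
#!/bin/sh
# Sketch: defragment the writable subvolume, then take a read-only
# snapshot, so the newly-written (still exclusive) data is compacted
# before the snapshot pins it. Caveat: `defragment -r` will also CoW
# files shared with earlier snapshots, temporarily *increasing* usage.
SUBVOL=/mnt/root                 # assumed writable subvolume
SNAPDIR=/mnt/snapshots           # assumed snapshot directory
NAME=root-$(date +%F-%H%M)       # e.g. root-2017-12-02-0933

if [ -d "$SUBVOL" ] && [ -d "$SNAPDIR" ]; then
    btrfs filesystem defragment -r -t 32M "$SUBVOL"
    sync                         # flush before snapshotting
    btrfs subvolume snapshot -r "$SUBVOL" "$SNAPDIR/$NAME"
fi
```

Combined with an exclusive-only file filter, this would come close to
the 'defrag exclusive before snapshot' behaviour described above.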

-- 
Tomasz Pala <gotar@pld-linux.org>


Thread overview: 32+ messages
2017-12-01 16:15 exclusive subvolume space missing Tomasz Pala
2017-12-01 21:27 ` Duncan
2017-12-01 21:36 ` Hugo Mills
2017-12-02  0:53   ` Tomasz Pala
2017-12-02  1:05     ` Qu Wenruo
2017-12-02  1:43       ` Tomasz Pala
2017-12-02  2:17         ` Qu Wenruo
2017-12-02  2:56     ` Duncan
2017-12-02 16:28     ` Tomasz Pala
2017-12-02 17:18       ` Tomasz Pala
2017-12-03  1:45         ` Duncan
2017-12-03 10:47           ` Adam Borowski
2017-12-04  5:11             ` Chris Murphy
2017-12-10 10:49           ` Tomasz Pala
2017-12-04  4:58     ` Chris Murphy
2017-12-02  0:27 ` Qu Wenruo
2017-12-02  1:23   ` Tomasz Pala
2017-12-02  1:47     ` Qu Wenruo
2017-12-02  2:21       ` Tomasz Pala
2017-12-02  2:35         ` Qu Wenruo
2017-12-02  9:33           ` Tomasz Pala [this message]
2017-12-04  0:34             ` Qu Wenruo
2017-12-10 11:27               ` Tomasz Pala
2017-12-10 15:49                 ` Tomasz Pala
2017-12-10 23:44                 ` Qu Wenruo
2017-12-11  0:24                   ` Qu Wenruo
2017-12-11 11:40                   ` Tomasz Pala
2017-12-12  0:50                     ` Qu Wenruo
2017-12-15  8:22                       ` Tomasz Pala
2017-12-16  3:21                         ` Duncan
2017-12-05 18:47   ` How exclusive in parent qgroup is computed? (was: Re: exclusive subvolume space missing) Andrei Borzenkov
2017-12-05 23:57     ` How exclusive in parent qgroup is computed? Qu Wenruo
