All of lore.kernel.org
* free space inode generation (0) did not match free space cache generation
@ 2014-03-22 18:13 Hendrik Friedel
  2014-03-22 19:23 ` Duncan
  0 siblings, 1 reply; 11+ messages in thread
From: Hendrik Friedel @ 2014-03-22 18:13 UTC (permalink / raw)
  To: linux-btrfs

Hello,

I have a file-system on which I cannot write anymore ("no space left on 
device", which is not true:
root@homeserver:~/btrfs/integration/devel# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd2        30G   24G  5.1G  83% /mnt/test1
)

About the filesystem:
root@homeserver:~/btrfs/integration/devel# ./btrfs fi show /mnt/test1
Label: 'ROOT_BTRFS_RAID'  uuid: a2d5f2db-04ca-413a-aee1-cb754aa8fba5
         Total devices 2 FS bytes used 11.84GiB
         devid    1 size 14.85GiB used 14.67GiB path /dev/sde2
         devid    2 size 14.65GiB used 14.65GiB path /dev/sdd2
Btrfs this-will-become-v3.13-48-g57c3600


Check of the filesystem:
root@homeserver:~/btrfs/integration/devel# umount /mnt/test1
root@homeserver:~/btrfs/integration/devel# ./btrfsck /dev/sdd2
Checking filesystem on /dev/sdd2
UUID: a2d5f2db-04ca-413a-aee1-cb754aa8fba5
checking extents
checking free space cache
free space inode generation (0) did not match free space cache 
generation (41)
free space inode generation (0) did not match free space cache 
generation (7380)
free space inode generation (0) did not match free space cache 
generation (3081)
checking fs roots
checking csums
checking root refs
found 3680170466 bytes used err is 0
total csum bytes: 10071956
total tree bytes: 2398781440
total fs tree bytes: 2308784128
total extent tree bytes: 74203136
btree space waste bytes: 372004575
file data blocks allocated: 341759610880
  referenced 75292241920
Btrfs this-will-become-v3.13-48-g57c3600

Before the btrfsck I did a
  mount -o clear_cache  /dev/sdd2 /mnt/test1/

which in fact reduced the number of error messages (did not match free 
space cache generation) from more than ten to just three.

I do have a backup of the FS, and in fact it would have been quicker just 
to wipe the disk and restore the backup (just 16GB) than to write this 
message.
But:
Would it be of interest to someone to look into fixing this, so that 
btrfs development can profit from it, or should I just wipe the disc?

Greetings,
Hendrik

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-22 18:13 free space inode generation (0) did not match free space cache generation Hendrik Friedel
@ 2014-03-22 19:23 ` Duncan
  0 siblings, 0 replies; 11+ messages in thread
From: Duncan @ 2014-03-22 19:23 UTC (permalink / raw)
  To: linux-btrfs

Hendrik Friedel posted on Sat, 22 Mar 2014 19:13:48 +0100 as excerpted:

> I have a file-system on which I cannot write anymore (no space left on
> device, which is not true

> root@homeserver:~/btrfs/integration/devel# df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sdd2        30G   24G  5.1G  83% /mnt/test1

> root@homeserver:~/btrfs/integration/devel# ./btrfs fi show /mnt/test1
> Label: 'ROOT_BTRFS_RAID'  uuid: a2d5f2db-04ca-413a-aee1-cb754aa8fba5
>          Total devices 2 FS bytes used 11.84GiB
>          devid    1 size 14.85GiB used 14.67GiB path /dev/sde2
>          devid    2 size 14.65GiB used 14.65GiB path /dev/sdd2
> Btrfs this-will-become-v3.13-48-g57c3600

That's a FAQ and I just replied to a different thread with a detailed 
explanation and procedure for fixing.  See

http://permalink.gmane.org/gmane.comp.file-systems.btrfs/33643

It's linked there, but you'll also want to read up on the btrfs wiki, 
particularly the free-space and balance sections of the FAQ, along with 
the balance filters page.

https://btrfs.wiki.kernel.org

Bookmark it! =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-25 20:10           ` Hugo Mills
  2014-03-25 21:28             ` Duncan
@ 2014-03-28  7:32             ` Hendrik Friedel
  1 sibling, 0 replies; 11+ messages in thread
From: Hendrik Friedel @ 2014-03-28  7:32 UTC (permalink / raw)
  To: Hugo Mills, Duncan, linux-btrfs

Hello,

after merely 5 days, I have the same problem:

root@homeserver:~# ./btrfs/integration/devel/btrfs fi df /mnt/test1/
Disk size:                29.50GiB
Disk allocated:           29.30GiB
Disk unallocated:        202.00MiB
Used:                     13.84GiB
Free (Estimated):        929.95MiB      (Max: 1.01GiB, min: 929.95MiB)
Data to disk ratio:           50 %
root@homeserver:~# ./btrfs/integration/devel/btrfs fi show /mnt/test1/
Label: 'ROOT_BTRFS_RAID'  uuid: a2d5f2db-04ca-413a-aee1-cb754aa8fba5
         Total devices 2 FS bytes used 13.84GiB
         devid    1 size 14.85GiB used 14.65GiB path /dev/sde2
         devid    2 size 14.65GiB used 14.65GiB path /dev/sdd2


Btrfs this-will-become-v3.13-48-g57c3600
root@homeserver:~# time ./btrfs/integration/devel/btrfs balance start 
-dusage=0 /mnt/test1
Done, had to relocate 0 out of 22 chunks

real    0m2.734s
user    0m0.000s
sys     0m0.022s


I increased dusage until I got:
root@homeserver:~# time ./btrfs/integration/devel/btrfs balance start 
-dusage=90 /mnt/test1
ERROR: error during balancing '/mnt/test1' - No space left on device
There may be more info in syslog - try dmesg | tail
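The manual escalation above (retrying with a rising -dusage threshold until a step fails) could be scripted. A minimal sketch; the step list is an assumption, and the balance command is passed in as a parameter so the loop can be exercised without a btrfs filesystem:

```shell
#!/bin/sh
# Sketch: run a balance repeatedly with a rising -dusage threshold,
# stopping at the first failure (e.g. the ENOSPC seen above).
# In real use the command argument would wrap something like:
#   real_balance() { btrfs balance start -dusage="$1" /mnt/test1; }
run_balance_steps() {
    balance_cmd=$1              # a command taking a usage percentage
    for pct in 0 5 10 25 50 75 90; do
        if ! "$balance_cmd" "$pct"; then
            echo "stopped at dusage=$pct"
            return 1
        fi
    done
    echo "completed through dusage=90"
}
```

Invoked e.g. as `run_balance_steps real_balance`; starting at -dusage=0 first frees completely empty chunks, which is cheap and often enough to let larger steps succeed.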


Before I could do a full balance I had to delete all snapshots:
~20 on my root subvolume
~40 on my /home and /root subvolumes

I do not find this an extraordinarily high number of snapshots; others 
using snapper will likely have even more.

Any idea what the reason could be here?

Regards,
Hendrik



On 25.03.2014 21:10, Hugo Mills wrote:
> On Tue, Mar 25, 2014 at 09:03:26PM +0100, Hendrik Friedel wrote:
>> Hi,
>>
>>> Well, given the relative immaturity of btrfs as a filesystem at this
>>> point in its lifetime, I think it's acceptable/tolerable.  However, for a
>>> filesystem feted[1] to ultimately replace the ext* series as an assumed
>>> Linux default, I'd definitely argue that the current situation should be
>>> changed such that btrfs can automatically manage its own de-allocation at
>>> some point, yes, and that said "some point" really needs to come before
>>> that point at which btrfs can be considered an appropriate replacement
>>> for ext2/3/4 as the assumed default Linux filesystem of the day.
>>
>> Agreed! I hope this is on the ToDo list?!
>
> https://btrfs.wiki.kernel.org/index.php/Project_ideas#Block_group_reclaim
>
>     Yes. :)
>
>>> [1] feted: celebrated, honored.  I had to look it up to be sure my
>>> intuition on usage was correct, and indeed I had spelled it wrong
>>
>> :-)
>
>     Did you mean "fated": intended, destined?
>
>     Hugo.
>


-- 
Hendrik Friedel
Auf dem Brink 12
28844 Weyhe
Mobil 0178 1874363

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-25 21:28             ` Duncan
@ 2014-03-25 21:50               ` Hugo Mills
  0 siblings, 0 replies; 11+ messages in thread
From: Hugo Mills @ 2014-03-25 21:50 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 3114 bytes --]

On Tue, Mar 25, 2014 at 09:28:20PM +0000, Duncan wrote:
> Hugo Mills posted on Tue, 25 Mar 2014 20:10:20 +0000 as excerpted:
> 
> > Did you mean "fated": intended, destined?
> 
> No, I meant "feted", altho I understand in Europe the first "e" would 
> likely have a caret-hat (fêted), but us US-ASCII folks don't have such a 
> thing easily available, so unless I copy/paste as I just did or use 
> charselect, "feted" without the caret it is.

   Either word works in the context -- I wasn't knocking you at all. I
was just testing the fit of the homophone (particularly since you'd
mentioned checking the spelling).

> Where I've seen "feted" used it tends to have a slightly future-
> predictive hint to it, something that's considered a "shoe-in" to use 

   Or a shoo-in... :)

> another term, but that isn't necessarily certain just yet.  Alternatively 
> or as well, it can mean something that many or the majority considers/
> celebrates as true, but that the author isn't necessarily taking a 
> particular position on at this time, perhaps as part of the traditional 
> journalist's neutral observer's perspective, saying "other people 
> celebrate it as", without personally 100% endorsing the same position.
> 
> Which fit my usage exactly.  I wanted to indicate that btrfs' position as 
> a successor to the ext3/4 throne is a widely held expectation, but that 
> while I agree with the general sentiment, it's with a "wait and see if/
> when these few details get fixed" attitude, because I don't think that a 
> btrfs that a knowledgeable admin must babysit in order to be sure it 
> doesn't run out of unallocated chunks, for example, is quite ready for 
> usage by "the masses", that is, to take the throne as crowned successor 
> to ext3/4 just yet.  And "feted" seemed the perfect word to express and 
> acknowledge that expectation, while at the same time conveying my slight 
> personal reservation.

   Ack. There's a number of sharp edges like this hanging around.
Those of us who've been here for a while don't tend to notice them (or
at least, deprioritise them), and it's a good thing to have people
saying "do I really have to do this crap?" occasionally.

   Hugo.

> In fact, until I looked up the word I had no idea the word could also be 
> used as a noun in addition to my usage as a verb, and used as a noun, 
> that it meant a feast, celebration or carnival.  I was familiar only with 
> the usage I demonstrated here, including the slight hint of third party 
> neutrality or wait-and-see reservation, which was in fact my reason for 
> choosing the term in the first place.
> 
> (This is of course one reason I so enjoy newsgroups and mailing lists.  
> One never knows what sort of entirely unpredicted but useful thing one 
> might learn from them, even in my own replies sometimes! =:^)


-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
      --- Great oxymorons of the world, no. 10: Business Ethics ---      

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-25 20:10           ` Hugo Mills
@ 2014-03-25 21:28             ` Duncan
  2014-03-25 21:50               ` Hugo Mills
  2014-03-28  7:32             ` Hendrik Friedel
  1 sibling, 1 reply; 11+ messages in thread
From: Duncan @ 2014-03-25 21:28 UTC (permalink / raw)
  To: linux-btrfs

Hugo Mills posted on Tue, 25 Mar 2014 20:10:20 +0000 as excerpted:

> Did you mean "fated": intended, destined?

No, I meant "feted", altho I understand in Europe the first "e" would 
likely have a caret-hat (fêted), but us US-ASCII folks don't have such a 
thing easily available, so unless I copy/paste as I just did or use 
charselect, "feted" without the caret it is.

Where I've seen "feted" used it tends to have a slightly future-
predictive hint to it, something that's considered a "shoe-in" to use 
another term, but that isn't necessarily certain just yet.  Alternatively 
or as well, it can mean something that many or the majority considers/
celebrates as true, but that the author isn't necessarily taking a 
particular position on at this time, perhaps as part of the traditional 
journalist's neutral observer's perspective, saying "other people 
celebrate it as", without personally 100% endorsing the same position.

Which fit my usage exactly.  I wanted to indicate that btrfs' position as 
a successor to the ext3/4 throne is a widely held expectation, but that 
while I agree with the general sentiment, it's with a "wait and see if/
when these few details get fixed" attitude, because I don't think that a 
btrfs that a knowledgeable admin must babysit in order to be sure it 
doesn't run out of unallocated chunks, for example, is quite ready for 
usage by "the masses", that is, to take the throne as crowned successor 
to ext3/4 just yet.  And "feted" seemed the perfect word to express and 
acknowledge that expectation, while at the same time conveying my slight 
personal reservation.

In fact, until I looked up the word I had no idea the word could also be 
used as a noun in addition to my usage as a verb, and used as a noun, 
that it meant a feast, celebration or carnival.  I was familiar only with 
the usage I demonstrated here, including the slight hint of third party 
neutrality or wait-and-see reservation, which was in fact my reason for 
choosing the term in the first place.

(This is of course one reason I so enjoy newsgroups and mailing lists.  
One never knows what sort of entirely unpredicted but useful thing one 
might learn from them, even in my own replies sometimes! =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-25 20:03         ` Hendrik Friedel
@ 2014-03-25 20:10           ` Hugo Mills
  2014-03-25 21:28             ` Duncan
  2014-03-28  7:32             ` Hendrik Friedel
  0 siblings, 2 replies; 11+ messages in thread
From: Hugo Mills @ 2014-03-25 20:10 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: Duncan, linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1233 bytes --]

On Tue, Mar 25, 2014 at 09:03:26PM +0100, Hendrik Friedel wrote:
> Hi,
> 
> >Well, given the relative immaturity of btrfs as a filesystem at this
> >point in its lifetime, I think it's acceptable/tolerable.  However, for a
> >filesystem feted[1] to ultimately replace the ext* series as an assumed
> >Linux default, I'd definitely argue that the current situation should be
> >changed such that btrfs can automatically manage its own de-allocation at
> >some point, yes, and that said "some point" really needs to come before
> >that point at which btrfs can be considered an appropriate replacement
> >for ext2/3/4 as the assumed default Linux filesystem of the day.
> 
> Agreed! I hope this is on the ToDo list?!

https://btrfs.wiki.kernel.org/index.php/Project_ideas#Block_group_reclaim

   Yes. :)

> >[1] feted: celebrated, honored.  I had to look it up to be sure my
> >intuition on usage was correct, and indeed I had spelled it wrong
> 
> :-)

   Did you mean "fated": intended, destined?

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
     --- IMPROVE YOUR ORGANISMS!!  -- Subject line of spam email ---     

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 811 bytes --]

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-25 13:00       ` Duncan
@ 2014-03-25 20:03         ` Hendrik Friedel
  2014-03-25 20:10           ` Hugo Mills
  0 siblings, 1 reply; 11+ messages in thread
From: Hendrik Friedel @ 2014-03-25 20:03 UTC (permalink / raw)
  To: Duncan, linux-btrfs

Hi,

> Well, given the relative immaturity of btrfs as a filesystem at this
> point in its lifetime, I think it's acceptable/tolerable.  However, for a
> filesystem feted[1] to ultimately replace the ext* series as an assumed
> Linux default, I'd definitely argue that the current situation should be
> changed such that btrfs can automatically manage its own de-allocation at
> some point, yes, and that said "some point" really needs to come before
> that point at which btrfs can be considered an appropriate replacement
> for ext2/3/4 as the assumed default Linux filesystem of the day.

Agreed! I hope this is on the ToDo list?!

> [1] feted: celebrated, honored.  I had to look it up to be sure my
> intuition on usage was correct, and indeed I had spelled it wrong

:-)


Greetings,
Hendrik

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-24 20:52     ` Hendrik Friedel
@ 2014-03-25 13:00       ` Duncan
  2014-03-25 20:03         ` Hendrik Friedel
  0 siblings, 1 reply; 11+ messages in thread
From: Duncan @ 2014-03-25 13:00 UTC (permalink / raw)
  To: linux-btrfs

Hendrik Friedel posted on Mon, 24 Mar 2014 21:52:09 +0100 as excerpted:

>> But regardless of my experience with my own usage pattern, I suspect
>> that with reasonable monitoring, you'll eventually become familiar with
>> how fast the chunks are allocated and possibly with what sort of
>> actions beyond the obvious active moving stuff around on the filesystem
>> triggers those allocations, for your specific usage pattern, and can
>> then adapt as necessary.
> 
> Yes, that's a workaround. But really, that makes one a slave to one's
> filesystem. That's not really acceptable, is it?

Well, given the relative immaturity of btrfs as a filesystem at this 
point in its lifetime, I think it's acceptable/tolerable.  However, for a 
filesystem feted[1] to ultimately replace the ext* series as an assumed 
Linux default, I'd definitely argue that the current situation should be 
changed such that btrfs can automatically manage its own de-allocation at 
some point, yes, and that said "some point" really needs to come before 
that point at which btrfs can be considered an appropriate replacement 
for ext2/3/4 as the assumed default Linux filesystem of the day.

---
[1] feted: celebrated, honored.  I had to look it up to be sure my 
intuition on usage was correct, and indeed I had spelled it wrong 
(fetted).  Yay for online Wiktionary and google-define! =:^)  Anyway, for 
others who may not be familiar with the term, since I have the links open 
ATM:
http://en.wiktionary.org/wiki/feted
https://www.google.com/search?q=define:feted

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-22 23:32   ` Duncan
@ 2014-03-24 20:52     ` Hendrik Friedel
  2014-03-25 13:00       ` Duncan
  0 siblings, 1 reply; 11+ messages in thread
From: Hendrik Friedel @ 2014-03-24 20:52 UTC (permalink / raw)
  To: Duncan, linux-btrfs

Hello,

>> I read through the FAQ you mentioned, but I must admit that I do not
>> fully understand it.
>
> My experience is that it takes a bit of time to soak in.  Between time,
> previous Linux experience, and reading this list for awhile, things do
> make more sense now, but my understanding has definitely changed and
> deepened over time.

Yes, I'm progressing. But I am a bit behind you :-)

>> What I am wondering about is what caused this problem to arise. The
>> filesystem was hardly a week old, never mistreated (powered down without
>> unmounting or so) and not even half full. So what caused all the data
>> chunks to be allocated?
>
> I can't really say, but it's worth noting that btrfs can normally
> allocate chunks, but doesn't (yet?) automatically deallocate them.  To
> deallocate, you balance.  Btrfs can reuse areas that have been deleted as
> the same thing, data or metadata, but it can't switch between them
> without a balance.

Ok, I do understand that. I don't know why it could not automatically 
deallocate them. But then I'd at least expect it to detect this problem 
automatically and do a balance when needed.
Note that this problem caused my system to become unavailable, and it 
took days to find out how to fix it (even if the fix itself was then 
very quick, thanks to your help).

> So the most obvious thing is that if you copy a bunch of stuff around so
> the filesystem is nearing full, then delete a bunch of it, consider
> checking your btrfs filesystem df/show stats and see whether you need a
> balance.  But like I said, that's obvious.

Yes. I did not really do much with the system. I copied everything onto 
the filesystem, rebooted and let it run for a week.

>> The only thing that I could think of is that I created hourly snapshots
>> with snapper.
>> In fact in order to be able to do the balance, I had to delete something
>> -so I deleted the snapshots.
>
> One possibility off the top of my head:  Do you have noatime set in your
> mount options?  That's definitely recommended with snapshotting, since
> otherwise, atime updates will be changes to the filesystem metadata since
> the last snapshot, and thus will add to the difference between snapshots
> that must be stored.  If you're doing hourly snapshots and are accessing
> much of the filesystem each hour, that'll add up!

Really? I do have noatime set, but I would expect the access time to be 
stored in the metadata. So when snapshotting, only the changed metadata 
would have to be stored for the files that have been accessed between the 
two snapshots. That should not be a problem, should it?

> Additionally, I recommend snapshot thinning.  Hourly snapshots are nice
> but after some time, they just become noise.  Will you really know or
> care which specific hour it was if you're having to retrieve a snapshot
> from a month ago?

In fact, snapper does that for me.

> Also, it may or may not apply to you, but internal-rewrite (as opposed to
> simply appended) files are bad news for COW-based filesystems such as
> btrfs.

I don't see any applications that do internal re-writes on my system. 
Interesting nevertheless, esp. wrt. the possible solution. Thanks.

>> Besides this:
>> You recommend monitoring the output of btrfs fi show and to do a
>> balance, whenever unallocated space drops too low. I can monitor this
>> and let monit send me a message once that happens. Still, I'd like to
>> know how to make this less likely.
>
> I haven't had a problem with it here, but then I haven't been doing much
> snapshotting (and always manual when I do it), I don't run any VMs or
> large databases, I mounted with the autodefrag option from the beginning,
> and I've used noatime for nearing a decade now as it was also recommended
> for my previous filesystem, reiserfs.

The only differences are that my snapshotting is automated and that 
autodefrag is not set. No databases, no VMs, noatime set. It's a simple 
install of Ubuntu.

> But regardless of my experience with my own usage pattern, I suspect that
> with reasonable monitoring, you'll eventually become familiar with how
> fast the chunks are allocated and possibly with what sort of actions
> beyond the obvious active moving stuff around on the filesystem triggers
> those allocations, for your specific usage pattern, and can then adapt as
> necessary.

Yes, that's a workaround. But really, that makes one a slave to one's 
filesystem. That's not really acceptable, is it?

Regards,
Hendrik



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
  2014-03-22 21:16 ` Hendrik Friedel
@ 2014-03-22 23:32   ` Duncan
  2014-03-24 20:52     ` Hendrik Friedel
  0 siblings, 1 reply; 11+ messages in thread
From: Duncan @ 2014-03-22 23:32 UTC (permalink / raw)
  To: linux-btrfs

Hendrik Friedel posted on Sat, 22 Mar 2014 22:16:27 +0100 as excerpted:

> I read through the FAQ you mentioned, but I must admit that I do not
> fully understand it.

My experience is that it takes a bit of time to soak in.  Between time, 
previous Linux experience, and reading this list for awhile, things do 
make more sense now, but my understanding has definitely changed and 
deepened over time.

> What I am wondering about is what caused this problem to arise. The
> filesystem was hardly a week old, never mistreated (powered down without
> unmounting or so) and not even half full. So what caused all the data
> chunks to be allocated?

I can't really say, but it's worth noting that btrfs can normally 
allocate chunks, but doesn't (yet?) automatically deallocate them.  To 
deallocate, you balance.  Btrfs can reuse areas that have been deleted as 
the same thing, data or metadata, but it can't switch between them 
without a balance.

So the most obvious thing is that if you copy a bunch of stuff around so 
the filesystem is nearing full, then delete a bunch of it, consider 
checking your btrfs filesystem df/show stats and see whether you need a 
balance.  But like I said, that's obvious.

> The only thing that I could think of is that I created hourly snapshots
> with snapper.
> In fact in order to be able to do the balance, I had to delete something
> -so I deleted the snapshots.

One possibility off the top of my head:  Do you have noatime set in your 
mount options?  That's definitely recommended with snapshotting, since 
otherwise, atime updates will be changes to the filesystem metadata since 
the last snapshot, and thus will add to the difference between snapshots 
that must be stored.  If you're doing hourly snapshots and are accessing 
much of the filesystem each hour, that'll add up!
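In fstab terms, the recommendation amounts to something like this (hypothetical line; the UUID is the one from this thread, and the mountpoint is an assumption):

```
# /etc/fstab -- noatime stops atime-only metadata updates from
# accumulating as differences between hourly snapshots
UUID=a2d5f2db-04ca-413a-aee1-cb754aa8fba5  /mnt/test1  btrfs  defaults,noatime  0  0
```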

Additionally, I recommend snapshot thinning.  Hourly snapshots are nice 
but after some time, they just become noise.  Will you really know or 
care which specific hour it was if you're having to retrieve a snapshot 
from a month ago?

So hourly snapshots, but after say a day, delete two out of three, 
leaving three-hourly snapshots.  After two days, delete another half, 
leaving six-hourly snapshots (four a day).  After a week, delete three of 
the four, leaving daily snapshots.  After a quarter (13 weeks) delete six 
of seven (or 4 of five if it's weekdays only), leaving weekly snapshots.  
After a year, delete 12 of the 13, leaving quarterly snapshots.  ...  Or 
something like that.  You get the idea.  Obviously script it, just like 
the snapshotting itself is scripted.
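That schedule can be expressed as a small predicate. A sketch only; the cut-off hours and the "slot" numbering are my assumptions about how one might implement it, not part of snapper or any existing tool:

```shell
#!/bin/sh
# Sketch of the thinning schedule described above: given a snapshot's
# age in hours and its hour-of-epoch "slot", decide whether it survives
# thinning. A real script would list the snapshots, compute both values,
# and delete the rejects.
keep_snapshot() {
    age_h=$1    # hours since the snapshot was taken
    slot=$2     # hour number when it was taken (epoch seconds / 3600)
    if   [ "$age_h" -lt 24 ];   then return 0                    # hourly for a day
    elif [ "$age_h" -lt 48 ];   then [ $((slot % 3)) -eq 0 ]     # then 3-hourly
    elif [ "$age_h" -lt 168 ];  then [ $((slot % 6)) -eq 0 ]     # then 6-hourly for a week
    elif [ "$age_h" -lt 2184 ]; then [ $((slot % 24)) -eq 0 ]    # then daily for a quarter
    elif [ "$age_h" -lt 8760 ]; then [ $((slot % 168)) -eq 0 ]   # then weekly for a year
    else                             [ $((slot % 2184)) -eq 0 ]  # then quarterly
    fi
}
```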

That will solve another problem too.  When btrfs gets into the thousands 
of snapshots, as it will pretty fast with unthinned hourlies, certain 
operations slow down dramatically.  The problem was much worse at one 
point, but the snapshot aware defrag was disabled for the time being, as 
it simply didn't scale and people with thousands of snapshots were seeing 
balances or defrags go days with little visible progress.  But, few 
people really /need/ thousands of snapshots.  With a bit of reasonable 
thinning down to one a quarter, you end up with 200-300 snapshots and 
that's it.

Also, it may or may not apply to you, but internal-rewrite (as opposed to 
simply appended) files are bad news for COW-based filesystems such as 
btrfs.  The autodefrag mount option can help with this for smaller files 
(say to several hundred megabytes in size), but for larger (from say half 
a gig) actively rewritten files such as databases, VM images, and pre-
allocated torrent downloads until they're fully downloaded, setting the 
NOCOW attribute (chattr +C, change in-place, instead of using the normal 
copy-on-write) is strongly recommended.  But the catch is that the 
attribute needs to be set while the file is still zero-size, before it 
actually has any content.  The easiest way to do that is to create a 
dedicated directory for such files and to set the attribute on the 
directory, after which it'll automatically be inherited by any newly 
created files or subdirs in that directory.
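In the session style used elsewhere in this thread, that setup might look like the following (hypothetical path; note the attribute must be set before any file in the directory has content):

```
root@homeserver:~# mkdir /mnt/test1/vm-images
root@homeserver:~# chattr +C /mnt/test1/vm-images
root@homeserver:~# lsattr -d /mnt/test1/vm-images   # the 'C' attribute should now be shown
```

Files subsequently created under that directory inherit NOCOW automatically.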

But, there's a catch with snapshots.  The first change to a block after a 
snapshot forces a COW anyway, since the data has changed from that of the 
snapshot.  So for those making heavy use of snapshots, creating dedicated 
subvolumes for these NOCOW directories is a good idea, since snapshots 
are per subvolume and thus these dedicated subvolumes will be excluded 
from the general snapshots (just don't snapshot the dedicated subvolumes).

Of course that does limit the value of snapshots to some degree, but it's 
worth keeping in mind that most filesystems don't even offer the snapshot 
feature at all, so...

> Can you tell me where I can read about the causes for this problem?

The above wisdom is mostly from reading the list for awhile.  Like I 
said, it takes awhile to soak in, and my thinking on the subject has 
changed somewhat over time.  The fact that NOCOW wasn't NOCOW on the 
first change after a snapshot was a rather big epiphany to me, but AFAIK, 
that's not on the wiki or elsewhere yet.  It makes sense if you think 
about it, but someone specifically asked, and the devs confirmed it.  
Before that I had no idea, and was left wondering at some of the behavior 
being reported, even with nocow properly set.  (That was back when the 
broken snapshot aware defrag was still in place, as it simply didn't 
scale with snapshots and such files, and I couldn't figure out why NOCOW 
wasn't working to avoid the problem, until a dev confirmed that the first 
change after a snapshot was COW anyway, and it all dropped into place... 
continuously rewritten VM images, even if set NOCOW, would still be 
continuously fragmented, if people were doing regular snapshots on them.)

> Besides this:
> You recommend monitoring the output of btrfs fi show and to do a
> balance, whenever unallocated space drops too low. I can monitor this
> and let monit send me a message once that happens. Still, I'd like to
> know how to make this less likely.

I haven't had a problem with it here, but then I haven't been doing much 
snapshotting (and always manual when I do it), I don't run any VMs or 
large databases, I mounted with the autodefrag option from the beginning, 
and I've used noatime for nearing a decade now as it was also recommended 
for my previous filesystem, reiserfs.

But regardless of my experience with my own usage pattern, I suspect that 
with reasonable monitoring, you'll eventually become familiar with how 
fast the chunks are allocated and possibly with what sort of actions 
beyond the obvious active moving stuff around on the filesystem triggers 
those allocations, for your specific usage pattern, and can then adapt as 
necessary.
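For the monitoring itself, a sketch of one way to do it: the parsing and the GiB arithmetic are my assumptions, and the function reads `btrfs fi show` text on stdin (as shown earlier in this thread) so it can be fed canned output. A cron job or monit check could compare the result against a threshold and alert before the devices fill up:

```shell
#!/bin/sh
# Sketch: report the smallest per-device (size - used) gap, in GiB,
# from `btrfs fi show` output read on stdin. That gap is the space
# not yet allocated to chunks on that device.
min_unallocated_gib() {
    awk '
        /devid/ {
            size = used = 0
            for (i = 1; i <= NF; i++) {
                if ($i == "size") size = $(i + 1) + 0   # "+ 0" strips the GiB suffix
                if ($i == "used") used = $(i + 1) + 0
            }
            free = size - used
            if (min == "" || free < min) min = free
        }
        END { printf "%.2f\n", min }
    '
}
# Real use:  btrfs fi show /mnt/test1 | min_unallocated_gib
```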

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: free space inode generation (0) did not match free space cache generation
       [not found] <532DF38B.40409@friedels.name>
@ 2014-03-22 21:16 ` Hendrik Friedel
  2014-03-22 23:32   ` Duncan
  0 siblings, 1 reply; 11+ messages in thread
From: Hendrik Friedel @ 2014-03-22 21:16 UTC (permalink / raw)
  To: linux-btrfs


Hello,

thanks for your help, I appreciate your hint.
I think it fixed my problem (a reboot into the system with the fs 
mounted as root is still outstanding).
I read through the FAQ you mentioned, but I must admit that I do not
fully understand it.
What I am wondering about is what caused this problem to arise. The
filesystem was hardly a week old, never mistreated (powered down without
unmounting or so) and not even half full. So what caused all the data
chunks to be allocated?

The only thing that I could think of is that I created hourly snapshots
with snapper.
In fact in order to be able to do the balance, I had to delete something
-so I deleted the snapshots.

Can you tell me where I can read about the causes for this problem?
Besides this:
You recommend monitoring the output of btrfs fi show and to do a
balance, whenever unallocated space drops too low. I can monitor this
and let monit send me a message once that happens. Still, I'd like to
know how to make this less likely.

Greetings,
Hendrik



^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2014-03-28  7:32 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-03-22 18:13 free space inode generation (0) did not match free space cache generation Hendrik Friedel
2014-03-22 19:23 ` Duncan
     [not found] <532DF38B.40409@friedels.name>
2014-03-22 21:16 ` Hendrik Friedel
2014-03-22 23:32   ` Duncan
2014-03-24 20:52     ` Hendrik Friedel
2014-03-25 13:00       ` Duncan
2014-03-25 20:03         ` Hendrik Friedel
2014-03-25 20:10           ` Hugo Mills
2014-03-25 21:28             ` Duncan
2014-03-25 21:50               ` Hugo Mills
2014-03-28  7:32             ` Hendrik Friedel
