Subject: btrfs caused processes to freeze and exhausted all available memory during snapshot deletion
From: Simon Arlott
Date: 2016-09-22 19:36 UTC
To: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 4372 bytes --]

I started using snapper (v0.2.4) on 2016-09-18; two days later, while
snapshots were being deleted at 02:16, the system ran out of memory
(2GB) [1].
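
(A rough sketch of how memory could be watched while the timeline
cleanup runs; these are not the commands used at the time, and the
kmalloc-64 slab name is taken from the later slabtop observation.)

  # sample free memory and the kmalloc-64 slab every 10s (run as root)
  while sleep 10; do
      date
      grep -E 'MemFree|MemAvailable' /proc/meminfo
      grep '^kmalloc-64 ' /proc/slabinfo
  done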

At this point there were 6 /home subvolumes with the following snapshots:
Type   | #  | Pre # | Date                         | User | Cleanup  | Description | Userdata
-------+----+-------+------------------------------+------+----------+-------------+---------
single | 0  |       |                              | root |          | current     |
single | 1  |       | Sun 18 Sep 2016 17:08:04 BST | root | timeline | timeline    |
single | 8  |       | Mon 19 Sep 2016 00:08:03 BST | root | timeline | timeline    |
single | 25 |       | Mon 19 Sep 2016 17:08:04 BST | root | timeline | timeline    |
single | 26 |       | Mon 19 Sep 2016 18:08:04 BST | root | timeline | timeline    |
single | 27 |       | Mon 19 Sep 2016 19:08:03 BST | root | timeline | timeline    |
single | 28 |       | Mon 19 Sep 2016 20:08:02 BST | root | timeline | timeline    |
single | 29 |       | Mon 19 Sep 2016 21:08:02 BST | root | timeline | timeline    |
single | 30 |       | Mon 19 Sep 2016 22:08:06 BST | root | timeline | timeline    |
single | 31 |       | Mon 19 Sep 2016 23:08:06 BST | root | timeline | timeline    |
single | 32 |       | Tue 20 Sep 2016 00:08:04 BST | root | timeline | timeline    |
single | 33 |       | Tue 20 Sep 2016 01:08:02 BST | root | timeline | timeline    |
single | 34 |       | Tue 20 Sep 2016 02:08:03 BST | root | timeline | timeline    |

The amount of data in each subvolume (I have quota enabled but there are
no limits set) is as follows: 21.34GB, 861.29MB, 613.09MB, 16.54MB,
1.49MB, 24.00KB.
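
(Presumably, with quota enabled, per-subvolume sizes like these can be
read from the level-0 qgroups; a sketch, where the /home mount point is
an assumption:)

  # level-0 qgroups (0/<subvolid>) list referenced and exclusive bytes
  btrfs qgroup show /home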

It's a mail server, so there's a lot of Maildir-format email activity
(lkml + netdev + mips in the first subvolume). The mail spool directory
is on the same filesystem, in the @ subvolume (which has no snapshots).

I ran btrfs check [tsort-btrfs-1.txt] on it, which had a peak memory
usage of 6.5GB.
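
(How the peak was measured isn't shown; if GNU time is installed,
something like this would capture it. The device name is an
assumption:)

  # "Maximum resident set size" in the report is the peak RSS of the check
  /usr/bin/time -v btrfs check /dev/vda1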

To avoid potentially losing data (I've run into the uninformative
"repair mode will force to clear out log tree, Are you sure?" prompt
before), I mounted and unmounted the filesystem (first ro, then rw) so
that the log tree would be replayed rather than cleared.

I then ran btrfs check --repair [tsort-btrfs-2.txt] and mounted the
filesystem to disable snapper. That went fine until I unmounted it, at
which point the kernel (4.4.0-24-generic #43-Ubuntu) used 2GB of memory
to do so (all of it in the kmalloc-64 slab, according to slabtop).
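
(For reference, the kmalloc-64 growth can also be read straight from
/proc/slabinfo, as root:)

  # header line plus the kmalloc-64 cache; active objects x 64 bytes each
  grep -E '^(# name|kmalloc-64 )' /proc/slabinfo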


When I started the VM again (with more RAM), processes started hanging
again [2], and the only way I could resolve it was to delete all of the
snapshots. Most of the hangs were on readdir; as long as I avoided
triggering that, I could ssh in without much delay. Deleting the oldest
snapshot on the two largest subvolumes took 7.5 minutes and 2.5 minutes
respectively.
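
(The exact delete commands aren't shown; with snapper it would be
roughly the sketch below, where the config name and snapshot range are
assumptions. Snapshot deletion in btrfs is asynchronous, so timing it
means waiting for the background cleaner:)

  snapper -c home delete 1-34    # queue the snapshots for deletion
  btrfs subvolume sync /home     # block until the cleaner has finished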


Is the filesystem ok? The appearance of btrfs_create_repair_bio in the
stack trace suggests that something was wrong with it:
2016-09-20T02:20:15.005+01:00 <warning> kernel:  [<ffffffffc0138900>] ? btrfs_create_repair_bio+0x100/0x100 [btrfs]
2016-09-20T02:20:15.005+01:00 <warning> kernel:  [<ffffffffc011c6c0>] ? btrfs_real_readdir+0x5e0/0x5e0 [btrfs]
2016-09-20T02:20:15.005+01:00 <warning> kernel:  [<ffffffffc013ad9a>] __extent_readpages.constprop.42+0x2ca/0x2e0 [btrfs]
2016-09-20T02:20:15.005+01:00 <warning> kernel:  [<ffffffffc011c6c0>] ? btrfs_real_readdir+0x5e0/0x5e0 [btrfs]
2016-09-20T02:20:15.005+01:00 <warning> kernel:  [<ffffffffc013b42d>] extent_readpages+0x1ad/0x200 [btrfs]
2016-09-20T02:20:15.005+01:00 <warning> kernel:  [<ffffffffc011c6c0>] ? btrfs_real_readdir+0x5e0/0x5e0 [btrfs]
2016-09-20T02:20:15.005+01:00 <warning> kernel:  [<ffffffffc011a32f>] btrfs_readpages+0x1f/0x30 [btrfs]

I've had issues with this filesystem before [2016-05-18.txt], which I
attributed to damage caused by the "os-prober" package [3].

Should I recreate the filesystem or can btrfsck (v4.4) recover from this
state?
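
(The qgroup mismatches in the check output can be recomputed online
with a quota rescan; whether that, plus --repair, is enough is exactly
the question. The mount point is an assumption:)

  btrfs quota rescan -w /home    # recompute qgroup counters and wait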


I don't think 72 snapshots is an excessive number, but a previous email
[4] on this list stated that there should be no more than 2000 snapshots
in total. One of my VMs has 96 subvolumes; at 40 snapshots each that
would reach 3840 in total (though most of them will have no activity).
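
(For comparison with the 2000 figure, a quick way to count snapshots on
a mounted filesystem; the path is an assumption:)

  btrfs subvolume list -s / | wc -l    # -s lists only snapshot subvolumes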

[1] https://s85.org/AOOnMvh4:view
[2] https://s85.org/4wIPHOW1:view
[3] https://bugs.debian.org/788062
[4] http://www.spinics.net/lists/linux-btrfs/msg47674.html

-- 
Simon Arlott


[-- Attachment #2: tsort-btrfs-1.txt --]
[-- Type: text/plain, Size: 6644 bytes --]

checking extents
checking free space cache
checking fs roots
checking csums
checking root refs
checking quota groups
Ignoring qgroup relation key 258
Ignoring qgroup relation key 71776119061217538
Counts for qgroup id: 307 are different
our:		referenced 22849331200 referenced compressed 22849331200
disk:		referenced 22849331200 referenced compressed 22849331200
our:		exclusive 2375680 exclusive compressed 2375680
disk:		exclusive 2424832 exclusive compressed 2424832
diff:		exclusive -49152 exclusive compressed -49152
Counts for qgroup id: 313 are different
our:		referenced 22849662976 referenced compressed 22849662976
disk:		referenced 14821355520 referenced compressed 14821355520
diff:		referenced 8028307456 referenced compressed 8028307456
our:		exclusive 1474560 exclusive compressed 1474560
disk:		exclusive 770048 exclusive compressed 770048
diff:		exclusive 704512 exclusive compressed 704512
Counts for qgroup id: 319 are different
our:		referenced 22850076672 referenced compressed 22850076672
disk:		referenced 22850076672 referenced compressed 22850076672
our:		exclusive 2330624 exclusive compressed 2330624
disk:		exclusive 2347008 exclusive compressed 2347008
diff:		exclusive -16384 exclusive compressed -16384
found 27847938080 bytes used err is 0
total csum bytes: 22378356
total tree bytes: 4932501504
total fs tree bytes: 4756373504
total extent tree bytes: 146063360
btree space waste bytes: 735301991
file data blocks allocated: 29639204864
 referenced 29585145856
extent buffer leak: start 29818880 len 16384
extent buffer leak: start 23827791872 len 16384
extent buffer leak: start 373047296 len 16384
extent buffer leak: start 627539968 len 16384
extent buffer leak: start 22344695808 len 16384
extent buffer leak: start 20805058560 len 16384
extent buffer leak: start 515801088 len 16384
extent buffer leak: start 1078607872 len 16384
extent buffer leak: start 20933148672 len 16384
extent buffer leak: start 22253387776 len 16384
extent buffer leak: start 22299197440 len 16384
extent buffer leak: start 22502604800 len 16384
extent buffer leak: start 23878729728 len 16384
extent buffer leak: start 24083415040 len 16384
extent buffer leak: start 114638848 len 16384
extent buffer leak: start 373080064 len 16384
extent buffer leak: start 63733760 len 16384
extent buffer leak: start 108789760 len 16384
extent buffer leak: start 109281280 len 16384
extent buffer leak: start 111099904 len 16384
extent buffer leak: start 111804416 len 16384
extent buffer leak: start 115097600 len 16384
extent buffer leak: start 221265920 len 16384
extent buffer leak: start 254394368 len 16384
extent buffer leak: start 270876672 len 16384
extent buffer leak: start 372834304 len 16384
extent buffer leak: start 372932608 len 16384
extent buffer leak: start 373997568 len 16384
extent buffer leak: start 374013952 len 16384
extent buffer leak: start 374194176 len 16384
extent buffer leak: start 374734848 len 16384
extent buffer leak: start 374784000 len 16384
extent buffer leak: start 375537664 len 16384
extent buffer leak: start 375652352 len 16384
extent buffer leak: start 464486400 len 16384
extent buffer leak: start 491798528 len 16384
extent buffer leak: start 525402112 len 16384
extent buffer leak: start 533233664 len 16384
extent buffer leak: start 537853952 len 16384
extent buffer leak: start 589561856 len 16384
extent buffer leak: start 590561280 len 16384
extent buffer leak: start 593788928 len 16384
extent buffer leak: start 600457216 len 16384
extent buffer leak: start 600522752 len 16384
extent buffer leak: start 625000448 len 16384
extent buffer leak: start 625115136 len 16384
extent buffer leak: start 800538624 len 16384
extent buffer leak: start 1021788160 len 16384
extent buffer leak: start 1043365888 len 16384
extent buffer leak: start 1080393728 len 16384
extent buffer leak: start 1081016320 len 16384
extent buffer leak: start 1081917440 len 16384
extent buffer leak: start 1083850752 len 16384
extent buffer leak: start 20804943872 len 16384
extent buffer leak: start 20810088448 len 16384
extent buffer leak: start 20825702400 len 16384
extent buffer leak: start 20838547456 len 16384
extent buffer leak: start 20852097024 len 16384
extent buffer leak: start 20864991232 len 16384
extent buffer leak: start 20876099584 len 16384
extent buffer leak: start 20906819584 len 16384
extent buffer leak: start 20922990592 len 16384
extent buffer leak: start 20949303296 len 16384
extent buffer leak: start 20949336064 len 16384
extent buffer leak: start 22083600384 len 16384
extent buffer leak: start 22089842688 len 16384
extent buffer leak: start 22095462400 len 16384
extent buffer leak: start 22109028352 len 16384
extent buffer leak: start 22166323200 len 16384
extent buffer leak: start 22204825600 len 16384
extent buffer leak: start 22254862336 len 16384
extent buffer leak: start 22273769472 len 16384
extent buffer leak: start 22291742720 len 16384
extent buffer leak: start 22293004288 len 16384
extent buffer leak: start 22295281664 len 16384
extent buffer leak: start 22298443776 len 16384
extent buffer leak: start 22306357248 len 16384
extent buffer leak: start 22338060288 len 16384
extent buffer leak: start 22338535424 len 16384
extent buffer leak: start 22340452352 len 16384
extent buffer leak: start 22341697536 len 16384
extent buffer leak: start 22345318400 len 16384
extent buffer leak: start 22417129472 len 16384
extent buffer leak: start 22479716352 len 16384
extent buffer leak: start 22494543872 len 16384
extent buffer leak: start 22506094592 len 16384
extent buffer leak: start 22507651072 len 16384
extent buffer leak: start 22507732992 len 16384
extent buffer leak: start 22512025600 len 16384
extent buffer leak: start 22512451584 len 16384
extent buffer leak: start 23758307328 len 16384
extent buffer leak: start 23805935616 len 16384
extent buffer leak: start 23877074944 len 16384
extent buffer leak: start 23877828608 len 16384
extent buffer leak: start 23878025216 len 16384
extent buffer leak: start 23881285632 len 16384
extent buffer leak: start 23881400320 len 16384
extent buffer leak: start 23903928320 len 16384
extent buffer leak: start 23931813888 len 16384
extent buffer leak: start 23944495104 len 16384
extent buffer leak: start 24047747072 len 16384
extent buffer leak: start 24070668288 len 16384
extent buffer leak: start 24076992512 len 16384
extent buffer leak: start 24078204928 len 16384
extent buffer leak: start 24080023552 len 16384
extent buffer leak: start 24080580608 len 16384
extent buffer leak: start 24083972096 len 16384
extent buffer leak: start 24125587456 len 16384
extent buffer leak: start 24149475328 len 16384

[-- Attachment #3: tsort-btrfs-2.txt --]
[-- Type: text/plain, Size: 6751 bytes --]

checking extents
Fixed 0 roots.
checking free space cache
checking fs roots
checking csums
checking root refs
checking quota groups
extent buffer leak: start 29818880 len 16384
extent buffer leak: start 23827791872 len 16384
extent buffer leak: start 373047296 len 16384
extent buffer leak: start 627539968 len 16384
extent buffer leak: start 22344695808 len 16384
extent buffer leak: start 20805058560 len 16384
extent buffer leak: start 515801088 len 16384
extent buffer leak: start 1078607872 len 16384
extent buffer leak: start 20933148672 len 16384
extent buffer leak: start 22253387776 len 16384
extent buffer leak: start 22299197440 len 16384
extent buffer leak: start 22502604800 len 16384
extent buffer leak: start 23878729728 len 16384
extent buffer leak: start 24083415040 len 16384
extent buffer leak: start 114638848 len 16384
extent buffer leak: start 373080064 len 16384
extent buffer leak: start 39583744 len 16384
extent buffer leak: start 63733760 len 16384
extent buffer leak: start 108789760 len 16384
extent buffer leak: start 109281280 len 16384
extent buffer leak: start 111099904 len 16384
extent buffer leak: start 111804416 len 16384
extent buffer leak: start 115097600 len 16384
extent buffer leak: start 221265920 len 16384
extent buffer leak: start 254394368 len 16384
extent buffer leak: start 270876672 len 16384
extent buffer leak: start 372834304 len 16384
extent buffer leak: start 372932608 len 16384
extent buffer leak: start 373997568 len 16384
extent buffer leak: start 374013952 len 16384
extent buffer leak: start 374194176 len 16384
extent buffer leak: start 374734848 len 16384
extent buffer leak: start 374784000 len 16384
extent buffer leak: start 375537664 len 16384
extent buffer leak: start 375652352 len 16384
extent buffer leak: start 464486400 len 16384
extent buffer leak: start 491798528 len 16384
extent buffer leak: start 525402112 len 16384
extent buffer leak: start 533233664 len 16384
extent buffer leak: start 537853952 len 16384
extent buffer leak: start 590561280 len 16384
extent buffer leak: start 593788928 len 16384
extent buffer leak: start 600457216 len 16384
extent buffer leak: start 600522752 len 16384
extent buffer leak: start 625000448 len 16384
extent buffer leak: start 625115136 len 16384
extent buffer leak: start 800538624 len 16384
extent buffer leak: start 1021788160 len 16384
extent buffer leak: start 1043365888 len 16384
extent buffer leak: start 1080393728 len 16384
extent buffer leak: start 1081016320 len 16384
extent buffer leak: start 1081917440 len 16384
extent buffer leak: start 1083850752 len 16384
extent buffer leak: start 20804943872 len 16384
extent buffer leak: start 20810088448 len 16384
extent buffer leak: start 20825702400 len 16384
extent buffer leak: start 20838547456 len 16384
extent buffer leak: start 20852097024 len 16384
extent buffer leak: start 20864991232 len 16384
extent buffer leak: start 20876099584 len 16384
extent buffer leak: start 20906819584 len 16384
extent buffer leak: start 20922990592 len 16384
extent buffer leak: start 20949303296 len 16384
extent buffer leak: start 20949336064 len 16384
extent buffer leak: start 22083600384 len 16384
extent buffer leak: start 22089842688 len 16384
extent buffer leak: start 22095462400 len 16384
extent buffer leak: start 22109028352 len 16384
extent buffer leak: start 22166323200 len 16384
extent buffer leak: start 22204825600 len 16384
extent buffer leak: start 22254862336 len 16384
extent buffer leak: start 22273769472 len 16384
extent buffer leak: start 22291742720 len 16384
extent buffer leak: start 22293004288 len 16384
extent buffer leak: start 22295281664 len 16384
extent buffer leak: start 22298443776 len 16384
extent buffer leak: start 22306357248 len 16384
extent buffer leak: start 22338060288 len 16384
extent buffer leak: start 22338535424 len 16384
extent buffer leak: start 22340452352 len 16384
extent buffer leak: start 22341697536 len 16384
extent buffer leak: start 22345318400 len 16384
extent buffer leak: start 22417129472 len 16384
extent buffer leak: start 22479716352 len 16384
extent buffer leak: start 22494543872 len 16384
extent buffer leak: start 22506094592 len 16384
extent buffer leak: start 22507651072 len 16384
extent buffer leak: start 22507732992 len 16384
extent buffer leak: start 22512025600 len 16384
extent buffer leak: start 22512451584 len 16384
extent buffer leak: start 23758307328 len 16384
extent buffer leak: start 23805935616 len 16384
extent buffer leak: start 23877074944 len 16384
extent buffer leak: start 23877828608 len 16384
extent buffer leak: start 23878025216 len 16384
extent buffer leak: start 23881285632 len 16384
extent buffer leak: start 23881400320 len 16384
extent buffer leak: start 23903928320 len 16384
extent buffer leak: start 23931813888 len 16384
extent buffer leak: start 23944495104 len 16384
extent buffer leak: start 24047747072 len 16384
extent buffer leak: start 24070668288 len 16384
extent buffer leak: start 24076992512 len 16384
extent buffer leak: start 24078204928 len 16384
extent buffer leak: start 24080023552 len 16384
extent buffer leak: start 24080580608 len 16384
extent buffer leak: start 24083972096 len 16384
extent buffer leak: start 24125587456 len 16384
extent buffer leak: start 24149475328 len 16384
enabling repair mode
cache and super generation don't match, space cache will be invalidated
Ignoring qgroup relation key 258
Ignoring qgroup relation key 71776119061217538
Counts for qgroup id: 307 are different
our:		referenced 22849331200 referenced compressed 22849331200
disk:		referenced 22849331200 referenced compressed 22849331200
our:		exclusive 2375680 exclusive compressed 2375680
disk:		exclusive 2424832 exclusive compressed 2424832
diff:		exclusive -49152 exclusive compressed -49152
Counts for qgroup id: 313 are different
our:		referenced 22849662976 referenced compressed 22849662976
disk:		referenced 14821355520 referenced compressed 14821355520
diff:		referenced 8028307456 referenced compressed 8028307456
our:		exclusive 1474560 exclusive compressed 1474560
disk:		exclusive 770048 exclusive compressed 770048
diff:		exclusive 704512 exclusive compressed 704512
Counts for qgroup id: 319 are different
our:		referenced 22850076672 referenced compressed 22850076672
disk:		referenced 22850076672 referenced compressed 22850076672
our:		exclusive 2330624 exclusive compressed 2330624
disk:		exclusive 2347008 exclusive compressed 2347008
diff:		exclusive -16384 exclusive compressed -16384
found 27847938080 bytes used err is 0
total csum bytes: 22378356
total tree bytes: 4932501504
total fs tree bytes: 4756373504
total extent tree bytes: 146063360
btree space waste bytes: 735301793
file data blocks allocated: 29639204864
 referenced 29585145856

[-- Attachment #4: 2016-05-18.txt --]
[-- Type: text/plain, Size: 6094 bytes --]

2016-05-18T03:11:26.979+01:00 <warning> kernel: ------------[ cut here ]------------
2016-05-18T03:11:26.979+01:00 <warning> kernel: WARNING: CPU: 1 PID: 326 at /build/linux-FvcHlK/linux-4.4.0/fs/btrfs/extent-tree.c:2927 btrfs_run_delayed_refs+0x26b/0x2a0 [btrfs]()
2016-05-18T03:11:26.979+01:00 <debug> kernel: BTRFS: Transaction aborted (error -17)
2016-05-18T03:11:26.980+01:00 <warning> kernel: Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace fscache ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ts_kmp xt_tcpudp ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 xt_owner xt_conntrack nf_conntrack xt_string xt_set iptable_filter ip_tables x_tables ip_set_hash_ip ip_set_hash_ipportnet ip_set_hash_ipport ip_set_bitmap_port ip_set_hash_net ip_set nfnetlink binfmt_misc kvm_intel kvm irqbypass bridge stp crct10dif_pclmul crc32_pclmul llc aesni_intel aes_x86_64 lrw gf128mul ppdev glue_helper ablk_helper parport_pc parport cryptd joydev input_leds i2c_piix4 serio_raw mac_hid 8250_fintek shpchp sunrpc autofs4 btrfs raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1
2016-05-18T03:11:26.990+01:00 <warning> kernel:  raid0 multipath linear floppy psmouse pata_acpi
2016-05-18T03:11:26.990+01:00 <warning> kernel: CPU: 1 PID: 326 Comm: btrfs-transacti Not tainted 4.4.0-22-generic #40-Ubuntu
2016-05-18T03:11:26.990+01:00 <warning> kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
2016-05-18T03:11:26.990+01:00 <warning> kernel:  0000000000000286 00000000ebde90bd ffff880035dfbd08 ffffffff813e9c53
2016-05-18T03:11:26.990+01:00 <warning> kernel:  ffff880035dfbd50 ffffffffc019a468 ffff880035dfbd40 ffffffff81080fb2
2016-05-18T03:11:26.990+01:00 <warning> kernel:  ffff88007c68ee60 ffff880035c7e800 ffff8800794da9e0 ffffffffffffffff
2016-05-18T03:11:26.990+01:00 <warning> kernel: Call Trace:
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffff813e9c53>] dump_stack+0x63/0x90
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffff81080fb2>] warn_slowpath_common+0x82/0xc0
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffff8108104c>] warn_slowpath_fmt+0x5c/0x80
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffffc0101beb>] btrfs_run_delayed_refs+0x26b/0x2a0 [btrfs]
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffffc0114a22>] ? btrfs_wait_pending_ordered+0x22/0x90 [btrfs]
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffffc0117052>] btrfs_commit_transaction+0x4d2/0xa90 [btrfs]
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffffc0112229>] transaction_kthread+0x229/0x240 [btrfs]
2016-05-18T03:11:26.990+01:00 <warning> kernel:  [<ffffffffc0112000>] ? btrfs_cleanup_transaction+0x570/0x570 [btrfs]
2016-05-18T03:11:26.991+01:00 <warning> kernel:  [<ffffffff810a0588>] kthread+0xd8/0xf0
2016-05-18T03:11:26.991+01:00 <warning> kernel:  [<ffffffff810a04b0>] ? kthread_create_on_node+0x1e0/0x1e0
2016-05-18T03:11:26.991+01:00 <warning> kernel:  [<ffffffff8182568f>] ret_from_fork+0x3f/0x70
2016-05-18T03:11:26.991+01:00 <warning> kernel:  [<ffffffff810a04b0>] ? kthread_create_on_node+0x1e0/0x1e0
2016-05-18T03:11:26.991+01:00 <warning> kernel: ---[ end trace 70356c7d452c6316 ]---
2016-05-18T03:11:26.991+01:00 <crit> kernel: BTRFS: error (device vda1) in btrfs_run_delayed_refs:2927: errno=-17 Object already exists
2016-05-18T03:11:26.991+01:00 <info> kernel: BTRFS info (device vda1): forced readonly
2016-05-18T03:11:26.991+01:00 <warning> kernel: BTRFS warning (device vda1): Skipping commit of aborted transaction.
2016-05-18T03:11:26.991+01:00 <crit> kernel: BTRFS: error (device vda1) in cleanup_transaction:1746: errno=-17 Object already exists


checking extents
checking free space cache
block group 7545552896 has wrong amount of free spacefailed to load free space cache for block group 7545552896
There is no free space entry for 9272557568-9272573952
There is no free space entry for 9272557568-9693036544
cache appears valid but isnt 9156165632
block group 16135487488 has wrong amount of free spacefailed to load free space cache for block group 16135487488
Wanted offset 17758535680, found 17758519296
Wanted offset 17758535680, found 17758519296
cache appears valid but isnt 17746100224
Wanted offset 23651729408, found 23651696640
Wanted offset 23651729408, found 23651696640
cache appears valid but isnt 23651680256
found 24704049179 bytes used err is -22
total csum bytes: 19692648
total tree bytes: 4538777600
total fs tree bytes: 4391075840
total extent tree bytes: 122552320
btree space waste bytes: 664565435
file data blocks allocated: 20176596992
 referenced 20168798208
btrfs-progs v4.0


enabling repair mode
checking extents
Fixed 0 roots.
checking free space cache
cache and super generation don't match, space cache will be invalidated
checking fs roots
checking csums
checking root refs
checking quota groups
Ignoring qgroup relation key 258
Ignoring qgroup relation key 71776119061217538
Counts for qgroup id: 263 are different
our:		referenced 21499777024 referenced compressed 21499777024
disk:		referenced 21499768832 referenced compressed 21499768832
diff:		referenced 8192 referenced compressed 8192
our:		exclusive 21499777024 exclusive compressed 21499777024
disk:		exclusive 21499768832 exclusive compressed 21499768832
diff:		exclusive 8192 exclusive compressed 8192
found 24704053275 bytes used err is 0
total csum bytes: 19692652
total tree bytes: 4538777600
total fs tree bytes: 4391075840
total extent tree bytes: 122552320
btree space waste bytes: 664564234
file data blocks allocated: 20176601088
 referenced 20168802304
btrfs-progs v4.0
extent buffer leak: start 29409280 len 16384
extent buffer leak: start 22362816512 len 16384
extent buffer leak: start 22271852544 len 16384
extent buffer leak: start 550207488 len 16384
extent buffer leak: start 590757888 len 16384
extent buffer leak: start 888635392 len 16384
extent buffer leak: start 893124608 len 16384
extent buffer leak: start 20463452160 len 16384
