All of lore.kernel.org
* btrfsck does not fix
@ 2014-01-03 19:41 Hendrik Friedel
  2014-01-03 23:33 ` Chris Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-03 19:41 UTC (permalink / raw)
  To: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 2002 bytes --]

Hello,

I ran btrfsck on my volume with the repair option. When I re-run it, I 
get the same errors as before.
Attached you will find the output of btrfsck in repair mode.

Are these errors that btrfsck cannot fix?

Regards,
Hendrik


Below some samples:
Extent back ref already exists for 2994950590464 parent 863072366592 root 0

ref mismatch on [32935936 4096] extent item 1, found 2
repair deleting extent record: key 32935936 168 4096
adding new tree backref on start 32935936 len 4096 parent 2994784206848 
root 2994784206848
Incorrect global backref count on 32935936 found 1 wanted 2
backpointer mismatch on [32935936 4096]
ref mismatch on [32997376 4096] extent item 1, found 2
repair deleting extent record: key 32997376 168 4096
adding new tree backref on start 32997376 len 4096 parent 2994824708096 
root 2994824708096
Incorrect global backref count on 32997376 found 1 wanted 2
backpointer mismatch on [32997376 4096]

Incorrect global backref count on 8988365651968 found 1 wanted 0
backpointer mismatch on [8988365651968 4096]
repaired damaged extent references
checking free space cache
checking fs roots
root 256 inode 9579 errors 100, file extent discount
root 256 inode 9580 errors 100, file extent discount
root 256 inode 14258 errors 100, file extent discount
root 256 inode 14259 errors 100, file extent discount
root 4444 inode 9579 errors 100, file extent discount
root 4444 inode 9580 errors 100, file extent discount
root 4444 inode 14258 errors 100, file extent discount
root 4444 inode 14259 errors 100, file extent discount
enabling repair mode
Checking filesystem on /dev/sdc1
UUID: 989306aa-d291-4752-8477-0baf94f8c42f
cache and super generation don't match, space cache will be invalidated
found 827360733827 bytes used err is 1
total csum bytes: 4446455380
total tree bytes: 5506977792
total fs tree bytes: 137293824
total extent tree bytes: 258691072
btree space waste bytes: 496921489
file data blocks allocated: 6440132583424
  referenced 6424163344384
Btrfs v3.12
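The "ref mismatch" and "backpointer mismatch" lines above have a regular shape, so the output of two check runs can be compared mechanically to confirm that --repair changed nothing. A minimal sketch (assuming only the line formats visible above; this is not a btrfs-progs tool):

```python
import re

def extent_errors(check_output):
    """Collect (kind, start, length) tuples for the extent-level
    complaints in btrfsck output, e.g.
    'backpointer mismatch on [32935936 4096]'."""
    pat = re.compile(r"(ref mismatch|backpointer mismatch) on \[(\d+) (\d+)\]")
    return {(m.group(1), int(m.group(2)), int(m.group(3)))
            for m in pat.finditer(check_output)}

# Toy excerpts standing in for the output of two consecutive runs:
run1 = ("ref mismatch on [32935936 4096] extent item 1, found 2\n"
        "backpointer mismatch on [32935936 4096]\n")
run2 = "backpointer mismatch on [32935936 4096]\n"

# Errors present in both runs, i.e. not fixed in between:
print(sorted(extent_errors(run1) & extent_errors(run2)))
# → [('backpointer mismatch', 32935936, 4096)]
```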





[-- Attachment #2: allout.zip --]
[-- Type: application/zip, Size: 33903 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-03 19:41 btrfsck does not fix Hendrik Friedel
@ 2014-01-03 23:33 ` Chris Murphy
  2014-01-04 21:21   ` Hendrik Friedel
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-01-03 23:33 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: linux-btrfs


On Jan 3, 2014, at 12:41 PM, Hendrik Friedel <hendrik@friedels.name> wrote:

> Hello,
> 
> I ran btrfsck on my volume with the repair option. When I re-run it, I get the same errors as before.

Did you try mounting with -o recovery first?
https://btrfs.wiki.kernel.org/index.php/Problem_FAQ

What messages do you get in dmesg when you use recovery?


Chris Murphy


* Re: btrfsck does not fix
  2014-01-03 23:33 ` Chris Murphy
@ 2014-01-04 21:21   ` Hendrik Friedel
  2014-01-05 13:36     ` Hendrik Friedel
  2014-01-05 16:55     ` Chris Murphy
  0 siblings, 2 replies; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-04 21:21 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

Hi Chris,


>> I ran btrfsck on my volume with the repair option. When I re-run it,
>> I get the same errors as before.
>
> Did you try mounting with -o recovery first?
> https://btrfs.wiki.kernel.org/index.php/Problem_FAQ

No, I did not.
In fact, I had visited the FAQ before, and my understanding was that -o 
recovery is needed when mounting is impossible. That is not the case 
here: the disk works without obvious problems.

> What messages do you get in dmesg when you use recovery?

I'll find out, tomorrow (I can't access the disk just now).

Greetings,
Hendrik


-- 
Hendrik Friedel
Auf dem Brink 12
28844 Weyhe
Mobil 0178 1874363


* Re: btrfsck does not fix
  2014-01-04 21:21   ` Hendrik Friedel
@ 2014-01-05 13:36     ` Hendrik Friedel
  2014-01-05 16:55     ` Chris Murphy
  1 sibling, 0 replies; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-05 13:36 UTC (permalink / raw)
  To: Chris Murphy, linux-btrfs@vger.kernel.org >> linux-btrfs

Hello,


>> What messages do you get in dmesg when you use recovery?
>
> I'll find out, tomorrow (I can't access the disk just now).

Here it is:
[90098.989872] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 transid 162460 /dev/sdc1

That's all. The same in the syslog.

Do you have further suggestions to fix the file-system?

Regards,
Hendrik


* Re: btrfsck does not fix
  2014-01-04 21:21   ` Hendrik Friedel
  2014-01-05 13:36     ` Hendrik Friedel
@ 2014-01-05 16:55     ` Chris Murphy
  2014-01-07 20:38       ` Hendrik Friedel
  1 sibling, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-01-05 16:55 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: linux-btrfs


On Jan 4, 2014, at 2:21 PM, Hendrik Friedel <hendrik@friedels.name> wrote:

> Hi Chris,
> 
> 
> >> I ran btrfsck on my volume with the repair option. When I re-run it, I get the same errors as before.
>> 
>> Did you try mounting with -o recovery first?
>> https://btrfs.wiki.kernel.org/index.php/Problem_FAQ
> 
> No, I did not.
> In fact, I had visited the FAQ before, and my understanding was, that -o recovery was used/needed when mounting is impossible. This is not the case. In fact, the disk does work without obvious problems.

It mounts without errors? So why then btrfsck/btrfs repair? What precipitated the repair?

If mount option -o recovery is used, dmesg should report 'btrfs: enabling auto recovery', and I think you're right: if it's mounting OK, then recovery probably isn't applicable. Can you just do a btrfs check <dev> and report the results? It seems repair can sometimes make problems worse.


Chris Murphy



* Re: btrfsck does not fix
  2014-01-05 16:55     ` Chris Murphy
@ 2014-01-07 20:38       ` Hendrik Friedel
  2014-01-10 23:53         ` Hendrik Friedel
  0 siblings, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-07 20:38 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

Hello,

>>>> I ran btrfsck on my volume with the repair option. When I re-run it,
>>>> I get the same errors as before.
> It mounts without errors? So why then btrfsck/btrfs repair? What precipitated the repair?

I don't know what caused the damage, but a check revealed this:

Checking filesystem on /dev/sdb1
UUID: 989306aa-d291-4752-8477-0baf94f8c42f
checking extents

Extent back ref already exists for 2994950590464 parent 863072366592 root 0
Extent back ref already exists for 2994950836224 parent 863072366592 root 0
Extent back ref already exists for 862762737664 parent 863072366592 root 0
Extent back ref already exists for 2994950877184 parent 863072366592
[...]
Incorrect global backref count on 2995767250944 found 1 wanted 2
backpointer mismatch on [2995767250944 4096]
ref mismatch on [2995767304192 4096] extent item 1, found 2
Incorrect global backref count on 2995767304192 found 1 wanted 2
backpointer mismatch on [2995767304192 4096]
ref mismatch on [2995768258560 4096] extent item 1, found 2
Incorrect global backref count on 2995768258560 found 1 wanted 2
backpointer mismatch on [2995768258560 4096]
ref mismatch on [2995768459264 4096] extent item 1, found 2
Incorrect global backref count on 2995768459264 found 1 wanted 2
backpointer mismatch on [2995768459264 4096]
Errors found in extent allocation tree or chunk allocation

ref mismatch on [2995768459264 4096] extent item 1, found 2
Incorrect global backref count on 2995768459264 found 1 wanted 2
backpointer mismatch on [2995768459264 4096]
Errors found in extent allocation tree or chunk allocation
checking free space cache
checking fs roots
root 256 inode 9579 errors 100, file extent discount
root 256 inode 9580 errors 100, file extent discount
root 256 inode 14258 errors 100, file extent discount
root 256 inode 14259 errors 100, file extent discount
root 4444 inode 9579 errors 100, file extent discount
root 4444 inode 9580 errors 100, file extent discount
root 4444 inode 14258 errors 100, file extent discount
root 4444 inode 14259 errors 100, file extent discount
found 1993711951581 bytes used err is 1
total csum bytes: 4560615360
total tree bytes: 5643403264
total fs tree bytes: 139776000
total extent tree bytes: 263602176
btree space waste bytes: 504484726
file data blocks allocated: 6557032402944
  referenced 6540949323776
Btrfs v3.12

This made me run btrfsck with the repair option:


Extent back ref already exists for 2994950590464 parent 863072366592 root 0

ref mismatch on [32935936 4096] extent item 1, found 2
repair deleting extent record: key 32935936 168 4096
adding new tree backref on start 32935936 len 4096 parent 2994784206848 
root 2994784206848
Incorrect global backref count on 32935936 found 1 wanted 2
backpointer mismatch on [32935936 4096]
ref mismatch on [32997376 4096] extent item 1, found 2
repair deleting extent record: key 32997376 168 4096
adding new tree backref on start 32997376 len 4096 parent 2994824708096 
root 2994824708096
Incorrect global backref count on 32997376 found 1 wanted 2
backpointer mismatch on [32997376 4096]

Incorrect global backref count on 8988365651968 found 1 wanted 0
backpointer mismatch on [8988365651968 4096]
repaired damaged extent references
checking free space cache
checking fs roots
root 256 inode 9579 errors 100, file extent discount
root 256 inode 9580 errors 100, file extent discount
root 256 inode 14258 errors 100, file extent discount
root 256 inode 14259 errors 100, file extent discount
root 4444 inode 9579 errors 100, file extent discount
root 4444 inode 9580 errors 100, file extent discount
root 4444 inode 14258 errors 100, file extent discount
root 4444 inode 14259 errors 100, file extent discount
enabling repair mode
Checking filesystem on /dev/sdc1
UUID: 989306aa-d291-4752-8477-0baf94f8c42f
cache and super generation don't match, space cache will be invalidated
found 827360733827 bytes used err is 1
total csum bytes: 4446455380
total tree bytes: 5506977792
total fs tree bytes: 137293824
total extent tree bytes: 258691072
btree space waste bytes: 496921489
file data blocks allocated: 6440132583424
  referenced 6424163344384
Btrfs v3.12


After this, I ran a check without the repair option again and the same 
errors persist.
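The per-inode "errors 100" field in that output is a bitmask printed in hexadecimal; 0x100 corresponds to the "file extent discount" bit, meaning the file's extents do not cover its full size. A decoding sketch (bit positions taken from the btrfs-progs cmds-check.c source of roughly this era; only a subset of bits is listed, so treat the table as an assumption):

```python
# Subset of btrfsck's inode error bits (assumption: values as in
# btrfs-progs cmds-check.c around v3.12; the field is printed in hex).
INODE_ERR_BITS = {
    1 << 0: "no inode item",
    1 << 5: "odd file extent",
    1 << 8: "file extent discount",
    1 << 9: "dir isize wrong",
    1 << 10: "file nbytes wrong",
    1 << 13: "link count wrong",
}

def decode_inode_errors(hex_field):
    """Decode the 'errors <hex>' field of a 'root ... inode ...' line."""
    value = int(hex_field, 16)
    return [name for bit, name in sorted(INODE_ERR_BITS.items())
            if value & bit]

print(decode_inode_errors("100"))  # the value shown for every inode above
# → ['file extent discount']
```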

Greetings,
Hendrik


-- 
Hendrik Friedel
Auf dem Brink 12
28844 Weyhe
Mobil 0178 1874363


* Re: btrfsck does not fix
  2014-01-07 20:38       ` Hendrik Friedel
@ 2014-01-10 23:53         ` Hendrik Friedel
  2014-01-11  1:05           ` Chris Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-10 23:53 UTC (permalink / raw)
  To: linux-btrfs

Hello,

I was wondering whether I am doing something wrong in how, or in what, 
I am asking.
My understanding is that btrfsck is not yet able to fix this error, so 
I am surprised that no one seems to be interested in this.

Regards,
Hendrik Friedel

Am 07.01.2014 21:38, schrieb Hendrik Friedel:
> Hello,
>
> >>>> I ran btrfsck on my volume with the repair option. When I re-run
> >>>> it, I get the same errors as before.
>> It mounts without errors? So why then btrfsck/btrfs repair? What
>> precipitated the repair?
>
> I don't know what caused the damage, but a check revealed this:
>
> Checking filesystem on /dev/sdb1
> UUID: 989306aa-d291-4752-8477-0baf94f8c42f
> checking extents
>
> Extent back ref already exists for 2994950590464 parent 863072366592 root 0
> Extent back ref already exists for 2994950836224 parent 863072366592 root 0
> Extent back ref already exists for 862762737664 parent 863072366592 root 0
> Extent back ref already exists for 2994950877184 parent 863072366592
> [...]
> Incorrect global backref count on 2995767250944 found 1 wanted 2
> backpointer mismatch on [2995767250944 4096]
> ref mismatch on [2995767304192 4096] extent item 1, found 2
> Incorrect global backref count on 2995767304192 found 1 wanted 2
> backpointer mismatch on [2995767304192 4096]
> ref mismatch on [2995768258560 4096] extent item 1, found 2
> Incorrect global backref count on 2995768258560 found 1 wanted 2
> backpointer mismatch on [2995768258560 4096]
> ref mismatch on [2995768459264 4096] extent item 1, found 2
> Incorrect global backref count on 2995768459264 found 1 wanted 2
> backpointer mismatch on [2995768459264 4096]
> Errors found in extent allocation tree or chunk allocation
>
> ref mismatch on [2995768459264 4096] extent item 1, found 2
> Incorrect global backref count on 2995768459264 found 1 wanted 2
> backpointer mismatch on [2995768459264 4096]
> Errors found in extent allocation tree or chunk allocation
> checking free space cache
> checking fs roots
> root 256 inode 9579 errors 100, file extent discount
> root 256 inode 9580 errors 100, file extent discount
> root 256 inode 14258 errors 100, file extent discount
> root 256 inode 14259 errors 100, file extent discount
> root 4444 inode 9579 errors 100, file extent discount
> root 4444 inode 9580 errors 100, file extent discount
> root 4444 inode 14258 errors 100, file extent discount
> root 4444 inode 14259 errors 100, file extent discount
> found 1993711951581 bytes used err is 1
> total csum bytes: 4560615360
> total tree bytes: 5643403264
> total fs tree bytes: 139776000
> total extent tree bytes: 263602176
> btree space waste bytes: 504484726
> file data blocks allocated: 6557032402944
>   referenced 6540949323776
> Btrfs v3.12
>
> This made me run btrfsck with the repair option:
>
>
> Extent back ref already exists for 2994950590464 parent 863072366592 root 0
>
> ref mismatch on [32935936 4096] extent item 1, found 2
> repair deleting extent record: key 32935936 168 4096
> adding new tree backref on start 32935936 len 4096 parent 2994784206848
> root 2994784206848
> Incorrect global backref count on 32935936 found 1 wanted 2
> backpointer mismatch on [32935936 4096]
> ref mismatch on [32997376 4096] extent item 1, found 2
> repair deleting extent record: key 32997376 168 4096
> adding new tree backref on start 32997376 len 4096 parent 2994824708096
> root 2994824708096
> Incorrect global backref count on 32997376 found 1 wanted 2
> backpointer mismatch on [32997376 4096]
>
> Incorrect global backref count on 8988365651968 found 1 wanted 0
> backpointer mismatch on [8988365651968 4096]
> repaired damaged extent references
> checking free space cache
> checking fs roots
> root 256 inode 9579 errors 100, file extent discount
> root 256 inode 9580 errors 100, file extent discount
> root 256 inode 14258 errors 100, file extent discount
> root 256 inode 14259 errors 100, file extent discount
> root 4444 inode 9579 errors 100, file extent discount
> root 4444 inode 9580 errors 100, file extent discount
> root 4444 inode 14258 errors 100, file extent discount
> root 4444 inode 14259 errors 100, file extent discount
> enabling repair mode
> Checking filesystem on /dev/sdc1
> UUID: 989306aa-d291-4752-8477-0baf94f8c42f
> cache and super generation don't match, space cache will be invalidated
> found 827360733827 bytes used err is 1
> total csum bytes: 4446455380
> total tree bytes: 5506977792
> total fs tree bytes: 137293824
> total extent tree bytes: 258691072
> btree space waste bytes: 496921489
> file data blocks allocated: 6440132583424
>   referenced 6424163344384
> Btrfs v3.12
>
>
> After this, I ran a check without the repair option again and the same
> errors persist.
>
> Greetings,
> Hendrik
>
>


-- 
Hendrik Friedel
Auf dem Brink 12
28844 Weyhe
Mobil 0178 1874363


* Re: btrfsck does not fix
  2014-01-10 23:53         ` Hendrik Friedel
@ 2014-01-11  1:05           ` Chris Murphy
  2014-01-12 22:31             ` Hendrik Friedel
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-01-11  1:05 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: linux-btrfs


On Jan 10, 2014, at 4:53 PM, Hendrik Friedel <hendrik@friedels.name> wrote:

> Hello,
> 
> I was wondering whether I am doing something wrong in how, or in what, I am asking.
> My understanding is that btrfsck is not yet able to fix this error, so I am surprised that no one seems to be interested in this.


I don't know what the error messages mean, or why they aren't being fixed. It's known btrfs repair doesn't fix everything yet and can sometimes make problems worse.

Kernel version?
It mounts OK with no kernel messages?

What do you get for:
btrfs fi show
btrfs fi df <mp>

btrfs device stats </dev or mountpoint>

If there are any errors reported for device stats, what do you get for smartctl -x /dev/X


Chris Murphy


* Re: btrfsck does not fix
  2014-01-11  1:05           ` Chris Murphy
@ 2014-01-12 22:31             ` Hendrik Friedel
  2014-01-14  0:40               ` Chris Murphy
  2014-01-19 19:37               ` Martin Steigerwald
  0 siblings, 2 replies; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-12 22:31 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

Hello,

> Kernel version?
3.12.0-031200-generic


> It mounts OK with no kernel messages?

Yes. Here I mount the three subvolumes:
dmesg:
[105152.392900] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 1 
                         transid 164942 /dev/sdb1
[105152.394332] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105152.394663] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 1 
                         transid 164942 /dev/sdb1
[105152.394759] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105152.394845] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105152.395941] btrfs: disk space caching is enabled
[105195.320249] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 1 
                         transid 164942 /dev/sdb1
[105195.320256] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105195.320263] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 1 
                         transid 164942 /dev/sdb1
[105195.320290] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105195.320308] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105208.832997] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 1 
                         transid 164942 /dev/sdb1
[105208.833005] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105208.833026] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 1 
                         transid 164942 /dev/sdb1
[105208.833030] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1
[105208.833032] btrfs: device fsid 989306aa-d291-4752-8477-0baf94f8c42f 
devid 2 
                         transid 164942 /dev/sdc1

Syslog:
Jan 12 23:25:43 homeserver kernel: [105152.392900] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 1 transid 164942 /dev/sdb1
Jan 12 23:25:43 homeserver kernel: [105152.394332] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:25:43 homeserver kernel: [105152.394663] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 1 transid 164942 /dev/sdb1
Jan 12 23:25:43 homeserver kernel: [105152.394759] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:25:43 homeserver kernel: [105152.394845] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:25:43 homeserver kernel: [105152.395941] btrfs: disk space 
caching is enabled
Jan 12 23:26:26 homeserver kernel: [105195.320249] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 1 transid 164942 /dev/sdb1
Jan 12 23:26:26 homeserver kernel: [105195.320256] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:26:26 homeserver kernel: [105195.320263] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 1 transid 164942 /dev/sdb1
Jan 12 23:26:26 homeserver kernel: [105195.320290] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:26:26 homeserver kernel: [105195.320308] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:26:39 homeserver kernel: [105208.832997] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 1 transid 164942 /dev/sdb1
Jan 12 23:26:39 homeserver kernel: [105208.833005] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:26:39 homeserver kernel: [105208.833026] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 1 transid 164942 /dev/sdb1
Jan 12 23:26:39 homeserver kernel: [105208.833030] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1
Jan 12 23:26:39 homeserver kernel: [105208.833032] btrfs: device fsid 
989306aa-d291-4752-8477-0baf94f8c42f devid 2 transid 164942 /dev/sdc1


> What do you get for:
> btrfs fi show
./btrfs/btrfs-progs/btrfs fi show
Label: none  uuid: 989306aa-d291-4752-8477-0baf94f8c42f
         Total devices 2 FS bytes used 4.37TiB
         devid    1 size 2.73TiB used 2.73TiB path /dev/sdb1
         devid    2 size 2.73TiB used 2.73TiB path /dev/sdc1

Btrfs v3.12

> btrfs fi df <mp>
./btrfs/btrfs-progs/btrfs fi df /mnt/BTRFS/rsnapshot/
Data, RAID0: total=5.45TiB, used=4.37TiB
System, RAID1: total=8.00MiB, used=396.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, RAID1: total=6.00GiB, used=5.41GiB
Metadata, single: total=8.00MiB, used=0.00
(the same for all subvolumes)
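Note that fi show reports both devices fully allocated (used 2.73TiB of a 2.73TiB size each), while fi df shows the Data chunks themselves only 4.37TiB full out of 5.45TiB, so the free space lives inside already-allocated chunks. That arithmetic can be pulled straight out of the fi df text. A parsing sketch (assuming the v3.12 output format shown above):

```python
import re

UNITS = {"KiB": 1024, "MiB": 1024**2, "GiB": 1024**3, "TiB": 1024**4}

def parse_fi_df(text):
    """Map 'Profile, level' -> (total_bytes, used_bytes) from 'btrfs fi df'."""
    pat = re.compile(r"^(.+): total=([\d.]+)(\w*), used=([\d.]+)(\w*)$")
    out = {}
    for line in text.splitlines():
        m = pat.match(line.strip())
        if m:
            out[m.group(1)] = (float(m.group(2)) * UNITS.get(m.group(3), 1),
                               float(m.group(4)) * UNITS.get(m.group(5), 1))
    return out

sample = """Data, RAID0: total=5.45TiB, used=4.37TiB
Metadata, RAID1: total=6.00GiB, used=5.41GiB"""

info = parse_fi_df(sample)
total, used = info["Data, RAID0"]
print(f"data slack: {(total - used) / 1024**4:.2f} TiB")
# → data slack: 1.08 TiB
```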

> btrfs device stats </dev or mountpoint>
./btrfs/btrfs-progs/btrfs device stats /mnt/BTRFS/backups/
[/dev/sdb1].write_io_errs   0
[/dev/sdb1].read_io_errs    0
[/dev/sdb1].flush_io_errs   0
[/dev/sdb1].corruption_errs 0
[/dev/sdb1].generation_errs 0
[/dev/sdc1].write_io_errs   0
[/dev/sdc1].read_io_errs    0
[/dev/sdc1].flush_io_errs   0
[/dev/sdc1].corruption_errs 0
[/dev/sdc1].generation_errs 0


Greetings and thanks for your help!

Hendrik


Appendix:

> If there are any errors reported for device stats, what do you get for smartctl -x /dev/X
There are none, but for completeness:

smartctl -x /dev/sdc1
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.12.0-031200-generic] 
(local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.14 (AF)
Device Model:     ST3000DM001-9YN166
Serial Number:    Z1F0HAZN
LU WWN Device Id: 5 000c50 03fec37b4
Firmware Version: CC4B
User Capacity:    3.000.592.982.016 bytes [3,00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Sun Jan 12 23:30:28 2014 CET

==> WARNING: A firmware update for this drive may be available,
see the following Seagate web pages:
http://knowledge.seagate.com/articles/en_US/FAQ/207931en
http://knowledge.seagate.com/articles/en_US/FAQ/223651en

SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                         was never started.
                                         Auto Offline Data Collection: 
Disabled.
Self-test execution status:      (   0) The previous self-test routine 
completed
                                         without error or no self-test 
has ever
                                         been run.
Total time to complete Offline
data collection:                (  575) seconds.
Offline data collection
capabilities:                    (0x73) SMART execute Offline immediate.
                                         Auto Offline data collection 
on/off support.
                                         Suspend Offline collection upon new
                                         command.
                                         No Offline surface scan supported.
                                         Self-test supported.
                                         Conveyance Self-test supported.
                                         Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                         power-saving mode.
                                         Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                         General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 255) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x3085) SCT Status supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
   1 Raw_Read_Error_Rate     POSR--   119   099   006    -    222649736
   3 Spin_Up_Time            PO----   092   092   000    -    0
   4 Start_Stop_Count        -O--CK   095   095   020    -    5128
   5 Reallocated_Sector_Ct   PO--CK   100   100   036    -    0
   7 Seek_Error_Rate         POSR--   076   057   030    -    13026610297
   9 Power_On_Hours          -O--CK   068   068   000    -    28432
  10 Spin_Retry_Count        PO--C-   100   100   097    -    0
  12 Power_Cycle_Count       -O--CK   100   100   020    -    167
183 Runtime_Bad_Block       -O--CK   100   100   000    -    0
184 End-to-End_Error        -O--CK   100   100   099    -    0
187 Reported_Uncorrect      -O--CK   100   100   000    -    0
188 Command_Timeout         -O--CK   100   099   000    -    2 2 2
189 High_Fly_Writes         -O-RCK   092   092   000    -    8
190 Airflow_Temperature_Cel -O---K   060   046   045    -    40 (0 10 47 26)
191 G-Sense_Error_Rate      -O--CK   100   100   000    -    0
192 Power-Off_Retract_Count -O--CK   100   100   000    -    148
193 Load_Cycle_Count        -O--CK   095   095   000    -    11707
194 Temperature_Celsius     -O---K   040   054   000    -    40 (0 15 0 0)
197 Current_Pending_Sector  -O--C-   100   100   000    -    0
198 Offline_Uncorrectable   ----C-   100   100   000    -    0
199 UDMA_CRC_Error_Count    -OSRCK   200   200   000    -    0
240 Head_Flying_Hours       ------   100   253   000    - 
10383h+50m+39.165s
241 Total_LBAs_Written      ------   100   253   000    -    80975340592299
242 Total_LBAs_Read         ------   100   253   000    -    194401514224638
                             ||||||_ K auto-keep
                             |||||__ C event count
                             ||||___ R error rate
                             |||____ S speed/performance
                             ||_____ O updated online
                             |______ P prefailure warning
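The huge Raw_Read_Error_Rate and Seek_Error_Rate raw values above look alarming, but on these Seagate drives they are widely interpreted as packed counters rather than plain error counts: the upper 16 bits of a 48-bit value hold the errors, the lower 32 bits the operations. That encoding is community folklore, not Seagate documentation, so the sketch below is only a heuristic:

```python
def split_seagate_raw(raw):
    """Split a 48-bit Seagate raw SMART value into (errors, operations),
    per the commonly cited (unofficial) encoding."""
    return raw >> 32, raw & 0xFFFFFFFF

# Raw values from the attribute table above:
print(split_seagate_raw(222649736))    # Raw_Read_Error_Rate → (0, 222649736)
print(split_seagate_raw(13026610297))  # Seek_Error_Rate → (3, 141708409)
```

Both attributes' normalized values (119 and 076) are well above their thresholds, consistent with a healthy drive.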

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
GP/S  Log at address 0x00 has    1 sectors [Log Directory]
SMART Log at address 0x01 has    1 sectors [Summary SMART error log]
SMART Log at address 0x02 has    5 sectors [Comprehensive SMART error log]
GP    Log at address 0x03 has    5 sectors [Ext. Comprehensive SMART 
error log]
SMART Log at address 0x06 has    1 sectors [SMART self-test log]
GP    Log at address 0x07 has    1 sectors [Extended self-test log]
SMART Log at address 0x09 has    1 sectors [Selective self-test log]
GP    Log at address 0x10 has    1 sectors [NCQ Command Error]
GP    Log at address 0x11 has    1 sectors [SATA Phy Event Counters]
GP    Log at address 0x21 has    1 sectors [Write stream error log]
GP    Log at address 0x22 has    1 sectors [Read stream error log]
GP/S  Log at address 0x80 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x81 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x82 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x83 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x84 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x85 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x86 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x87 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x88 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x89 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x8a has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x8b has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x8c has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x8d has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x8e has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x8f has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x90 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x91 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x92 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x93 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x94 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x95 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x96 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x97 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x98 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x99 has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x9a has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x9b has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x9c has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x9d has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x9e has   16 sectors [Host vendor specific log]
GP/S  Log at address 0x9f has   16 sectors [Host vendor specific log]
GP/S  Log at address 0xa1 has   20 sectors [Device vendor specific log]
GP    Log at address 0xa2 has 4496 sectors [Device vendor specific log]
GP/S  Log at address 0xa8 has   20 sectors [Device vendor specific log]
GP/S  Log at address 0xa9 has    1 sectors [Device vendor specific log]
GP    Log at address 0xab has    1 sectors [Device vendor specific log]
GP    Log at address 0xb0 has 5067 sectors [Device vendor specific log]
GP    Log at address 0xbd has  512 sectors [Device vendor specific log]
GP    Log at address 0xbe has 65535 sectors [Device vendor specific log]
GP    Log at address 0xbf has 65535 sectors [Device vendor specific log]
GP/S  Log at address 0xc0 has    1 sectors [Device vendor specific log]
GP/S  Log at address 0xe0 has    1 sectors [SCT Command/Status]
GP/S  Log at address 0xe1 has    1 sectors [SCT Data Transfer]

SMART Extended Comprehensive Error Log Version: 1 (5 sectors)
No Errors Logged

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%     28411         -
# 2  Extended offline    Completed without error       00%     28394         -
# 3  Short offline       Completed without error       00%     28387         -
# 4  Short offline       Completed without error       00%     28363         -
# 5  Short offline       Completed without error       00%     28339         -
# 6  Short offline       Completed without error       00%     28316         -
# 7  Short offline       Completed without error       00%     28292         -
# 8  Short offline       Completed without error       00%     28268         -
# 9  Short offline       Completed without error       00%     28243         -
#10  Extended offline    Completed without error       00%     28226         -
#11  Short offline       Completed without error       00%     28220         -
#12  Short offline       Completed without error       00%     28196         -
#13  Short offline       Completed without error       00%     28172         -
#14  Short offline       Completed without error       00%     28148         -
#15  Short offline       Completed without error       00%     28124         -
#16  Short offline       Completed without error       00%     28100         -
#17  Short offline       Completed without error       00%     28076         -
#18  Extended offline    Completed without error       00%     28058         -
#19  Short offline       Completed without error       00%     28052         -

SMART Selective self-test log data structure revision number 1
  SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
     1        0        0  Not_testing
     2        0        0  Not_testing
     3        0        0  Not_testing
     4        0        0  Not_testing
     5        0        0  Not_testing
Selective self-test flags (0x0):
   After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Warning: device does not support SCT Data Table command
Warning: device does not support SCT Error Recovery Control command
SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x000a  2            3  Device-to-host register FISes sent due to a COMRESET
0x0001  2            0  Command failed due to ICRC error
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-12 22:31             ` Hendrik Friedel
@ 2014-01-14  0:40               ` Chris Murphy
  2014-01-14  6:03                 ` Duncan
  2014-01-14  8:16                 ` Hugo Mills
  2014-01-19 19:37               ` Martin Steigerwald
  1 sibling, 2 replies; 36+ messages in thread
From: Chris Murphy @ 2014-01-14  0:40 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: linux-btrfs

I see no obvious hardware source for the problem: the SATA Phy Event Counters are all zero except for a few COMRESET events, which is pretty minor. No ICRC or UDMA-CRC errors recorded. No reallocated or pending bad sectors.

I'd look at your historical system logs, messages or journalctl, and do a case-insensitive search for: err, unc, reset, and see if anything along those lines has been reported that might explain it.
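That kind of search can be sketched with grep; the log lines below are made-up examples (not from Hendrik's system), just to show the pattern:

```shell
# Case-insensitive search for err/unc/reset over a hypothetical log
# excerpt; a real check would read /var/log/messages or pipe in
# `journalctl` output instead of this here-document.
matches=$(grep -icE 'err|unc|reset' <<'EOF'
Jan 03 10:00:01 host kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Jan 03 10:00:02 host kernel: ata3.00: exception Emask 0x10 SAct 0x0 SErr 0x4010000
Jan 03 10:00:03 host kernel: ata3: hard resetting link
Jan 03 10:00:04 host kernel: end_request: I/O error, dev sdc, sector 123456
EOF
)
echo "$matches"   # number of lines matching at least one pattern
```

Here the SErr, "hard resetting link", and "I/O error" lines match, so it prints 3.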

If you decide to back up, reformat, and restore, then first I suggest

btrfs-image -c 9-t 4 /dev/sdX /mnt/<pathtoanothervolume+filename>

You can keep it handy in case a dev asks for it or you can attach it to a kernel.org bug report.

Chris Murphy

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-14  0:40               ` Chris Murphy
@ 2014-01-14  6:03                 ` Duncan
  2014-01-14  7:49                   ` Chris Murphy
  2014-01-14  8:16                 ` Hugo Mills
  1 sibling, 1 reply; 36+ messages in thread
From: Duncan @ 2014-01-14  6:03 UTC (permalink / raw)
  To: linux-btrfs

Chris Murphy posted on Mon, 13 Jan 2014 17:40:33 -0700 as excerpted:

> If you decide to backup, reformat, restore, then first I suggest
> 
> btrfs-image -c 9-t 4 /dev/sdX /mnt/<pathtoanothervolume+filename>
> 
> You can keep it handy in case a dev asks for it or you can attach it to
> a kernel.org bug report.

I've not actually done one of these, yet.  How big are they?  The manpage 
says it's basically the metadata only (data is zeroed), so I'd guess one 
could expect it to be about the size of metadata as reported by btrfs fi 
df (obviously, when it could still mount if it's now unmountable), right?

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-14  6:03                 ` Duncan
@ 2014-01-14  7:49                   ` Chris Murphy
  2014-01-14  9:30                     ` Duncan
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-01-14  7:49 UTC (permalink / raw)
  To: Btrfs BTRFS


On Jan 13, 2014, at 11:03 PM, Duncan <1i5t5.duncan@cox.net> wrote:

> Chris Murphy posted on Mon, 13 Jan 2014 17:40:33 -0700 as excerpted:
> 
>> If you decide to backup, reformat, restore, then first I suggest
>> 
>> btrfs-image -c 9-t 4 /dev/sdX /mnt/<pathtoanothervolume+filename>
>> 
>> You can keep it handy in case a dev asks for it or you can attach it to
>> a kernel.org bug report.
> 
> I've not actually done one of these, yet.  How big are they?  The manpage 
> says it's basically the metadata only (data is zeroed), so I'd guess one 
> could expect it to be about the size of metadata as reported by btrfs fi 
> df (obviously, when it could still mount if it's now unmountable), right?

# btrfs fi df /mnt/isos
Data, single: total=196.01GiB, used=185.92GiB
System, DUP: total=8.00MiB, used=48.00KiB
System, single: total=4.00MiB, used=0.00
Metadata, DUP: total=1.00GiB, used=289.53MiB
Metadata, single: total=8.00MiB, used=0.00

193M -rw-r--r--. 1 root root 193M Jan 14 00:44 sdb.img


Guess that's too big to attach to a bugzilla bug.


Chris Murphy


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-14  0:40               ` Chris Murphy
  2014-01-14  6:03                 ` Duncan
@ 2014-01-14  8:16                 ` Hugo Mills
  1 sibling, 0 replies; 36+ messages in thread
From: Hugo Mills @ 2014-01-14  8:16 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Hendrik Friedel, linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1350 bytes --]

On Mon, Jan 13, 2014 at 05:40:33PM -0700, Chris Murphy wrote:
> I see no obvious hardware source for the problem, SATA Phy Event Counters are zeros except a few COMRESET events which is pretty minor. No ICRC or UDMA-CRC errors recorded. No reallocated or pending bad sectors.
> 
> I'd look at your historical system logs, messages or journalctl, and do case insensitive search for: err, unc, reset and see if anything along those lines has been reported that might explain it.
> 
> If you decide to backup, reformat, restore, then first I suggest 
> 
> btrfs-image -c 9-t 4 /dev/sdX /mnt/<pathtoanothervolume+filename>

btrfs-image -c 9 -t 4 /dev/sdX /mnt/<pathtoanothervolume+filename>

   Space between the 9 and the -t.

   Hugo.

> You can keep it handy in case a dev asks for it or you can attach it to a kernel.org bug report.
> 
> Chris Murphy--
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
    --- I think that everything darkling says is actually a joke. ---    
         It's just that we haven't worked out most of them yet.          

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 828 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-14  7:49                   ` Chris Murphy
@ 2014-01-14  9:30                     ` Duncan
  2014-01-14  9:38                       ` Hugo Mills
  2014-01-14 17:17                       ` Chris Murphy
  0 siblings, 2 replies; 36+ messages in thread
From: Duncan @ 2014-01-14  9:30 UTC (permalink / raw)
  To: linux-btrfs

Chris Murphy posted on Tue, 14 Jan 2014 00:49:13 -0700 as excerpted:

> On Jan 13, 2014, at 11:03 PM, Duncan <1i5t5.duncan@cox.net> wrote:
> 
>> Chris Murphy posted on Mon, 13 Jan 2014 17:40:33 -0700 as excerpted:
>> 
>>> btrfs-image -c 9-t 4 /dev/sdX /mnt/<pathtoanothervolume+filename>
>>> 
>>> You can keep it handy in case a dev asks for it or you can attach it
>>> to a kernel.org bug report.
>> 
>> How big are they?  manpage says it's metadata only (data is zeroed),
>> so I'd guess one could expect it to be about metadata size as reported
>> by btrfs fi df?
> 
> # btrfs fi df /mnt/isos
> Metadata, DUP: total=1.00GiB, used=289.53MiB
> 
> 193M -rw-r--r--. 1 root root 193M Jan 14 00:44 sdb.img
> 
> Guess that's too big to attach to a bugzilla bug.

~ 290 MiB metadata, ~ 190 MiB btrfs-image.  So roughly 2/3 the size of 
metadata.  Metadata minus file tails (which are actually data, thus 
should be zeroed)?
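As a quick sanity check on that ratio (using the figures quoted above, treating the 193M file size as MiB):

```shell
# Ratio of btrfs-image size to metadata used:
# 193 MiB image vs. 289.53 MiB of used metadata.
ratio=$(awk 'BEGIN { printf "%.2f", 193 / 289.53 }')
echo "$ratio"   # prints 0.67, i.e. roughly 2/3
```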

Thanks.

I wonder if it compresses?

But yeah, without checking bugzie filesize caps, that eyeballs as a bit 
big.  It could be kept to be private-mailed on request, and/or stuck on a 
pastebin somewhere, however.

As long as the reporter isn't stuck on mobile with a gig-a-month data cap 
or dialup or some such...

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-14  9:30                     ` Duncan
@ 2014-01-14  9:38                       ` Hugo Mills
  2014-01-14 17:17                       ` Chris Murphy
  1 sibling, 0 replies; 36+ messages in thread
From: Hugo Mills @ 2014-01-14  9:38 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 1368 bytes --]

On Tue, Jan 14, 2014 at 09:30:14AM +0000, Duncan wrote:
> Chris Murphy posted on Tue, 14 Jan 2014 00:49:13 -0700 as excerpted:
> 
> > On Jan 13, 2014, at 11:03 PM, Duncan <1i5t5.duncan@cox.net> wrote:
> > 
> >> Chris Murphy posted on Mon, 13 Jan 2014 17:40:33 -0700 as excerpted:
> >> 
> >>> btrfs-image -c 9-t 4 /dev/sdX /mnt/<pathtoanothervolume+filename>
> >>> 
> >>> You can keep it handy in case a dev asks for it or you can attach it
> >>> to a kernel.org bug report.
> >> 
> >> How big are they?  manpage says it's metadata only (data is zeroed),
> >> so I'd guess one could expect it to be about metadata size as reported
> >> by btrfs fi df?
> > 
> > # btrfs fi df /mnt/isos
> > Metadata, DUP: total=1.00GiB, used=289.53MiB
> > 
> > 193M -rw-r--r--. 1 root root 193M Jan 14 00:44 sdb.img
> > 
> > Guess that's too big to attach to a bugzilla bug.
> 
> ~ 290 MiB metadata, ~ 190 MiB btrfs-image.  So roughly 2/3 the size of 
> metadata.  Metadata minus file tails (which are actually data, thus 
> should be zeroed)?
> 
> Thanks.
> 
> I wonder if it compresses?

   Yes, that's the -c9 option, and it compresses fairly well.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
    --- Questions are a burden, and answers a prison for oneself. ---    

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 828 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-14  9:30                     ` Duncan
  2014-01-14  9:38                       ` Hugo Mills
@ 2014-01-14 17:17                       ` Chris Murphy
  2014-01-18  7:20                         ` Chris Samuel
  1 sibling, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-01-14 17:17 UTC (permalink / raw)
  To: Btrfs BTRFS


On Jan 14, 2014, at 2:30 AM, Duncan <1i5t5.duncan@cox.net> wrote:
> 
> I wonder if it compresses?

-c 9 is max compression, although I don't know offhand what algorithm btrfs-image uses. If I use xz on it, 193MB becomes 192MB.


Chris Murphy


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-14 17:17                       ` Chris Murphy
@ 2014-01-18  7:20                         ` Chris Samuel
  0 siblings, 0 replies; 36+ messages in thread
From: Chris Samuel @ 2014-01-18  7:20 UTC (permalink / raw)
  To: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 542 bytes --]

On Tue, 14 Jan 2014 10:17:19 AM Chris Murphy wrote:

> -c 9 is max compression although I don't know what algorithm btrfs-image
> uses off hand. If I use xz on it, 193MB becomes 192MB.

It'd be interesting to generate the same image without compression in btrfs-image 
and then xz it, in case xz can do better with the raw data.

cheers!
Chris
-- 
 Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC

This email may come with a PGP signature as a file. Do not panic.
For more info see: http://en.wikipedia.org/wiki/OpenPGP

[-- Attachment #2: This is a digitally signed message part. --]
[-- Type: application/pgp-signature, Size: 482 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-12 22:31             ` Hendrik Friedel
  2014-01-14  0:40               ` Chris Murphy
@ 2014-01-19 19:37               ` Martin Steigerwald
  2014-01-21 20:00                 ` Hendrik Friedel
  1 sibling, 1 reply; 36+ messages in thread
From: Martin Steigerwald @ 2014-01-19 19:37 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: Chris Murphy, linux-btrfs

On Sunday, 12 January 2014 at 23:31:43, Hendrik Friedel wrote:
> > It mounts OK with no kernel messages?
> 
> Yes. Here I mount the three subvolumes:

Does scrubbing the volume give any errors?

I'd test this. If scrubbing runs through without errors, at least your data is 
currently safe.

As to the error messages: I do not know how critical those are.

I usually just scrub my filesystems once in a while and would only try btrfs 
check on one that fails the scrubbing or has problems mounting or (in some 
cases) yields strange messages in dmesg.

-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-19 19:37               ` Martin Steigerwald
@ 2014-01-21 20:00                 ` Hendrik Friedel
  2014-01-21 20:01                   ` Hendrik Friedel
  2014-02-08 22:01                   ` Hendrik Friedel
  0 siblings, 2 replies; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-21 20:00 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Chris Murphy, linux-btrfs

Hello,

>> Yes. Here I mount the three subvolumes:
>
> Does scrubbing the volume give any errors?

The last time I ran a scrub (after I discovered the first errors in 
btrfsck), it found no errors. But I will re-check asap.


> As to the error messages: I do not know how critical those are.
>
> I usually just scrub my filesystems once in a while and would only try btrfs
> check on one that fails the scrubbing or has problems mounting or (in some
> cases) yields strange messages in dmesg.

Ok.
I think I do/did have some symptoms, but I cannot exclude other reasons:
-High load without high CPU usage (I/O was the bottleneck)
-Just now: transferring from one directory to another on the same 
subvolume (from /mnt/subvol/A/B to /mnt/subvol/A) I get 1.2 MB/s instead 
of > 60.
-For some of the files I even got a "no space left on device" error.

This is without any messages in dmesg or syslog related to btrfs.

Greetings,
Hendrik

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-21 20:00                 ` Hendrik Friedel
@ 2014-01-21 20:01                   ` Hendrik Friedel
  2014-02-08 22:01                   ` Hendrik Friedel
  1 sibling, 0 replies; 36+ messages in thread
From: Hendrik Friedel @ 2014-01-21 20:01 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Chris Murphy, linux-btrfs

Hello again:

> I think, I do/did have some symptoms, but I cannot exclude other reasons..
> -High Load without high cpu-usage (io was the bottleneck)
> -Just now: transfer from one directory to the other on the same
> subvolume (from /mnt/subvol/A/B to /mnt/subvol/A) I get 1.2MB/s instead
> of > 60.
> -For some of the files I even got a "no space left on device" error.

And the first symptom is also there:
top - 21:00:58 up 22:19,  5 users,  load average: 1.08, 1.15, 1.09
Tasks: 204 total,   1 running, 203 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.5%us,  2.7%sy,  0.3%ni, 66.6%id, 28.6%wa,  0.3%hi,  0.0%si,  0.0%st
Mem:   3795584k total,  3614088k used,   181496k free,   367820k buffers
Swap:  8293372k total,    45464k used,  8247908k free,  2337704k cached

Greetings,
Hendrik



-- 
Hendrik Friedel
Auf dem Brink 12
28844 Weyhe
Mobil 0178 1874363

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-01-21 20:00                 ` Hendrik Friedel
  2014-01-21 20:01                   ` Hendrik Friedel
@ 2014-02-08 22:01                   ` Hendrik Friedel
  2014-02-09  0:45                     ` Chris Murphy
  1 sibling, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-02-08 22:01 UTC (permalink / raw)
  To: Martin Steigerwald; +Cc: Chris Murphy, linux-btrfs

Hello,

> Ok.
> I think, I do/did have some symptoms, but I cannot exclude other reasons..
> -High Load without high cpu-usage (io was the bottleneck)
> -Just now: transfer from one directory to the other on the same
> subvolume (from /mnt/subvol/A/B to /mnt/subvol/A) I get 1.2MB/s instead
> of > 60.
> -For some of the files I even got a "no space left on device" error.
>
> This is without any messages in dmesg or syslog related to btrfs.

As I don't see that I can fix this, I intend to re-create the 
file system. For that, I need to remove one of the two discs from the 
raid/filesystem, then create a new fs on it and move the data over (I 
have no spare).
Could you please advise me whether this will be successful?


first some Information on the filesystem:
----------------
./btrfs filesystem show /dev/sdb1
Label: none  uuid: 989306aa-d291-4752-8477-0baf94f8c42f
         Total devices 2 FS bytes used 3.47TiB
         devid    1 size 2.73TiB used 1.74TiB path /dev/sdb1
         devid    2 size 2.73TiB used 1.74TiB path /dev/sdc1

/btrfs subvolume list /mnt/BTRFS/Video
ID 256 gen 226429 top level 5 path Video
ID 1495 gen 226141 top level 5 path rsnapshot
ID 4444 gen 226429 top level 256 path Snapshot
ID 5845 gen 226375 top level 5 path backups

btrfs fi df /mnt/BTRFS/Video/
Data, RAID0: total=3.48TB, used=3.47TB
System, RAID1: total=32.00MB, used=260.00KB
Metadata, RAID1: total=4.49GB, used=3.85GB
--------------------

What I did already yesterday was:

  btrfs device delete /dev/sdc1 /mnt/BTRFS/rsnapshot/
  btrfs device delete /dev/sdc1 /mnt/BTRFS/backups/
  btrfs device delete /dev/sdc1 /mnt/BTRFS/Video/
  btrfs filesystem balance start /mnt/BTRFS/Video/

next, I'm doing the balance for the subvolume /mnt/BTRFS/backups

In parallel, I try to delete /mnt/BTRFS/rsnapshot, but it fails:
   btrfs subvolume delete  /mnt/BTRFS/rsnapshot/
   Delete subvolume '/mnt/BTRFS/rsnapshot'
   ERROR: cannot delete '/mnt/BTRFS/rsnapshot' - Inappropriate ioctl
   for  device

Why's that?
But even more: How do I free sdc1 now?!

Greetings,
Hendrik

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-08 22:01                   ` Hendrik Friedel
@ 2014-02-09  0:45                     ` Chris Murphy
  2014-02-09  8:36                       ` Hendrik Friedel
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-02-09  0:45 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: Martin Steigerwald, linux-btrfs


On Feb 8, 2014, at 3:01 PM, Hendrik Friedel <hendrik@friedels.name> wrote:

> Hello,
> 
>> Ok.
>> I think, I do/did have some symptoms, but I cannot exclude other reasons..
>> -High Load without high cpu-usage (io was the bottleneck)
>> -Just now: transfer from one directory to the other on the same
>> subvolume (from /mnt/subvol/A/B to /mnt/subvol/A) I get 1.2MB/s instead
>> of > 60.
>> -For some of the files I even got a "no space left on device" error.
>> 
>> This is without any messages in dmesg or syslog related to btrfs.
> 
> as I don't see that I can fix this, I intend to re-create the file-system. For that, I need to remove one of the two discs from the raid/filesystem, then create a new fs on this and move the data to it (I have no spare)
> Could you please advise me, wheather this will be successful?
> 
> 
> first some Information on the filesystem:
> ----------------
> ./btrfs filesystem show /dev/sdb1
> Label: none  uuid: 989306aa-d291-4752-8477-0baf94f8c42f
>        Total devices 2 FS bytes used 3.47TiB
>        devid    1 size 2.73TiB used 1.74TiB path /dev/sdb1
>        devid    2 size 2.73TiB used 1.74TiB path /dev/sdc1

I don't understand the no spare part. You have 3.47T of data, and yet the single device size is 2.73T. There is no way to migrate 1.74T from sdc1 to sdb1 because there isn't enough space.




> 
> /btrfs subvolume list /mnt/BTRFS/Video
> ID 256 gen 226429 top level 5 path Video
> ID 1495 gen 226141 top level 5 path rsnapshot
> ID 4444 gen 226429 top level 256 path Snapshot
> ID 5845 gen 226375 top level 5 path backups
> 
> btrfs fi df /mnt/BTRFS/Video/
> Data, RAID0: total=3.48TB, used=3.47TB
> System, RAID1: total=32.00MB, used=260.00KB
> Metadata, RAID1: total=4.49GB, used=3.85GB
> --------------------
> 
> What I did already yesterday was:
> 
> btrfs device delete /dev/sdc1 /mnt/BTRFS/rsnapshot/
> btrfs device delete /dev/sdc1 /mnt/BTRFS/backups/
> btrfs device delete /dev/sdc1 /mnt/BTRFS/Video/
> btrfs filesystem balance start /mnt/BTRFS/Video/

I don't understand this sequence because I don't know what you've mounted where, but in any case maybe it's a bug that you're not getting errors for each of these commands because you can't delete sdc1 from a raid0 volume. You'd first have to convert the data, metadata, and system profiles to single (metadata can be set to dup). And then you'd be able to delete a device so long as there's room on remaining devices, which you don't have.


> next, I'm doing the balance for the subvolume /mnt/BTRFS/backups

You told us above you deleted that subvolume. So how are you balancing it? And also, balance applies to a mountpoint, and even if you mount a subvolume to that mountpoint, the whole file system is balanced. Not just the mounted subvolume.


> In parallel, I try to delete /mnt/BTRFS/rsnapshot, but it fails:
>  btrfs subvolume delete  /mnt/BTRFS/rsnapshot/
>  Delete subvolume '/mnt/BTRFS/rsnapshot'
>  ERROR: cannot delete '/mnt/BTRFS/rsnapshot' - Inappropriate ioctl
>  for  device
> 
> Why's that?
> But even more: How do I free sdc1 now?!


Well, I'm pretty confused because, again, I can't tell if your paths refer to subvolumes or if they refer to mount points. The balance and device delete commands all refer to a mount point, which is the path returned by the df command. The subvolume delete command needs a path to the subvolume that starts with the mount point.


Chris Murphy

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-09  0:45                     ` Chris Murphy
@ 2014-02-09  8:36                       ` Hendrik Friedel
  2014-02-11  1:45                         ` Chris Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-02-09  8:36 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Martin Steigerwald, linux-btrfs

Hi Chris,

thanks for your reply.

>> ./btrfs filesystem show /dev/sdb1
>> Label: none  uuid: 989306aa-d291-4752-8477-0baf94f8c42f
>>         Total devices 2 FS bytes used 3.47TiB
>>         devid    1 size 2.73TiB used 1.74TiB path /dev/sdb1
>>         devid    2 size 2.73TiB used 1.74TiB path /dev/sdc1
>
> I don't understand the no spare part. You have 3.47T of data, and yet the single device size is 2.73T.
> There is no way to migrate 1.74T from sdc1 to sdb1 because there isn't enough space.

Fair point. I summed up manually (with du) and apparently missed some 
data. I can move the 0.8TiB out of the way. I just don't have 3.5TiB 
'spare'.



>> btrfs device delete /dev/sdc1 /mnt/BTRFS/rsnapshot/
>> btrfs device delete /dev/sdc1 /mnt/BTRFS/backups/
>> btrfs device delete /dev/sdc1 /mnt/BTRFS/Video/
>> btrfs filesystem balance start /mnt/BTRFS/Video/
>
> I don't understand this sequence because I don't know what you've mounted where,

I'm sorry. here you go:
/btrfs subvolume list /mnt/BTRFS/Video
ID 256 gen 226429 top level 5 path Video --> /mnt/BTRFS/Video/
ID 1495 gen 226141 top level 5 path rsnapshot  --> /mnt/BTRFS/rsnapshot
ID 4444 gen 226429 top level 256 path Snapshot --> not mounted
ID 5845 gen 226375 top level 5 path backups --> /mnt/BTRFS/backups


> but in any case maybe it's a bug that you're not getting errors for each
> of these commands because you can't delete sdc1 from a raid0 volume.
That makes sense. I read that procedure somewhere on the (totally 
unvalidated) Internet.
In case the missing error message is a bug: is reporting it here 
sufficient, or is there a bug tracker?


> You'd first have to convert the data, metadata, and system profiles to
> single (metadata can be set to dup). And then you'd be able to delete
> a device so long as there's room on remaining devices, which you
> don't have.

Yes, but I can create that space.
So, for me the next steps would be to:
-generate enough room on the filesystem
-btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/BTRFS/Video
-btrfs device delete /dev/sdc1 /mnt/BTRFS/Video

Right?

>> next, I'm doing the balance for the subvolume /mnt/BTRFS/backups
>
> You told us above  you deleted that subvolume. So how are you balancing it?

Yes, that was my understanding from my research:
You tell btrfs that you want to remove one disc from the filesystem and 
then balance it to move the data onto the remaining disc. I found this 
logical. I was expecting that I might need a further command to tell 
btrfs that it's not a raid anymore, but I thought this could also be 
automagical.
I understand that's not the way it is implemented, but it's not a crazy 
idea, is it?

> And also, balance applies to a mountpoint, and even if you mount a
> subvolume to that mountpoint, the whole file system is balanced.
> Not just the mounted subvolume.

That is confusing. (I mean: I understand what you are saying, but it's 
counterintuitive). Why is this the case?

>> In parallel, I try to delete /mnt/BTRFS/rsnapshot, but it fails:
>>   btrfs subvolume delete  /mnt/BTRFS/rsnapshot/
>>   Delete subvolume '/mnt/BTRFS/rsnapshot'
>>   ERROR: cannot delete '/mnt/BTRFS/rsnapshot' - Inappropriate ioctl
>>   for  device
>>
>> Why's that?
>> But even more: How do I free sdc1 now?!
>
>
> Well I'm pretty confused because again, I can't tell if your paths refer to
> subvolumes or if they refer to mount points.

Now I am confused. These paths are the paths to which I mounted the 
subvolumes:
my (abbreviated) fstab:
UUID=xy      /mnt/BTRFS/Video btrfs subvol=Video
UUID=xy     /mnt/BTRFS/rsnapshot btrfs subvol=rsnapshot
UUID=xy     /mnt/BTRFS/backups btrfs subvol=backups


> The balance and device delete commands all refer to a mount point,
> which is the path returned by the df command.
So this:
/dev/sdb1       5,5T    3,5T  2,0T   64% /mnt/BTRFS/Video
/dev/sdb1       5,5T    3,5T  2,0T   64% /mnt/BTRFS/backups
/dev/sdc1       5,5T    3,5T  2,0T   64% /mnt/BTRFS/rsnapshot

> The subvolume delete command needs a path to subvolume that starts with the mount point.
Sorry, I do not understand this, no matter how hard I think about it.
What would it be in my case?

Thanks for your help! I appreciate it.


Greetings,
Hendrik

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-09  8:36                       ` Hendrik Friedel
@ 2014-02-11  1:45                         ` Chris Murphy
  2014-02-11  2:23                           ` Chris Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-02-11  1:45 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: linux-btrfs


On Feb 9, 2014, at 1:36 AM, Hendrik Friedel <hendrik@friedels.name> wrote:
> 
> Yes, but I can create that space.
> So, for me the next steps would be to:
> -generate enough room on the filesystem
> -btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/BTRFS/Video
> -btrfs device delete /dev/sdc1 /mnt/BTRFS/Video
> 
> Right?

No. You said you need to recreate the file system and only have these two devices, so you must remove one device. You can't achieve that with raid1, which requires a minimum of two devices.

-dconvert=single -mconvert=dup -sconvert=dup

> 
>>> next, I'm doing the balance for the subvolume /mnt/BTRFS/backups
>> 
>> You told us above  you deleted that subvolume. So how are you balancing it?
> 
> Yes, that was my understanding from my research:
> You tell btrfs, that you want to remove one disc from the filesystem and then balance it to move the data on the remaining disc. I did find this logical. I was expecting that I possibly need a further command to tell btrfs that it's not a raid anymore, but I thought this could also be automagical.
> I understand, that's not the way it is implemented, but it's not a crazy idea, is it?

Well, it's not quite right to think of the devices themselves as raid1 or raid0. It's the data or metadata that has that attribute. By removing a device you are managing devices, not the attribute of the data or metadata chunks. And since you're already at the minimum number of disks for raid0, conversion is needed first.


> 
>> And also, balance applies to a mountpoint, and even if you mount a
>> subvolume to that mountpoint, the whole file system is balanced.
>> Not just the mounted subvolume.
> 
> That is confusing. (I mean: I understand what you are saying, but it's counterintuitive). Why is this the case?

A subvolume is a file system tree. The data created in that tree is allocated to chunks which can contain data from other trees. And balance reads/writes chunks. It's not a subvolume aware command.


> 
>>> In parallel, I try to delete /mnt/BTRFS/rsnapshot, but it fails:
>>>  btrfs subvolume delete  /mnt/BTRFS/rsnapshot/
>>>  Delete subvolume '/mnt/BTRFS/rsnapshot'
>>>  ERROR: cannot delete '/mnt/BTRFS/rsnapshot' - Inappropriate ioctl
>>>  for  device
>>> 
>>> Why's that?
>>> But even more: How do I free sdc1 now?!
>> 
>> 
>> Well I'm pretty confused because again, I can't tell if your paths refer to
>> subvolumes or if they refer to mount points.
> 
> Now I am confused. These paths are the paths to which I mounted the subvolumes:
> my (abbreviated) fstab:
> UUID=xy      /mnt/BTRFS/Video btrfs subvol=Video
> UUID=xy     /mnt/BTRFS/rsnapshot btrfs subvol=rsnapshot
> UUID=xy     /mnt/BTRFS/backups btrfs subvol=backups
> 
> 
>> The balance and device delete commands all refer to a mount point, which is the path returned by the df command.
> So this:
> /dev/sdb1       5,5T    3,5T  2,0T   64% /mnt/BTRFS/Video
> /dev/sdb1       5,5T    3,5T  2,0T   64% /mnt/BTRFS/backups
> /dev/sdc1       5,5T    3,5T  2,0T   64% /mnt/BTRFS/rsnapshot

You can't delete a mounted subvolume. You'd have to unmount it first. And then you'd have to mount a parent subvolume. So if the subvolume you want to delete is in the ID 5 subvolume, you must mount that subvolume, for example:
 
mount /dev/sdb1 /mnt/btrfs
btrfs subvolume delete /mnt/btrfs/<subvolumetodelete>



Chris Murphy

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-11  1:45                         ` Chris Murphy
@ 2014-02-11  2:23                           ` Chris Murphy
  2014-02-16 19:18                             ` Hendrik Friedel
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-02-11  2:23 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: linux-btrfs


On Feb 10, 2014, at 6:45 PM, Chris Murphy <lists@colorremedies.com> wrote:

> 
> On Feb 9, 2014, at 1:36 AM, Hendrik Friedel <hendrik@friedels.name> wrote:
>> 
>> Yes, but I can create that space.
>> So, for me the next steps would be to:
>> -generate enough room on the filesystem
>> -btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/BTRFS/Video
>> -btrfs device delete /dev/sdc1 /mnt/BTRFS/Video
>> 
>> Right?
> 
> No. You said you need to recreate the file system, and only have these two devices and therefore must remove one device. You can't achieve that with raid1 which requires minimum two devices.
> 
> -dconvert=single -mconvert=dup -sconvert=dup

Actually, I'm reminded that with multiple devices dup might not be possible. Instead you might have to use single for all of them. Then remove the device you want removed, and then do another conversion with just -mconvert=dup -sconvert=dup, and do not specify -dconvert. That way the single metadata profile is converted to dup.
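Put together, the sequence being described looks roughly like the following (device names and mount point taken from this thread; whether step 3 also needs --force for the system chunks may depend on the btrfs-progs version, so this is a sketch, not a tested procedure):

```shell
# 1. Convert everything to single so one device can hold it all:
btrfs balance start -dconvert=single -mconvert=single -sconvert=single \
    --force /mnt/BTRFS/Video

# 2. Remove the device that is to be freed:
btrfs device delete /dev/sdc1 /mnt/BTRFS/Video

# 3. Re-duplicate metadata and system chunks on the remaining device.
#    Note: no -dconvert here, so data stays single:
btrfs balance start -mconvert=dup -sconvert=dup /mnt/BTRFS/Video
```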


Chris

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-11  2:23                           ` Chris Murphy
@ 2014-02-16 19:18                             ` Hendrik Friedel
  2014-02-16 19:54                               ` Chris Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-02-16 19:18 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

Hi Chris,

thanks for your hint.
>> No. You said you need to recreate the file system, and only have
>> these two devices and therefore must remove one device. You can't
>> achieve that with raid1 which requires minimum two devices.
>>
>> -dconvert=single -mconvert=dup -sconvert=dup
>
> Actually, I'm reminded that with multiple devices dup might not be possible. Instead you might have to use single for all of them. Then remove the device you want removed, and then do another conversion with just -mconvert=dup -sconvert=dup, and do not specify -dconvert. That way the single metadata profile is converted to dup.

I think it didn't work.

btrfs balance start  -dconvert=single -mconvert=single -sconvert=single 
--force /mnt/BTRFS/Video/
After >10h:
  btrfs balance status /mnt/BTRFS/Video/
No balance found on '/mnt/BTRFS/Video/'
root@homeserver:~# btrfs fi df /mnt/BTRFS/Video/
Data, RAID0: total=4.00GB, used=4.00GB
Data: total=2.29TB, used=2.29TB
System: total=32.00MB, used=256.00KB
Metadata: total=4.00GB, used=2.57GB
root@homeserver:~# btrfs fi show
Label: none  uuid: 989306aa-d291-4752-8477-0baf94f8c42f
         Total devices 2 FS bytes used 2.29TB
         devid    2 size 2.73TB used 1.15TB path /dev/sdc1
         devid    1 size 2.73TB used 1.15TB path /dev/sdb1


(You can see that I cleaned up beforehand, so that enough space is 
generally available.)

Do you have an idea what could be wrong?


Thanks and Regards,
Hendrik


-- 
Hendrik Friedel
Auf dem Brink 12
28844 Weyhe
Mobil 0178 1874363

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-16 19:18                             ` Hendrik Friedel
@ 2014-02-16 19:54                               ` Chris Murphy
  2014-02-17  3:20                                 ` Duncan
  2014-03-03  1:09                                 ` Russell Coker
  0 siblings, 2 replies; 36+ messages in thread
From: Chris Murphy @ 2014-02-16 19:54 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: linux-btrfs


On Feb 16, 2014, at 12:18 PM, Hendrik Friedel <hendrik@friedels.name> wrote:

> Hi Chris,
> 
> thanks for your hint.
>>> No. You said you need to recreate the file system, and only have these two devices and therefore must remove one device. You can't achieve that with raid1 which requires minimum two devices.
>>> 
>>> -dconvert=single -mconvert=dup -sconvert=dup
>> 
>> Actually, I'm reminded that with multiple devices dup might not be possible. Instead you might have to use single for all of them. Then remove the device you want removed, and then do another conversion with just -mconvert=dup -sconvert=dup, and do not specify -dconvert. That way the single metadata profile is converted to dup.
> 
> I think it didn't work.
> 
> btrfs balance start  -dconvert=single -mconvert=single -sconvert=single --force /mnt/BTRFS/Video/
> After >10h:
> btrfs balance status /mnt/BTRFS/Video/
> No balance found on '/mnt/BTRFS/Video/'
> root@homeserver:~# btrfs fi df /mnt/BTRFS/Video/
> Data, RAID0: total=4.00GB, used=4.00GB
> Data: total=2.29TB, used=2.29TB
> System: total=32.00MB, used=256.00KB
> Metadata: total=4.00GB, used=2.57GB

It looks like everything is single except for 4GB of data which is still raid0. Weird. There should be a bunch of messages in dmesg during a normal/successful balance, and either something mentioned or missing might provide a clue why some chunks weren't converted. Unmounted, what do you get for btrfs check?

> 
> Do you have an idea what could be wrong?

No. I'd say it's a bug. 3.14rc3 should be out today, and might be worth a shot. Or btrfs-next. If you try again, you only need to convert the data profile.

Also, 10 hours to balance two disks at 2.3TB seems like a long time. I'm not sure if that's expected.

Chris Murphy


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-16 19:54                               ` Chris Murphy
@ 2014-02-17  3:20                                 ` Duncan
  2014-02-17  9:41                                   ` Goswin von Brederlow
  2014-02-18 21:55                                   ` Hendrik Friedel
  2014-03-03  1:09                                 ` Russell Coker
  1 sibling, 2 replies; 36+ messages in thread
From: Duncan @ 2014-02-17  3:20 UTC (permalink / raw)
  To: linux-btrfs

Chris Murphy posted on Sun, 16 Feb 2014 12:54:44 -0700 as excerpted:

> On Feb 16, 2014, at 12:18 PM, Hendrik Friedel <hendrik@friedels.name>
> wrote:

[On balance converting to single from raidN:]

>> I think it didn't work.
>> 
>> btrfs balance start  -dconvert=single -mconvert=single -sconvert=single
>> --force /mnt/BTRFS/Video/
>> After >10h:
>> btrfs balance status /mnt/BTRFS/Video/
>> No balance found on '/mnt/BTRFS/Video/'
>> root@homeserver:~# btrfs fi df /mnt/BTRFS/Video/
>> Data, RAID0: total=4.00GB, used=4.00GB
>> Data: total=2.29TB, used=2.29TB
>> System: total=32.00MB, used=256.00KB
>> Metadata: total=4.00GB, used=2.57GB
> 
> It looks like everything is single except for 4GB of data which is still
> raid0. Weird. There should be a bunch of messages in dmesg during a
> normal/successful balance, and either something mentioned or missing
> might provide a clue why some chunks weren't converted.

Agreed.

> Unmounted, what do you get for btrfs check?

Agreed, but it's worth an explanation and explicit warning just in case...

btrfs check is read-only by default -- it'll tell you what it thinks is 
wrong, but won't attempt to correct anything.  Adding --repair tells it 
to try to correct the errors it found, but the recommendation is do NOT 
use --repair unless it's a last-ditch effort after other things failed, 
and preferably only after a btrfs dev says to, because sometimes it can 
make things worse instead of better.

So running the (read-only) /check/ to see what it says is a good idea, 
but do NOT try to run it with --repair just yet, no matter what errors it 
thinks it sees.
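In command form (device name taken from later in this thread), the distinction is just the presence of the flag:

```shell
# Read-only inspection -- the default mode makes no changes on disk:
btrfs check /dev/sdb1

# Destructive repair -- per the warning above, a last resort only,
# and preferably only after a btrfs dev says to:
# btrfs check --repair /dev/sdb1
```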

>> Do you have an idea what could be wrong?
> 
> No. I'd say it's a bug. 3.14rc3 should be out today, and might be worth
> a shot. Or btrfs-next. If you try again, you only need to convert the
> data profile.

https://btrfs.wiki.kernel.org/index.php/Balance_Filters

Based on that, I'd suggest

btrfs balance start -dconvert=single,soft /mnt/BTRFS/Video/

Given that there's only 4 GiB left to convert, it should go MUCH faster 
than the 10 hours the multiple TiB took.

> Also, 10 hours to balance two disks at 2.3TB seems like a long time. I'm
> not sure if that's expected.

FWIW, I think you may not realize how big 2.3 TiB is, and/or how slow 
spinning rust can be when dealing with TiBs of potentially fragmented 
data...

2.3TiB * 1024GiB/TiB * 1024 MiB/GiB / 10 hours / 60 min/hr / 60 sec/min =

66.99... real close to 67 MiB/sec
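(That figure is plain arithmetic and easy to reproduce, e.g.:

```shell
# 2.3 TiB moved in 10 hours, expressed in MiB/sec:
awk 'BEGIN { printf "%.2f MiB/sec\n", 2.3 * 1024 * 1024 / (10 * 3600) }'
# prints 66.99 MiB/sec
```
)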

Since it's multiple TiB we're talking and only two devices, that's almost 
certainly spinning rust, not SSD, and on spinning rust, 67 MiB/sec really 
isn't /that/ bad, especially if the filesystem wasn't new and had been 
reasonably used, thus likely had some fragmentation to deal with.

But the good news is that the 4 GiB remaining should be much faster; @ 
the 67 MiB/sec average of the above, we're talking about a minute.  
Throwing in that "soft" should tell it to ignore the previously converted 
data, and only balance data chunks that aren't yet in the target single 
profile, so it should only do the 4 GiB that's still raid0, not redo the 
multiple TiB.

Tho it will probably have to check the profile on each chunk still, and 
if that remaining data is hugely fragmented or something that could take 
a bit longer, so it could be two minutes or ten minutes instead of one, 
but if it's more than an hour, I'd definitely be wondering what's up!

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-17  3:20                                 ` Duncan
@ 2014-02-17  9:41                                   ` Goswin von Brederlow
  2014-02-18 21:55                                   ` Hendrik Friedel
  1 sibling, 0 replies; 36+ messages in thread
From: Goswin von Brederlow @ 2014-02-17  9:41 UTC (permalink / raw)
  To: Duncan; +Cc: linux-btrfs

On Mon, Feb 17, 2014 at 03:20:58AM +0000, Duncan wrote:
> Chris Murphy posted on Sun, 16 Feb 2014 12:54:44 -0700 as excerpted:
> > Also, 10 hours to balance two disks at 2.3TB seems like a long time. I'm
> > not sure if that's expected.
> 
> FWIW, I think you may not realize how big 2.3 TiB is, and/or how slow 
> spinning rust can be when dealing with TiBs of potentially fragmented 
> data...
> 
> 2.3TiB * 1024GiB/TiB * 1024 MiB/GiB / 10 hours / 60 min/hr / 60 sec/min =
> 
> 66.99... real close to 67 MiB/sec
> 
> Since it's multiple TiB we're talking and only two devices, that's almost 
> certainly spinning rust, not SSD, and on spinning rust, 67 MiB/sec really 
> isn't /that/ bad, especially if the filesystem wasn't new and had been 
> reasonably used, thus likely had some fragmentation to deal with.

Don't forget that that is 67MiB/s reading data and 67MiB/s writing
data giving a total of 134MiB/s. 

Still, on a good system each disk should have about that speed so it's
about 50% of theoretical maximum. Which is quite good given that the
disks will need to seek between every read and write. In comparison
moving data with LVM gets only about half that speed and that doesn't
even have the overhead of a filesystem to deal with.

MfG
	Goswin

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-17  3:20                                 ` Duncan
  2014-02-17  9:41                                   ` Goswin von Brederlow
@ 2014-02-18 21:55                                   ` Hendrik Friedel
  2014-02-18 22:12                                     ` Chris Murphy
  1 sibling, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-02-18 21:55 UTC (permalink / raw)
  To: Duncan, linux-btrfs, Chris Murphy

Hello,

>> It looks like everything is single except for 4GB of data which is
>> still raid0. Weird. There should be a bunch of messages in dmesg during a
>> normal/successful balance, and either something mentioned or missing
>> might provide a clue why some chunks weren't converted.
>
> Agreed.

time ./btrfs balance start  -dconvert=single,soft /mnt/BTRFS/Video/
ERROR: error during balancing '/mnt/BTRFS/Video/' - No space left on device
There may be more info in syslog - try dmesg | tail

real    0m23.803s
user    0m0.000s
sys     0m1.070s

dmesg:
[697498.761318] btrfs: relocating block group 19874593112064 flags 9
[697507.614140] btrfs: relocating block group 19715679322112 flags 9
[697516.218690] btrfs: 2 enospc errors during balance


>> Unmounted, what do you get for btrfs check?
>
> Agreed, but it's worth an explanation and explicit warning just in case...

Many errors. For better readability, the complete output is at the 
end of this mail.

...
Extent back ref already exists for 20463459852288 parent 22989777764352 
root 0
Extent back ref already exists for 22727791968256 parent 22989777764352 
root 0
...
Extent back ref already exists for 20463415173120 parent 22989923053568 
root 0
ref mismatch on [20463062638592 4096] extent item 1, found 2
Incorrect global backref count on 20463062638592 found 1 wanted 2
backpointer mismatch on [20463062638592 4096]
...
root 17050 inode 14259 errors 100, file extent discount
found 990180465844 bytes used err is 1


> btrfs check is read-only by default -- it'll tell you what it thinks is
> wrong, but won't attempt to correct anything.  Adding --repair tells it
> to try to correct the errors it found, but the recommendation is do NOT
> use --repair unless it's a last-ditch effort after other things failed,
> and preferably only after a btrfs dev says to, because sometimes it can
> make things worse instead of better.

Ok, I didn't do that now.
But I had done that before...

As a reminder:
This all started with the problem that my system became 
unresponsive/slow sometimes. I thus ran a btrfsck.

Below some samples from about one month ago:
----------
Extent back ref already exists for 2994950590464 parent 863072366592 root 0

ref mismatch on [32935936 4096] extent item 1, found 2
repair deleting extent record: key 32935936 168 4096
adding new tree backref on start 32935936 len 4096 parent 2994784206848 
root 2994784206848
Incorrect global backref count on 32935936 found 1 wanted 2
backpointer mismatch on [32935936 4096]
ref mismatch on [32997376 4096] extent item 1, found 2
repair deleting extent record: key 32997376 168 4096
adding new tree backref on start 32997376 len 4096 parent 2994824708096 
root 2994824708096
Incorrect global backref count on 32997376 found 1 wanted 2
backpointer mismatch on [32997376 4096]

Incorrect global backref count on 8988365651968 found 1 wanted 0
backpointer mismatch on [8988365651968 4096]
repaired damaged extent references
checking free space cache
checking fs roots
root 256 inode 9579 errors 100, file extent discount
root 256 inode 9580 errors 100, file extent discount
root 256 inode 14258 errors 100, file extent discount
root 256 inode 14259 errors 100, file extent discount
root 4444 inode 9579 errors 100, file extent discount
root 4444 inode 9580 errors 100, file extent discount
root 4444 inode 14258 errors 100, file extent discount
root 4444 inode 14259 errors 100, file extent discount
enabling repair mode
Checking filesystem on /dev/sdc1
UUID: 989306aa-d291-4752-8477-0baf94f8c42f
cache and super generation don't match, space cache will be invalidated
found 827360733827 bytes used err is 1
total csum bytes: 4446455380
total tree bytes: 5506977792
total fs tree bytes: 137293824
total extent tree bytes: 258691072
btree space waste bytes: 496921489
file data blocks allocated: 6440132583424
   referenced 6424163344384
Btrfs v3.12
-----the full output is attached------


Chris Murphy did suggest some things like
-mount with repair option (didn't help)
-check smartctl (ok)
-scrub (no change)

For that reason I decided to get rid of this filesystem. But I need to 
free one of the discs for this.


> Based on that, I'd suggest
>
> btrfs balance start -dconvert=single,soft /mnt/BTRFS/Video/
>
> Given that there's only 4 GiB left to convert, it should go MUCH faster
> than the 10 hours the multiple TiB took.

Ok, that's valuable knowledge, as this saves lots of time.



Greetings,
Hendrik

Below the full output of todays btrfsck:

./btrfsck /dev/sdb1
Checking filesystem on /dev/sdb1
UUID: 989306aa-d291-4752-8477-0baf94f8c42f
checking extents
Extent back ref already exists for 20463459852288 parent 22989777764352 
root 0
Extent back ref already exists for 22727791968256 parent 22989777764352 
root 0
Extent back ref already exists for 22990383153152 parent 22989777764352 
root 0
Extent back ref already exists for 22989638897664 parent 22989777764352 
root 0
Extent back ref already exists for 22727392038912 parent 22989777764352 
root 0
Extent back ref already exists for 22990049484800 parent 22989777764352 
root 0
Extent back ref already exists for 22727394050048 parent 22989777764352 
root 0
Extent back ref already exists for 22728068673536 parent 22989777764352 
root 0
Extent back ref already exists for 22728592928768 parent 22989777764352 
root 0
Extent back ref already exists for 22727395975168 parent 22989777764352 
root 0
Extent back ref already exists for 22727395983360 parent 22989777764352 
root 0
Extent back ref already exists for 22727395979264 parent 22989777764352 
root 0
Extent back ref already exists for 22727395987456 parent 22989777764352 
root 0
Extent back ref already exists for 22727395991552 parent 22989777764352 
root 0
Extent back ref already exists for 22727395995648 parent 22989777764352 
root 0
Extent back ref already exists for 22727395999744 parent 22989777764352 
root 0
Extent back ref already exists for 22726591852544 parent 22989777764352 
root 0
Extent back ref already exists for 22990516260864 parent 22989777764352 
root 0
Extent back ref already exists for 22990577917952 parent 22989777764352 
root 0
Extent back ref already exists for 22727397859328 parent 22989777764352 
root 0
Extent back ref already exists for 22727397855232 parent 22989777764352 
root 0
Extent back ref already exists for 22727397863424 parent 22989777764352 
root 0
Extent back ref already exists for 22728458592256 parent 22989777764352 
root 0
Extent back ref already exists for 22727397609472 parent 22989777764352 
root 0
Extent back ref already exists for 22989850673152 parent 22989777764352 
root 0
Extent back ref already exists for 22727923941376 parent 22989777764352 
root 0
Extent back ref already exists for 22727397867520 parent 22989777764352 
root 0
Extent back ref already exists for 22727397613568 parent 22989777764352 
root 0
Extent back ref already exists for 22727397871616 parent 22989777764352 
root 0
Extent back ref already exists for 22990270459904 parent 22989777764352 
root 0
Extent back ref already exists for 22727559303168 parent 22989777764352 
root 0
Extent back ref already exists for 22727397617664 parent 22989777764352 
root 0
Extent back ref already exists for 22727397634048 parent 22989777764352 
root 0
Extent back ref already exists for 22727397629952 parent 22989777764352 
root 0
Extent back ref already exists for 22727924060160 parent 22989777764352 
root 0
Extent back ref already exists for 20463088771072 parent 22989777764352 
root 0
Extent back ref already exists for 22989638901760 parent 22989777764352 
root 0
Extent back ref already exists for 22727493369856 parent 22989777764352 
root 0
Extent back ref already exists for 22727104884736 parent 22989777764352 
root 0
Extent back ref already exists for 22989979959296 parent 22989777764352 
root 0
Extent back ref already exists for 22728433455104 parent 22989777764352 
root 0
Extent back ref already exists for 22726856859648 parent 22989777764352 
root 0
Extent back ref already exists for 20463062638592 parent 22989777764352 
root 0
Extent back ref already exists for 20463065632768 parent 22989777764352 
root 0
Extent back ref already exists for 22727869063168 parent 22989777764352 
root 0
Extent back ref already exists for 20463504248832 parent 22989777764352 
root 0
Extent back ref already exists for 22728591757312 parent 22989777764352 
root 0
Extent back ref already exists for 22727924064256 parent 22989777764352 
root 0
Extent back ref already exists for 22727046537216 parent 22989777764352 
root 0
Extent back ref already exists for 20463815450624 parent 22989777764352 
root 0
Extent back ref already exists for 20463085563904 parent 22989777764352 
root 0
Extent back ref already exists for 22726721363968 parent 22989777764352 
root 0
Extent back ref already exists for 22727828881408 parent 22989777764352 
root 0
Extent back ref already exists for 22727828885504 parent 22989777764352 
root 0
Extent back ref already exists for 22728593092608 parent 22989777764352 
root 0
Extent back ref already exists for 22727399534592 parent 22989777764352 
root 0
Extent back ref already exists for 22727828918272 parent 22989777764352 
root 0
Extent back ref already exists for 22990577303552 parent 22989777764352 
root 0
Extent back ref already exists for 22727767691264 parent 22989777764352 
root 0
Extent back ref already exists for 22726584791040 parent 22989777764352 
root 0
Extent back ref already exists for 22989850677248 parent 22989777764352 
root 0
Extent back ref already exists for 22726929747968 parent 22989777764352 
root 0
Extent back ref already exists for 20463560003584 parent 22989777764352 
root 0
Extent back ref already exists for 22727522791424 parent 22989777764352 
root 0
Extent back ref already exists for 20463740538880 parent 22989777764352 
root 0
Extent back ref already exists for 22727399620608 parent 22989777764352 
root 0
Extent back ref already exists for 22990127919104 parent 22989777764352 
root 0
Extent back ref already exists for 22727047536640 parent 22989777764352 
root 0
Extent back ref already exists for 22728068694016 parent 22989777764352 
root 0
Extent back ref already exists for 22727214133248 parent 22989777764352 
root 0
Extent back ref already exists for 22726525747200 parent 22989777764352 
root 0
Extent back ref already exists for 20463477268480 parent 22989777764352 
root 0
Extent back ref already exists for 22727973011456 parent 22989777764352 
root 0
Extent back ref already exists for 22728439918592 parent 22989777764352 
root 0
Extent back ref already exists for 22727972970496 parent 22989777764352 
root 0
Extent back ref already exists for 22728587952128 parent 22989777764352 
root 0
Extent back ref already exists for 22726929244160 parent 22989777764352 
root 0
Extent back ref already exists for 22726929248256 parent 22989777764352 
root 0
Extent back ref already exists for 22727828897792 parent 22989777764352 
root 0
Extent back ref already exists for 20464035745792 parent 22989777764352 
root 0
Extent back ref already exists for 22727828901888 parent 22989777764352 
root 0
Extent back ref already exists for 22727828905984 parent 22989777764352 
root 0
Extent back ref already exists for 22727828910080 parent 22989777764352 
root 0
Extent back ref already exists for 22728414253056 parent 22989777764352 
root 0
Extent back ref already exists for 22728059179008 parent 22989777764352 
root 0
Extent back ref already exists for 22728434843648 parent 22989777764352 
root 0
Extent back ref already exists for 22728335515648 parent 22989777764352 
root 0
Extent back ref already exists for 22728433627136 parent 22989777764352 
root 0
Extent back ref already exists for 22727229071360 parent 22989777764352 
root 0
Extent back ref already exists for 22728100175872 parent 22989777764352 
root 0
Extent back ref already exists for 22726506930176 parent 22989777764352 
root 0
Extent back ref already exists for 22727642562560 parent 22989777764352 
root 0
Extent back ref already exists for 20464037945344 parent 22989777764352 
root 0
Extent back ref already exists for 20463971205120 parent 22989777764352 
root 0
Extent back ref already exists for 20464037908480 parent 22989777764352 
root 0
Extent back ref already exists for 22727828938752 parent 22989777764352 
root 0
Extent back ref already exists for 20464037912576 parent 22989777764352 
root 0
Extent back ref already exists for 20464066527232 parent 22989777764352 
root 0
Extent back ref already exists for 20464066576384 parent 22989777764352 
root 0
Extent back ref already exists for 20463888011264 parent 22989777764352 
root 0
Extent back ref already exists for 22989741858816 parent 22989777764352 
root 0
Extent back ref already exists for 22727873511424 parent 22989777764352 
root 0
Extent back ref already exists for 20464037928960 parent 22989777764352 
root 0
Extent back ref already exists for 20464037916672 parent 22989777764352 
root 0
Extent back ref already exists for 20464037937152 parent 22989777764352 
root 0
Extent back ref already exists for 22990549094400 parent 22989777764352 
root 0
Extent back ref already exists for 20463081979904 parent 22989777764352 
root 0
Extent back ref already exists for 20463064051712 parent 22989777764352 
root 0
Extent back ref already exists for 22727522848768 parent 22989777764352 
root 0
Extent back ref already exists for 22727046541312 parent 22989777764352 
root 0
Extent back ref already exists for 22728325935104 parent 22989777764352 
root 0
Extent back ref already exists for 22728330858496 parent 22989777764352 
root 0
Extent back ref already exists for 22728201531392 parent 22989777764352 
root 0
Extent back ref already exists for 22726507606016 parent 22989777764352 
root 0
Extent back ref already exists for 22726754209792 parent 22989777764352 
root 0
Extent back ref already exists for 22989777768448 parent 22989777764352 
root 0
Extent back ref already exists for 22726918811648 parent 22989777764352 
root 0
Extent back ref already exists for 20463088701440 parent 22989777764352 
root 0
Extent back ref already exists for 22727748812800 parent 22989777764352 
root 0
Extent back ref already exists for 22727973179392 parent 22989777764352 
root 0
Extent back ref already exists for 22727935164416 parent 22989777764352 
root 0
Extent back ref already exists for 22727988588544 parent 22989901127680 
root 0
Extent back ref already exists for 22989901148160 parent 22989901127680 
root 0
Extent back ref already exists for 22727970435072 parent 22989901127680 
root 0
Extent back ref already exists for 22990169636864 parent 22989901127680 
root 0
Extent back ref already exists for 22727091236864 parent 22989901127680 
root 0
Extent back ref already exists for 22727970406400 parent 22989901127680 
root 0
Extent back ref already exists for 20463705088000 parent 22989901127680 
root 0
Extent back ref already exists for 22727969415168 parent 22989901127680 
root 0
Extent back ref already exists for 22727969947648 parent 22989901127680 
root 0
Extent back ref already exists for 22727007752192 parent 22989901127680 
root 0
Extent back ref already exists for 20464059932672 parent 22989901127680 
root 0
Extent back ref already exists for 20464059961344 parent 22989901127680 
root 0
Extent back ref already exists for 20464060006400 parent 22989901127680 
root 0
Extent back ref already exists for 20464060051456 parent 22989901127680 
root 0
Extent back ref already exists for 20464060055552 parent 22989901127680 
root 0
Extent back ref already exists for 20464060096512 parent 22989901127680 
root 0
Extent back ref already exists for 20464060211200 parent 22989901127680 
root 0
Extent back ref already exists for 22727985885184 parent 22989901127680 
root 0
Extent back ref already exists for 22728090324992 parent 22989901127680 
root 0
Extent back ref already exists for 22727987724288 parent 22989901127680 
root 0
Extent back ref already exists for 20464059654144 parent 22989901127680 
root 0
Extent back ref already exists for 22727988805632 parent 22989901127680 
root 0
Extent back ref already exists for 22728091582464 parent 22989901127680 
root 0
Extent back ref already exists for 22728227930112 parent 22989901127680 
root 0
Extent back ref already exists for 22726928609280 parent 22989901127680 
root 0
Extent back ref already exists for 22728092659712 parent 22989901127680 
root 0
Extent back ref already exists for 22727981469696 parent 22989901127680 
root 0
Extent back ref already exists for 22727981498368 parent 22989901127680 
root 0
Extent back ref already exists for 22727981506560 parent 22989901127680 
root 0
Extent back ref already exists for 22728091500544 parent 22989901127680 
root 0
Extent back ref already exists for 22728091508736 parent 22989901127680 
root 0
Extent back ref already exists for 22728091516928 parent 22989901127680 
root 0
Extent back ref already exists for 22728091336704 parent 22989901127680 
root 0
Extent back ref already exists for 22728091467776 parent 22989901127680 
root 0
Extent back ref already exists for 22728237932544 parent 22989901127680 
root 0
Extent back ref already exists for 22728237342720 parent 22989901127680 
root 0
Extent back ref already exists for 22728394371072 parent 22989901127680 
root 0
Extent back ref already exists for 22728394395648 parent 22989901127680 
root 0
Extent back ref already exists for 22726525861888 parent 22989923053568 
root 0
Extent back ref already exists for 20463246749696 parent 22989923053568 root 0
Extent back ref already exists for 20463246761984 parent 22989923053568 root 0
Extent back ref already exists for 20463246774272 parent 22989923053568 root 0
Extent back ref already exists for 20463246786560 parent 22989923053568 root 0
Extent back ref already exists for 20463246798848 parent 22989923053568 root 0
Extent back ref already exists for 20463385513984 parent 22989923053568 root 0
Extent back ref already exists for 20463357014016 parent 22989923053568 root 0
Extent back ref already exists for 20463415046144 parent 22989923053568 root 0
Extent back ref already exists for 20463415054336 parent 22989923053568 root 0
Extent back ref already exists for 22726850498560 parent 22989923053568 root 0
Extent back ref already exists for 20463415070720 parent 22989923053568 root 0
Extent back ref already exists for 20463415083008 parent 22989923053568 root 0
Extent back ref already exists for 20463415091200 parent 22989923053568 root 0
Extent back ref already exists for 20463357095936 parent 22989923053568 root 0
Extent back ref already exists for 20463357104128 parent 22989923053568 root 0
Extent back ref already exists for 20463415111680 parent 22989923053568 root 0
Extent back ref already exists for 20463415128064 parent 22989923053568 root 0
Extent back ref already exists for 20463357140992 parent 22989923053568 root 0
Extent back ref already exists for 20463415152640 parent 22989923053568 root 0
Extent back ref already exists for 20463415193600 parent 22989923053568 root 0
Extent back ref already exists for 20463357177856 parent 22989923053568 root 0
Extent back ref already exists for 20463357186048 parent 22989923053568 root 0
Extent back ref already exists for 20463246721024 parent 22989923053568 root 0
Extent back ref already exists for 20463246725120 parent 22989923053568 root 0
Extent back ref already exists for 20463270359040 parent 22989923053568 root 0
Extent back ref already exists for 20463246745600 parent 22989923053568 root 0
Extent back ref already exists for 20463246753792 parent 22989923053568 root 0
Extent back ref already exists for 20463246757888 parent 22989923053568 root 0
Extent back ref already exists for 20463246766080 parent 22989923053568 root 0
Extent back ref already exists for 20463246770176 parent 22989923053568 root 0
Extent back ref already exists for 20463246778368 parent 22989923053568 root 0
Extent back ref already exists for 20463246782464 parent 22989923053568 root 0
Extent back ref already exists for 20463246790656 parent 22989923053568 root 0
Extent back ref already exists for 20463246794752 parent 22989923053568 root 0
Extent back ref already exists for 20463246802944 parent 22989923053568 root 0
Extent back ref already exists for 20463246811136 parent 22989923053568 root 0
Extent back ref already exists for 20463415132160 parent 22989923053568 root 0
Extent back ref already exists for 20463357018112 parent 22989923053568 root 0
Extent back ref already exists for 20463357022208 parent 22989923053568 root 0
Extent back ref already exists for 20463415042048 parent 22989923053568 root 0
Extent back ref already exists for 20463415050240 parent 22989923053568 root 0
Extent back ref already exists for 22989923340288 parent 22989923053568 root 0
Extent back ref already exists for 20463415062528 parent 22989923053568 root 0
Extent back ref already exists for 20463357054976 parent 22989923053568 root 0
Extent back ref already exists for 20463415066624 parent 22989923053568 root 0
Extent back ref already exists for 20463415074816 parent 22989923053568 root 0
Extent back ref already exists for 20463357075456 parent 22989923053568 root 0
Extent back ref already exists for 20463415087104 parent 22989923053568 root 0
Extent back ref already exists for 20463357087744 parent 22989923053568 root 0
Extent back ref already exists for 20463415095296 parent 22989923053568 root 0
Extent back ref already exists for 20463415099392 parent 22989923053568 root 0
Extent back ref already exists for 20463357108224 parent 22989923053568 root 0
Extent back ref already exists for 20463415103488 parent 22989923053568 root 0
Extent back ref already exists for 20463415107584 parent 22989923053568 root 0
Extent back ref already exists for 20463415119872 parent 22989923053568 root 0
Extent back ref already exists for 20463415136256 parent 22989923053568 root 0
Extent back ref already exists for 20463357136896 parent 22989923053568 root 0
Extent back ref already exists for 20463415144448 parent 22989923053568 root 0
Extent back ref already exists for 20463415148544 parent 22989923053568 root 0
Extent back ref already exists for 20463415173120 parent 22989923053568 root 0
ref mismatch on [20463062638592 4096] extent item 1, found 2
Incorrect global backref count on 20463062638592 found 1 wanted 2
backpointer mismatch on [20463062638592 4096]
ref mismatch on [20463064051712 4096] extent item 1, found 2
Incorrect global backref count on 20463064051712 found 1 wanted 2
backpointer mismatch on [20463064051712 4096]
ref mismatch on [20463065632768 4096] extent item 1, found 2
Incorrect global backref count on 20463065632768 found 1 wanted 2
backpointer mismatch on [20463065632768 4096]
ref mismatch on [20463081979904 4096] extent item 1, found 2
Incorrect global backref count on 20463081979904 found 1 wanted 2
backpointer mismatch on [20463081979904 4096]
ref mismatch on [20463085563904 4096] extent item 1, found 2
Incorrect global backref count on 20463085563904 found 1 wanted 2
backpointer mismatch on [20463085563904 4096]
ref mismatch on [20463088701440 4096] extent item 1, found 2
Incorrect global backref count on 20463088701440 found 1 wanted 2
backpointer mismatch on [20463088701440 4096]
ref mismatch on [20463088771072 4096] extent item 1, found 2
Incorrect global backref count on 20463088771072 found 1 wanted 2
backpointer mismatch on [20463088771072 4096]
ref mismatch on [20463246721024 4096] extent item 1, found 2
Incorrect global backref count on 20463246721024 found 1 wanted 2
backpointer mismatch on [20463246721024 4096]
ref mismatch on [20463246725120 4096] extent item 1, found 2
Incorrect global backref count on 20463246725120 found 1 wanted 2
backpointer mismatch on [20463246725120 4096]
ref mismatch on [20463246745600 4096] extent item 1, found 2
Incorrect global backref count on 20463246745600 found 1 wanted 2
backpointer mismatch on [20463246745600 4096]
ref mismatch on [20463246749696 4096] extent item 1, found 2
Incorrect global backref count on 20463246749696 found 1 wanted 2
backpointer mismatch on [20463246749696 4096]
ref mismatch on [20463246753792 4096] extent item 1, found 2
Incorrect global backref count on 20463246753792 found 1 wanted 2
backpointer mismatch on [20463246753792 4096]
ref mismatch on [20463246757888 4096] extent item 1, found 2
Incorrect global backref count on 20463246757888 found 1 wanted 2
backpointer mismatch on [20463246757888 4096]
ref mismatch on [20463246761984 4096] extent item 1, found 2
Incorrect global backref count on 20463246761984 found 1 wanted 2
backpointer mismatch on [20463246761984 4096]
ref mismatch on [20463246766080 4096] extent item 1, found 2
Incorrect global backref count on 20463246766080 found 1 wanted 2
backpointer mismatch on [20463246766080 4096]
ref mismatch on [20463246770176 4096] extent item 1, found 2
Incorrect global backref count on 20463246770176 found 1 wanted 2
backpointer mismatch on [20463246770176 4096]
ref mismatch on [20463246774272 4096] extent item 1, found 2
Incorrect global backref count on 20463246774272 found 1 wanted 2
backpointer mismatch on [20463246774272 4096]
ref mismatch on [20463246778368 4096] extent item 1, found 2
Incorrect global backref count on 20463246778368 found 1 wanted 2
backpointer mismatch on [20463246778368 4096]
ref mismatch on [20463246782464 4096] extent item 1, found 2
Incorrect global backref count on 20463246782464 found 1 wanted 2
backpointer mismatch on [20463246782464 4096]
ref mismatch on [20463246786560 4096] extent item 1, found 2
Incorrect global backref count on 20463246786560 found 1 wanted 2
backpointer mismatch on [20463246786560 4096]
ref mismatch on [20463246790656 4096] extent item 1, found 2
Incorrect global backref count on 20463246790656 found 1 wanted 2
backpointer mismatch on [20463246790656 4096]
ref mismatch on [20463246794752 4096] extent item 1, found 2
Incorrect global backref count on 20463246794752 found 1 wanted 2
backpointer mismatch on [20463246794752 4096]
ref mismatch on [20463246798848 4096] extent item 1, found 2
Incorrect global backref count on 20463246798848 found 1 wanted 2
backpointer mismatch on [20463246798848 4096]
ref mismatch on [20463246802944 4096] extent item 1, found 2
Incorrect global backref count on 20463246802944 found 1 wanted 2
backpointer mismatch on [20463246802944 4096]
ref mismatch on [20463246811136 4096] extent item 1, found 2
Incorrect global backref count on 20463246811136 found 1 wanted 2
backpointer mismatch on [20463246811136 4096]
ref mismatch on [20463270359040 4096] extent item 1, found 2
Incorrect global backref count on 20463270359040 found 1 wanted 2
backpointer mismatch on [20463270359040 4096]
ref mismatch on [20463357014016 4096] extent item 1, found 2
Incorrect global backref count on 20463357014016 found 1 wanted 2
backpointer mismatch on [20463357014016 4096]
ref mismatch on [20463357018112 4096] extent item 1, found 2
Incorrect global backref count on 20463357018112 found 1 wanted 2
backpointer mismatch on [20463357018112 4096]
ref mismatch on [20463357022208 4096] extent item 1, found 2
Incorrect global backref count on 20463357022208 found 1 wanted 2
backpointer mismatch on [20463357022208 4096]
ref mismatch on [20463357054976 4096] extent item 1, found 2
Incorrect global backref count on 20463357054976 found 1 wanted 2
backpointer mismatch on [20463357054976 4096]
ref mismatch on [20463357075456 4096] extent item 1, found 2
Incorrect global backref count on 20463357075456 found 1 wanted 2
backpointer mismatch on [20463357075456 4096]
ref mismatch on [20463357087744 4096] extent item 1, found 2
Incorrect global backref count on 20463357087744 found 1 wanted 2
backpointer mismatch on [20463357087744 4096]
ref mismatch on [20463357095936 4096] extent item 1, found 2
Incorrect global backref count on 20463357095936 found 1 wanted 2
backpointer mismatch on [20463357095936 4096]
ref mismatch on [20463357104128 4096] extent item 1, found 2
Incorrect global backref count on 20463357104128 found 1 wanted 2
backpointer mismatch on [20463357104128 4096]
ref mismatch on [20463357108224 4096] extent item 1, found 2
Incorrect global backref count on 20463357108224 found 1 wanted 2
backpointer mismatch on [20463357108224 4096]
ref mismatch on [20463357136896 4096] extent item 1, found 2
Incorrect global backref count on 20463357136896 found 1 wanted 2
backpointer mismatch on [20463357136896 4096]
ref mismatch on [20463357140992 4096] extent item 1, found 2
Incorrect global backref count on 20463357140992 found 1 wanted 2
backpointer mismatch on [20463357140992 4096]
ref mismatch on [20463357177856 4096] extent item 1, found 2
Incorrect global backref count on 20463357177856 found 1 wanted 2
backpointer mismatch on [20463357177856 4096]
ref mismatch on [20463357186048 4096] extent item 1, found 2
Incorrect global backref count on 20463357186048 found 1 wanted 2
backpointer mismatch on [20463357186048 4096]
ref mismatch on [20463385513984 4096] extent item 1, found 2
Incorrect global backref count on 20463385513984 found 1 wanted 2
backpointer mismatch on [20463385513984 4096]
ref mismatch on [20463415042048 4096] extent item 1, found 2
Incorrect global backref count on 20463415042048 found 1 wanted 2
backpointer mismatch on [20463415042048 4096]
ref mismatch on [20463415046144 4096] extent item 1, found 2
Incorrect global backref count on 20463415046144 found 1 wanted 2
backpointer mismatch on [20463415046144 4096]
ref mismatch on [20463415050240 4096] extent item 1, found 2
Incorrect global backref count on 20463415050240 found 1 wanted 2
backpointer mismatch on [20463415050240 4096]
ref mismatch on [20463415054336 4096] extent item 1, found 2
Incorrect global backref count on 20463415054336 found 1 wanted 2
backpointer mismatch on [20463415054336 4096]
ref mismatch on [20463415062528 4096] extent item 1, found 2
Incorrect global backref count on 20463415062528 found 1 wanted 2
backpointer mismatch on [20463415062528 4096]
ref mismatch on [20463415066624 4096] extent item 1, found 2
Incorrect global backref count on 20463415066624 found 1 wanted 2
backpointer mismatch on [20463415066624 4096]
ref mismatch on [20463415070720 4096] extent item 1, found 2
Incorrect global backref count on 20463415070720 found 1 wanted 2
backpointer mismatch on [20463415070720 4096]
ref mismatch on [20463415074816 4096] extent item 1, found 2
Incorrect global backref count on 20463415074816 found 1 wanted 2
backpointer mismatch on [20463415074816 4096]
ref mismatch on [20463415083008 4096] extent item 1, found 2
Incorrect global backref count on 20463415083008 found 1 wanted 2
backpointer mismatch on [20463415083008 4096]
ref mismatch on [20463415087104 4096] extent item 1, found 2
Incorrect global backref count on 20463415087104 found 1 wanted 2
backpointer mismatch on [20463415087104 4096]
ref mismatch on [20463415091200 4096] extent item 1, found 2
Incorrect global backref count on 20463415091200 found 1 wanted 2
backpointer mismatch on [20463415091200 4096]
ref mismatch on [20463415095296 4096] extent item 1, found 2
Incorrect global backref count on 20463415095296 found 1 wanted 2
backpointer mismatch on [20463415095296 4096]
ref mismatch on [20463415099392 4096] extent item 1, found 2
Incorrect global backref count on 20463415099392 found 1 wanted 2
backpointer mismatch on [20463415099392 4096]
ref mismatch on [20463415103488 4096] extent item 1, found 2
Incorrect global backref count on 20463415103488 found 1 wanted 2
backpointer mismatch on [20463415103488 4096]
ref mismatch on [20463415107584 4096] extent item 1, found 2
Incorrect global backref count on 20463415107584 found 1 wanted 2
backpointer mismatch on [20463415107584 4096]
ref mismatch on [20463415111680 4096] extent item 1, found 2
Incorrect global backref count on 20463415111680 found 1 wanted 2
backpointer mismatch on [20463415111680 4096]
ref mismatch on [20463415119872 4096] extent item 1, found 2
Incorrect global backref count on 20463415119872 found 1 wanted 2
backpointer mismatch on [20463415119872 4096]
ref mismatch on [20463415128064 4096] extent item 1, found 2
Incorrect global backref count on 20463415128064 found 1 wanted 2
backpointer mismatch on [20463415128064 4096]
ref mismatch on [20463415132160 4096] extent item 1, found 2
Incorrect global backref count on 20463415132160 found 1 wanted 2
backpointer mismatch on [20463415132160 4096]
ref mismatch on [20463415136256 4096] extent item 1, found 2
Incorrect global backref count on 20463415136256 found 1 wanted 2
backpointer mismatch on [20463415136256 4096]
ref mismatch on [20463415144448 4096] extent item 1, found 2
Incorrect global backref count on 20463415144448 found 1 wanted 2
backpointer mismatch on [20463415144448 4096]
ref mismatch on [20463415148544 4096] extent item 1, found 2
Incorrect global backref count on 20463415148544 found 1 wanted 2
backpointer mismatch on [20463415148544 4096]
ref mismatch on [20463415152640 4096] extent item 1, found 2
Incorrect global backref count on 20463415152640 found 1 wanted 2
backpointer mismatch on [20463415152640 4096]
ref mismatch on [20463415173120 4096] extent item 1, found 2
Incorrect global backref count on 20463415173120 found 1 wanted 2
backpointer mismatch on [20463415173120 4096]
ref mismatch on [20463415193600 4096] extent item 1, found 2
Incorrect global backref count on 20463415193600 found 1 wanted 2
backpointer mismatch on [20463415193600 4096]
ref mismatch on [20463459852288 4096] extent item 1, found 2
Incorrect global backref count on 20463459852288 found 1 wanted 2
backpointer mismatch on [20463459852288 4096]
ref mismatch on [20463477268480 4096] extent item 1, found 2
Incorrect global backref count on 20463477268480 found 1 wanted 2
backpointer mismatch on [20463477268480 4096]
ref mismatch on [20463504248832 4096] extent item 1, found 2
Incorrect global backref count on 20463504248832 found 1 wanted 2
backpointer mismatch on [20463504248832 4096]
ref mismatch on [20463560003584 4096] extent item 1, found 2
Incorrect global backref count on 20463560003584 found 1 wanted 2
backpointer mismatch on [20463560003584 4096]
ref mismatch on [20463705088000 4096] extent item 1, found 2
Incorrect global backref count on 20463705088000 found 1 wanted 2
backpointer mismatch on [20463705088000 4096]
ref mismatch on [20463740538880 4096] extent item 1, found 2
Incorrect global backref count on 20463740538880 found 1 wanted 2
backpointer mismatch on [20463740538880 4096]
ref mismatch on [20463815450624 4096] extent item 1, found 2
Incorrect global backref count on 20463815450624 found 1 wanted 2
backpointer mismatch on [20463815450624 4096]
ref mismatch on [20463888011264 4096] extent item 1, found 2
Incorrect global backref count on 20463888011264 found 1 wanted 2
backpointer mismatch on [20463888011264 4096]
ref mismatch on [20463971205120 4096] extent item 1, found 2
Incorrect global backref count on 20463971205120 found 1 wanted 2
backpointer mismatch on [20463971205120 4096]
ref mismatch on [20464035745792 4096] extent item 1, found 2
Incorrect global backref count on 20464035745792 found 1 wanted 2
backpointer mismatch on [20464035745792 4096]
ref mismatch on [20464037908480 4096] extent item 1, found 2
Incorrect global backref count on 20464037908480 found 1 wanted 2
backpointer mismatch on [20464037908480 4096]
ref mismatch on [20464037912576 4096] extent item 1, found 2
Incorrect global backref count on 20464037912576 found 1 wanted 2
backpointer mismatch on [20464037912576 4096]
ref mismatch on [20464037916672 4096] extent item 1, found 2
Incorrect global backref count on 20464037916672 found 1 wanted 2
backpointer mismatch on [20464037916672 4096]
ref mismatch on [20464037928960 4096] extent item 1, found 2
Incorrect global backref count on 20464037928960 found 1 wanted 2
backpointer mismatch on [20464037928960 4096]
ref mismatch on [20464037937152 4096] extent item 1, found 2
Incorrect global backref count on 20464037937152 found 1 wanted 2
backpointer mismatch on [20464037937152 4096]
ref mismatch on [20464037945344 4096] extent item 1, found 2
Incorrect global backref count on 20464037945344 found 1 wanted 2
backpointer mismatch on [20464037945344 4096]
ref mismatch on [20464059654144 4096] extent item 1, found 2
Incorrect global backref count on 20464059654144 found 1 wanted 2
backpointer mismatch on [20464059654144 4096]
ref mismatch on [20464059932672 4096] extent item 1, found 2
Incorrect global backref count on 20464059932672 found 1 wanted 2
backpointer mismatch on [20464059932672 4096]
ref mismatch on [20464059961344 4096] extent item 1, found 2
Incorrect global backref count on 20464059961344 found 1 wanted 2
backpointer mismatch on [20464059961344 4096]
ref mismatch on [20464060006400 4096] extent item 1, found 2
Incorrect global backref count on 20464060006400 found 1 wanted 2
backpointer mismatch on [20464060006400 4096]
ref mismatch on [20464060051456 4096] extent item 1, found 2
Incorrect global backref count on 20464060051456 found 1 wanted 2
backpointer mismatch on [20464060051456 4096]
ref mismatch on [20464060055552 4096] extent item 1, found 2
Incorrect global backref count on 20464060055552 found 1 wanted 2
backpointer mismatch on [20464060055552 4096]
ref mismatch on [20464060096512 4096] extent item 1, found 2
Incorrect global backref count on 20464060096512 found 1 wanted 2
backpointer mismatch on [20464060096512 4096]
ref mismatch on [20464060211200 4096] extent item 1, found 2
Incorrect global backref count on 20464060211200 found 1 wanted 2
backpointer mismatch on [20464060211200 4096]
ref mismatch on [20464066527232 4096] extent item 1, found 2
Incorrect global backref count on 20464066527232 found 1 wanted 2
backpointer mismatch on [20464066527232 4096]
ref mismatch on [20464066576384 4096] extent item 1, found 2
Incorrect global backref count on 20464066576384 found 1 wanted 2
backpointer mismatch on [20464066576384 4096]
ref mismatch on [22726506930176 4096] extent item 1, found 2
Incorrect global backref count on 22726506930176 found 1 wanted 2
backpointer mismatch on [22726506930176 4096]
ref mismatch on [22726507606016 4096] extent item 1, found 2
Incorrect global backref count on 22726507606016 found 1 wanted 2
backpointer mismatch on [22726507606016 4096]
ref mismatch on [22726525747200 4096] extent item 1, found 2
Incorrect global backref count on 22726525747200 found 1 wanted 2
backpointer mismatch on [22726525747200 4096]
ref mismatch on [22726525861888 4096] extent item 1, found 2
Incorrect global backref count on 22726525861888 found 1 wanted 2
backpointer mismatch on [22726525861888 4096]
ref mismatch on [22726584791040 4096] extent item 1, found 2
Incorrect global backref count on 22726584791040 found 1 wanted 2
backpointer mismatch on [22726584791040 4096]
ref mismatch on [22726591852544 4096] extent item 1, found 2
Incorrect global backref count on 22726591852544 found 1 wanted 2
backpointer mismatch on [22726591852544 4096]
ref mismatch on [22726721363968 4096] extent item 1, found 2
Incorrect global backref count on 22726721363968 found 1 wanted 2
backpointer mismatch on [22726721363968 4096]
ref mismatch on [22726754209792 4096] extent item 1, found 2
Incorrect global backref count on 22726754209792 found 1 wanted 2
backpointer mismatch on [22726754209792 4096]
ref mismatch on [22726850498560 4096] extent item 1, found 2
Incorrect global backref count on 22726850498560 found 1 wanted 2
backpointer mismatch on [22726850498560 4096]
ref mismatch on [22726856859648 4096] extent item 1, found 2
Incorrect global backref count on 22726856859648 found 1 wanted 2
backpointer mismatch on [22726856859648 4096]
ref mismatch on [22726918811648 4096] extent item 1, found 2
Incorrect global backref count on 22726918811648 found 1 wanted 2
backpointer mismatch on [22726918811648 4096]
ref mismatch on [22726928609280 4096] extent item 1, found 2
Incorrect global backref count on 22726928609280 found 1 wanted 2
backpointer mismatch on [22726928609280 4096]
ref mismatch on [22726929244160 4096] extent item 1, found 2
Incorrect global backref count on 22726929244160 found 1 wanted 2
backpointer mismatch on [22726929244160 4096]
ref mismatch on [22726929248256 4096] extent item 1, found 2
Incorrect global backref count on 22726929248256 found 1 wanted 2
backpointer mismatch on [22726929248256 4096]
ref mismatch on [22726929747968 4096] extent item 1, found 2
Incorrect global backref count on 22726929747968 found 1 wanted 2
backpointer mismatch on [22726929747968 4096]
ref mismatch on [22727007752192 4096] extent item 1, found 2
Incorrect global backref count on 22727007752192 found 1 wanted 2
backpointer mismatch on [22727007752192 4096]
ref mismatch on [22727046537216 4096] extent item 1, found 2
Incorrect global backref count on 22727046537216 found 1 wanted 2
backpointer mismatch on [22727046537216 4096]
ref mismatch on [22727046541312 4096] extent item 1, found 2
Incorrect global backref count on 22727046541312 found 1 wanted 2
backpointer mismatch on [22727046541312 4096]
ref mismatch on [22727047536640 4096] extent item 1, found 2
Incorrect global backref count on 22727047536640 found 1 wanted 2
backpointer mismatch on [22727047536640 4096]
ref mismatch on [22727091236864 4096] extent item 1, found 2
Incorrect global backref count on 22727091236864 found 1 wanted 2
backpointer mismatch on [22727091236864 4096]
ref mismatch on [22727104884736 4096] extent item 1, found 2
Incorrect global backref count on 22727104884736 found 1 wanted 2
backpointer mismatch on [22727104884736 4096]
ref mismatch on [22727214133248 4096] extent item 1, found 2
Incorrect global backref count on 22727214133248 found 1 wanted 2
backpointer mismatch on [22727214133248 4096]
ref mismatch on [22727229071360 4096] extent item 1, found 2
Incorrect global backref count on 22727229071360 found 1 wanted 2
backpointer mismatch on [22727229071360 4096]
ref mismatch on [22727392038912 4096] extent item 1, found 2
Incorrect global backref count on 22727392038912 found 1 wanted 2
backpointer mismatch on [22727392038912 4096]
ref mismatch on [22727394050048 4096] extent item 1, found 2
Incorrect global backref count on 22727394050048 found 1 wanted 2
backpointer mismatch on [22727394050048 4096]
ref mismatch on [22727395975168 4096] extent item 1, found 2
Incorrect global backref count on 22727395975168 found 1 wanted 2
backpointer mismatch on [22727395975168 4096]
ref mismatch on [22727395979264 4096] extent item 1, found 2
Incorrect global backref count on 22727395979264 found 1 wanted 2
backpointer mismatch on [22727395979264 4096]
ref mismatch on [22727395983360 4096] extent item 1, found 2
Incorrect global backref count on 22727395983360 found 1 wanted 2
backpointer mismatch on [22727395983360 4096]
ref mismatch on [22727395987456 4096] extent item 1, found 2
Incorrect global backref count on 22727395987456 found 1 wanted 2
backpointer mismatch on [22727395987456 4096]
ref mismatch on [22727395991552 4096] extent item 1, found 2
Incorrect global backref count on 22727395991552 found 1 wanted 2
backpointer mismatch on [22727395991552 4096]
ref mismatch on [22727395995648 4096] extent item 1, found 2
Incorrect global backref count on 22727395995648 found 1 wanted 2
backpointer mismatch on [22727395995648 4096]
ref mismatch on [22727395999744 4096] extent item 1, found 2
Incorrect global backref count on 22727395999744 found 1 wanted 2
backpointer mismatch on [22727395999744 4096]
ref mismatch on [22727397609472 4096] extent item 1, found 2
Incorrect global backref count on 22727397609472 found 1 wanted 2
backpointer mismatch on [22727397609472 4096]
ref mismatch on [22727397613568 4096] extent item 1, found 2
Incorrect global backref count on 22727397613568 found 1 wanted 2
backpointer mismatch on [22727397613568 4096]
ref mismatch on [22727397617664 4096] extent item 1, found 2
Incorrect global backref count on 22727397617664 found 1 wanted 2
backpointer mismatch on [22727397617664 4096]
ref mismatch on [22727397629952 4096] extent item 1, found 2
Incorrect global backref count on 22727397629952 found 1 wanted 2
backpointer mismatch on [22727397629952 4096]
ref mismatch on [22727397634048 4096] extent item 1, found 2
Incorrect global backref count on 22727397634048 found 1 wanted 2
backpointer mismatch on [22727397634048 4096]
ref mismatch on [22727397855232 4096] extent item 1, found 2
Incorrect global backref count on 22727397855232 found 1 wanted 2
backpointer mismatch on [22727397855232 4096]
ref mismatch on [22727397859328 4096] extent item 1, found 2
Incorrect global backref count on 22727397859328 found 1 wanted 2
backpointer mismatch on [22727397859328 4096]
ref mismatch on [22727397863424 4096] extent item 1, found 2
Incorrect global backref count on 22727397863424 found 1 wanted 2
backpointer mismatch on [22727397863424 4096]
ref mismatch on [22727397867520 4096] extent item 1, found 2
Incorrect global backref count on 22727397867520 found 1 wanted 2
backpointer mismatch on [22727397867520 4096]
ref mismatch on [22727397871616 4096] extent item 1, found 2
Incorrect global backref count on 22727397871616 found 1 wanted 2
backpointer mismatch on [22727397871616 4096]
ref mismatch on [22727399534592 4096] extent item 1, found 2
Incorrect global backref count on 22727399534592 found 1 wanted 2
backpointer mismatch on [22727399534592 4096]
ref mismatch on [22727399620608 4096] extent item 1, found 2
Incorrect global backref count on 22727399620608 found 1 wanted 2
backpointer mismatch on [22727399620608 4096]
ref mismatch on [22727493369856 4096] extent item 1, found 2
Incorrect global backref count on 22727493369856 found 1 wanted 2
backpointer mismatch on [22727493369856 4096]
ref mismatch on [22727522791424 4096] extent item 1, found 2
Incorrect global backref count on 22727522791424 found 1 wanted 2
backpointer mismatch on [22727522791424 4096]
ref mismatch on [22727522848768 4096] extent item 1, found 2
Incorrect global backref count on 22727522848768 found 1 wanted 2
backpointer mismatch on [22727522848768 4096]
ref mismatch on [22727559303168 4096] extent item 1, found 2
Incorrect global backref count on 22727559303168 found 1 wanted 2
backpointer mismatch on [22727559303168 4096]
ref mismatch on [22727642562560 4096] extent item 1, found 2
Incorrect global backref count on 22727642562560 found 1 wanted 2
backpointer mismatch on [22727642562560 4096]
ref mismatch on [22727748812800 4096] extent item 1, found 2
Incorrect global backref count on 22727748812800 found 1 wanted 2
backpointer mismatch on [22727748812800 4096]
ref mismatch on [22727767691264 4096] extent item 1, found 2
Incorrect global backref count on 22727767691264 found 1 wanted 2
backpointer mismatch on [22727767691264 4096]
ref mismatch on [22727791968256 4096] extent item 1, found 2
Incorrect global backref count on 22727791968256 found 1 wanted 2
backpointer mismatch on [22727791968256 4096]
ref mismatch on [22727828881408 4096] extent item 1, found 2
Incorrect global backref count on 22727828881408 found 1 wanted 2
backpointer mismatch on [22727828881408 4096]
ref mismatch on [22727828885504 4096] extent item 1, found 2
Incorrect global backref count on 22727828885504 found 1 wanted 2
backpointer mismatch on [22727828885504 4096]
ref mismatch on [22727828897792 4096] extent item 1, found 2
Incorrect global backref count on 22727828897792 found 1 wanted 2
backpointer mismatch on [22727828897792 4096]
ref mismatch on [22727828901888 4096] extent item 1, found 2
Incorrect global backref count on 22727828901888 found 1 wanted 2
backpointer mismatch on [22727828901888 4096]
ref mismatch on [22727828905984 4096] extent item 1, found 2
Incorrect global backref count on 22727828905984 found 1 wanted 2
backpointer mismatch on [22727828905984 4096]
ref mismatch on [22727828910080 4096] extent item 1, found 2
Incorrect global backref count on 22727828910080 found 1 wanted 2
backpointer mismatch on [22727828910080 4096]
ref mismatch on [22727828918272 4096] extent item 1, found 2
Incorrect global backref count on 22727828918272 found 1 wanted 2
backpointer mismatch on [22727828918272 4096]
ref mismatch on [22727828938752 4096] extent item 1, found 2
Incorrect global backref count on 22727828938752 found 1 wanted 2
backpointer mismatch on [22727828938752 4096]
ref mismatch on [22727869063168 4096] extent item 1, found 2
Incorrect global backref count on 22727869063168 found 1 wanted 2
backpointer mismatch on [22727869063168 4096]
ref mismatch on [22727873511424 4096] extent item 1, found 2
Incorrect global backref count on 22727873511424 found 1 wanted 2
backpointer mismatch on [22727873511424 4096]
ref mismatch on [22727923941376 4096] extent item 1, found 2
Incorrect global backref count on 22727923941376 found 1 wanted 2
backpointer mismatch on [22727923941376 4096]
ref mismatch on [22727924060160 4096] extent item 1, found 2
Incorrect global backref count on 22727924060160 found 1 wanted 2
backpointer mismatch on [22727924060160 4096]
ref mismatch on [22727924064256 4096] extent item 1, found 2
Incorrect global backref count on 22727924064256 found 1 wanted 2
backpointer mismatch on [22727924064256 4096]
ref mismatch on [22727935164416 4096] extent item 1, found 2
Incorrect global backref count on 22727935164416 found 1 wanted 2
backpointer mismatch on [22727935164416 4096]
ref mismatch on [22727969415168 4096] extent item 1, found 2
Incorrect global backref count on 22727969415168 found 1 wanted 2
backpointer mismatch on [22727969415168 4096]
ref mismatch on [22727969947648 4096] extent item 1, found 2
Incorrect global backref count on 22727969947648 found 1 wanted 2
backpointer mismatch on [22727969947648 4096]
ref mismatch on [22727970406400 4096] extent item 1, found 2
Incorrect global backref count on 22727970406400 found 1 wanted 2
backpointer mismatch on [22727970406400 4096]
ref mismatch on [22727970435072 4096] extent item 1, found 2
Incorrect global backref count on 22727970435072 found 1 wanted 2
backpointer mismatch on [22727970435072 4096]
ref mismatch on [22727972970496 4096] extent item 1, found 2
Incorrect global backref count on 22727972970496 found 1 wanted 2
backpointer mismatch on [22727972970496 4096]
ref mismatch on [22727973011456 4096] extent item 1, found 2
Incorrect global backref count on 22727973011456 found 1 wanted 2
backpointer mismatch on [22727973011456 4096]
ref mismatch on [22727973179392 4096] extent item 1, found 2
Incorrect global backref count on 22727973179392 found 1 wanted 2
backpointer mismatch on [22727973179392 4096]
ref mismatch on [22727981469696 4096] extent item 1, found 2
Incorrect global backref count on 22727981469696 found 1 wanted 2
backpointer mismatch on [22727981469696 4096]
ref mismatch on [22727981498368 4096] extent item 1, found 2
Incorrect global backref count on 22727981498368 found 1 wanted 2
backpointer mismatch on [22727981498368 4096]
ref mismatch on [22727981506560 4096] extent item 1, found 2
Incorrect global backref count on 22727981506560 found 1 wanted 2
backpointer mismatch on [22727981506560 4096]
ref mismatch on [22727985885184 4096] extent item 1, found 2
Incorrect global backref count on 22727985885184 found 1 wanted 2
backpointer mismatch on [22727985885184 4096]
ref mismatch on [22727987724288 4096] extent item 1, found 2
Incorrect global backref count on 22727987724288 found 1 wanted 2
backpointer mismatch on [22727987724288 4096]
ref mismatch on [22727988588544 4096] extent item 1, found 2
Incorrect global backref count on 22727988588544 found 1 wanted 2
backpointer mismatch on [22727988588544 4096]
ref mismatch on [22727988805632 4096] extent item 1, found 2
Incorrect global backref count on 22727988805632 found 1 wanted 2
backpointer mismatch on [22727988805632 4096]
ref mismatch on [22728059179008 4096] extent item 1, found 2
Incorrect global backref count on 22728059179008 found 1 wanted 2
backpointer mismatch on [22728059179008 4096]
ref mismatch on [22728068673536 4096] extent item 1, found 2
Incorrect global backref count on 22728068673536 found 1 wanted 2
backpointer mismatch on [22728068673536 4096]
ref mismatch on [22728068694016 4096] extent item 1, found 2
Incorrect global backref count on 22728068694016 found 1 wanted 2
backpointer mismatch on [22728068694016 4096]
ref mismatch on [22728090324992 4096] extent item 1, found 2
Incorrect global backref count on 22728090324992 found 1 wanted 2
backpointer mismatch on [22728090324992 4096]
ref mismatch on [22728091336704 4096] extent item 1, found 2
Incorrect global backref count on 22728091336704 found 1 wanted 2
backpointer mismatch on [22728091336704 4096]
ref mismatch on [22728091467776 4096] extent item 1, found 2
Incorrect global backref count on 22728091467776 found 1 wanted 2
backpointer mismatch on [22728091467776 4096]
ref mismatch on [22728091500544 4096] extent item 1, found 2
Incorrect global backref count on 22728091500544 found 1 wanted 2
backpointer mismatch on [22728091500544 4096]
ref mismatch on [22728091508736 4096] extent item 1, found 2
Incorrect global backref count on 22728091508736 found 1 wanted 2
backpointer mismatch on [22728091508736 4096]
ref mismatch on [22728091516928 4096] extent item 1, found 2
Incorrect global backref count on 22728091516928 found 1 wanted 2
backpointer mismatch on [22728091516928 4096]
ref mismatch on [22728091582464 4096] extent item 1, found 2
Incorrect global backref count on 22728091582464 found 1 wanted 2
backpointer mismatch on [22728091582464 4096]
ref mismatch on [22728092659712 4096] extent item 1, found 2
Incorrect global backref count on 22728092659712 found 1 wanted 2
backpointer mismatch on [22728092659712 4096]
ref mismatch on [22728100175872 4096] extent item 1, found 2
Incorrect global backref count on 22728100175872 found 1 wanted 2
backpointer mismatch on [22728100175872 4096]
ref mismatch on [22728201531392 4096] extent item 1, found 2
Incorrect global backref count on 22728201531392 found 1 wanted 2
backpointer mismatch on [22728201531392 4096]
ref mismatch on [22728227930112 4096] extent item 1, found 2
Incorrect global backref count on 22728227930112 found 1 wanted 2
backpointer mismatch on [22728227930112 4096]
ref mismatch on [22728237342720 4096] extent item 1, found 2
Incorrect global backref count on 22728237342720 found 1 wanted 2
backpointer mismatch on [22728237342720 4096]
ref mismatch on [22728237932544 4096] extent item 1, found 2
Incorrect global backref count on 22728237932544 found 1 wanted 2
backpointer mismatch on [22728237932544 4096]
ref mismatch on [22728325935104 4096] extent item 1, found 2
Incorrect global backref count on 22728325935104 found 1 wanted 2
backpointer mismatch on [22728325935104 4096]
ref mismatch on [22728330858496 4096] extent item 1, found 2
Incorrect global backref count on 22728330858496 found 1 wanted 2
backpointer mismatch on [22728330858496 4096]
ref mismatch on [22728335515648 4096] extent item 1, found 2
Incorrect global backref count on 22728335515648 found 1 wanted 2
backpointer mismatch on [22728335515648 4096]
ref mismatch on [22728394371072 4096] extent item 1, found 2
Incorrect global backref count on 22728394371072 found 1 wanted 2
backpointer mismatch on [22728394371072 4096]
ref mismatch on [22728394395648 4096] extent item 1, found 2
Incorrect global backref count on 22728394395648 found 1 wanted 2
backpointer mismatch on [22728394395648 4096]
ref mismatch on [22728414253056 4096] extent item 1, found 2
Incorrect global backref count on 22728414253056 found 1 wanted 2
backpointer mismatch on [22728414253056 4096]
ref mismatch on [22728433455104 4096] extent item 1, found 2
Incorrect global backref count on 22728433455104 found 1 wanted 2
backpointer mismatch on [22728433455104 4096]
ref mismatch on [22728433627136 4096] extent item 1, found 2
Incorrect global backref count on 22728433627136 found 1 wanted 2
backpointer mismatch on [22728433627136 4096]
ref mismatch on [22728434843648 4096] extent item 1, found 2
Incorrect global backref count on 22728434843648 found 1 wanted 2
backpointer mismatch on [22728434843648 4096]
ref mismatch on [22728439918592 4096] extent item 1, found 2
Incorrect global backref count on 22728439918592 found 1 wanted 2
backpointer mismatch on [22728439918592 4096]
ref mismatch on [22728458592256 4096] extent item 1, found 2
Incorrect global backref count on 22728458592256 found 1 wanted 2
backpointer mismatch on [22728458592256 4096]
ref mismatch on [22728587952128 4096] extent item 1, found 2
Incorrect global backref count on 22728587952128 found 1 wanted 2
backpointer mismatch on [22728587952128 4096]
ref mismatch on [22728591757312 4096] extent item 1, found 2
Incorrect global backref count on 22728591757312 found 1 wanted 2
backpointer mismatch on [22728591757312 4096]
ref mismatch on [22728592928768 4096] extent item 1, found 2
Incorrect global backref count on 22728592928768 found 1 wanted 2
backpointer mismatch on [22728592928768 4096]
ref mismatch on [22728593092608 4096] extent item 1, found 2
Incorrect global backref count on 22728593092608 found 1 wanted 2
backpointer mismatch on [22728593092608 4096]
ref mismatch on [22989638897664 4096] extent item 1, found 2
Incorrect global backref count on 22989638897664 found 1 wanted 2
backpointer mismatch on [22989638897664 4096]
ref mismatch on [22989638901760 4096] extent item 1, found 2
Incorrect global backref count on 22989638901760 found 1 wanted 2
backpointer mismatch on [22989638901760 4096]
ref mismatch on [22989741858816 4096] extent item 1, found 2
Incorrect global backref count on 22989741858816 found 1 wanted 2
backpointer mismatch on [22989741858816 4096]
ref mismatch on [22989777768448 4096] extent item 1, found 2
Incorrect global backref count on 22989777768448 found 1 wanted 2
backpointer mismatch on [22989777768448 4096]
ref mismatch on [22989850673152 4096] extent item 1, found 2
Incorrect global backref count on 22989850673152 found 1 wanted 2
backpointer mismatch on [22989850673152 4096]
ref mismatch on [22989850677248 4096] extent item 1, found 2
Incorrect global backref count on 22989850677248 found 1 wanted 2
backpointer mismatch on [22989850677248 4096]
ref mismatch on [22989901148160 4096] extent item 1, found 2
Incorrect global backref count on 22989901148160 found 1 wanted 2
backpointer mismatch on [22989901148160 4096]
ref mismatch on [22989923340288 4096] extent item 1, found 2
Incorrect global backref count on 22989923340288 found 1 wanted 2
backpointer mismatch on [22989923340288 4096]
ref mismatch on [22989979959296 4096] extent item 1, found 2
Incorrect global backref count on 22989979959296 found 1 wanted 2
backpointer mismatch on [22989979959296 4096]
ref mismatch on [22990049484800 4096] extent item 1, found 2
Incorrect global backref count on 22990049484800 found 1 wanted 2
backpointer mismatch on [22990049484800 4096]
ref mismatch on [22990127919104 4096] extent item 1, found 2
Incorrect global backref count on 22990127919104 found 1 wanted 2
backpointer mismatch on [22990127919104 4096]
ref mismatch on [22990169636864 4096] extent item 1, found 2
Incorrect global backref count on 22990169636864 found 1 wanted 2
backpointer mismatch on [22990169636864 4096]
ref mismatch on [22990270459904 4096] extent item 1, found 2
Incorrect global backref count on 22990270459904 found 1 wanted 2
backpointer mismatch on [22990270459904 4096]
ref mismatch on [22990383153152 4096] extent item 1, found 2
Incorrect global backref count on 22990383153152 found 1 wanted 2
backpointer mismatch on [22990383153152 4096]
ref mismatch on [22990516260864 4096] extent item 1, found 2
Incorrect global backref count on 22990516260864 found 1 wanted 2
backpointer mismatch on [22990516260864 4096]
ref mismatch on [22990549094400 4096] extent item 1, found 2
Incorrect global backref count on 22990549094400 found 1 wanted 2
backpointer mismatch on [22990549094400 4096]
ref mismatch on [22990577303552 4096] extent item 1, found 2
Incorrect global backref count on 22990577303552 found 1 wanted 2
backpointer mismatch on [22990577303552 4096]
ref mismatch on [22990577917952 4096] extent item 1, found 2
Incorrect global backref count on 22990577917952 found 1 wanted 2
backpointer mismatch on [22990577917952 4096]
Errors found in extent allocation tree or chunk allocation
checking free space cache
checking fs roots
root 256 inode 9579 errors 100, file extent discount
root 256 inode 9580 errors 100, file extent discount
root 256 inode 14258 errors 100, file extent discount
root 256 inode 14259 errors 100, file extent discount
root 14155 inode 9579 errors 100, file extent discount
root 14155 inode 9580 errors 100, file extent discount
root 14155 inode 14258 errors 100, file extent discount
root 14155 inode 14259 errors 100, file extent discount
root 14251 inode 9579 errors 100, file extent discount
root 14251 inode 9580 errors 100, file extent discount
root 14251 inode 14258 errors 100, file extent discount
root 14251 inode 14259 errors 100, file extent discount
root 15239 inode 9579 errors 100, file extent discount
root 15239 inode 9580 errors 100, file extent discount
root 15239 inode 14258 errors 100, file extent discount
root 15239 inode 14259 errors 100, file extent discount
root 15283 inode 9579 errors 100, file extent discount
root 15283 inode 9580 errors 100, file extent discount
root 15283 inode 14258 errors 100, file extent discount
root 15283 inode 14259 errors 100, file extent discount
root 15405 inode 9579 errors 100, file extent discount
root 15405 inode 9580 errors 100, file extent discount
root 15405 inode 14258 errors 100, file extent discount
root 15405 inode 14259 errors 100, file extent discount
root 15512 inode 9579 errors 100, file extent discount
root 15512 inode 9580 errors 100, file extent discount
root 15512 inode 14258 errors 100, file extent discount
root 15512 inode 14259 errors 100, file extent discount
root 15601 inode 9579 errors 100, file extent discount
root 15601 inode 9580 errors 100, file extent discount
root 15601 inode 14258 errors 100, file extent discount
root 15601 inode 14259 errors 100, file extent discount
root 15679 inode 9579 errors 100, file extent discount
root 15679 inode 9580 errors 100, file extent discount
root 15679 inode 14258 errors 100, file extent discount
root 15679 inode 14259 errors 100, file extent discount
root 15761 inode 9579 errors 100, file extent discount
root 15761 inode 9580 errors 100, file extent discount
root 15761 inode 14258 errors 100, file extent discount
root 15761 inode 14259 errors 100, file extent discount
root 15891 inode 9579 errors 100, file extent discount
root 15891 inode 9580 errors 100, file extent discount
root 15891 inode 14258 errors 100, file extent discount
root 15891 inode 14259 errors 100, file extent discount
root 16093 inode 9579 errors 100, file extent discount
root 16093 inode 9580 errors 100, file extent discount
root 16093 inode 14258 errors 100, file extent discount
root 16093 inode 14259 errors 100, file extent discount
root 16215 inode 9579 errors 100, file extent discount
root 16215 inode 9580 errors 100, file extent discount
root 16215 inode 14258 errors 100, file extent discount
root 16215 inode 14259 errors 100, file extent discount
root 16296 inode 9579 errors 100, file extent discount
root 16296 inode 9580 errors 100, file extent discount
root 16296 inode 14258 errors 100, file extent discount
root 16296 inode 14259 errors 100, file extent discount
root 16364 inode 9579 errors 100, file extent discount
root 16364 inode 9580 errors 100, file extent discount
root 16364 inode 14258 errors 100, file extent discount
root 16364 inode 14259 errors 100, file extent discount
root 16434 inode 9579 errors 100, file extent discount
root 16434 inode 9580 errors 100, file extent discount
root 16434 inode 14258 errors 100, file extent discount
root 16434 inode 14259 errors 100, file extent discount
root 16500 inode 9579 errors 100, file extent discount
root 16500 inode 9580 errors 100, file extent discount
root 16500 inode 14258 errors 100, file extent discount
root 16500 inode 14259 errors 100, file extent discount
root 16557 inode 9579 errors 100, file extent discount
root 16557 inode 9580 errors 100, file extent discount
root 16557 inode 14258 errors 100, file extent discount
root 16557 inode 14259 errors 100, file extent discount
root 16611 inode 9579 errors 100, file extent discount
root 16611 inode 9580 errors 100, file extent discount
root 16611 inode 14258 errors 100, file extent discount
root 16611 inode 14259 errors 100, file extent discount
root 16662 inode 9579 errors 100, file extent discount
root 16662 inode 9580 errors 100, file extent discount
root 16662 inode 14258 errors 100, file extent discount
root 16662 inode 14259 errors 100, file extent discount
root 16714 inode 9579 errors 100, file extent discount
root 16714 inode 9580 errors 100, file extent discount
root 16714 inode 14258 errors 100, file extent discount
root 16714 inode 14259 errors 100, file extent discount
root 16769 inode 9579 errors 100, file extent discount
root 16769 inode 9580 errors 100, file extent discount
root 16769 inode 14258 errors 100, file extent discount
root 16769 inode 14259 errors 100, file extent discount
root 16817 inode 9579 errors 100, file extent discount
root 16817 inode 9580 errors 100, file extent discount
root 16817 inode 14258 errors 100, file extent discount
root 16817 inode 14259 errors 100, file extent discount
root 16868 inode 9579 errors 100, file extent discount
root 16868 inode 9580 errors 100, file extent discount
root 16868 inode 14258 errors 100, file extent discount
root 16868 inode 14259 errors 100, file extent discount
root 16922 inode 9579 errors 100, file extent discount
root 16922 inode 9580 errors 100, file extent discount
root 16922 inode 14258 errors 100, file extent discount
root 16922 inode 14259 errors 100, file extent discount
root 16981 inode 9579 errors 100, file extent discount
root 16981 inode 9580 errors 100, file extent discount
root 16981 inode 14258 errors 100, file extent discount
root 16981 inode 14259 errors 100, file extent discount
root 17050 inode 9579 errors 100, file extent discount
root 17050 inode 9580 errors 100, file extent discount
root 17050 inode 14258 errors 100, file extent discount
root 17050 inode 14259 errors 100, file extent discount
found 990180465844 bytes used err is 1
total csum bytes: 2488683616
total tree bytes: 2811412480
total fs tree bytes: 49156096
total extent tree bytes: 129392640
btree space waste bytes: 83287148
file data blocks allocated: 2585349308416
   referenced 2583623667712
Btrfs v3.12
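Most of the fs-roots output above repeats the same four inodes (9579, 9580, 14258, 14259) across many snapshot roots. A small pipeline can confirm that from a saved log — a sketch only; `count_extent_discount` is a helper name invented here, and `check.log` is a hypothetical capture of the btrfsck output:

```shell
# count_extent_discount LOG: tally, per inode, how many roots report
# "errors 100, file extent discount" in a saved btrfsck log.
count_extent_discount() {
  awk '/errors 100, file extent discount/ { n[$4]++ }
       END { for (i in n) printf "inode %s: %d roots\n", i, n[i] }' "$1" | sort
}

# Example usage (check.log is a hypothetical capture of the btrfsck run):
#   btrfsck /dev/sdc1 | tee check.log
#   count_extent_discount check.log
```

If every affected inode shows up in every snapshot root, the errors are one underlying problem propagated through snapshots, not many independent ones.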


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: btrfsck does not fix
  2014-02-18 21:55                                   ` Hendrik Friedel
@ 2014-02-18 22:12                                     ` Chris Murphy
  2014-03-02 18:39                                       ` Hendrik Friedel
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-02-18 22:12 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: Duncan, linux-btrfs


On Feb 18, 2014, at 2:55 PM, Hendrik Friedel <hendrik@friedels.name> wrote:

> Hello,
> 
>>> It looks like everything is single except for 4GB of data which is still
>>> raid0. Weird. There should be a bunch of messages in dmesg during a
>>> normal/successful balance, and either something mentioned or missing
>>> might provide a clue why some chunks weren't converted.
>>
>> Agreed.
> 
> time ./btrfs balance start  -dconvert=single,soft /mnt/BTRFS/Video/
> ERROR: error during balancing '/mnt/BTRFS/Video/' - No space left on device
> There may be more info in syslog - try dmesg | tail
> 
> real    0m23.803s
> user    0m0.000s
> sys     0m1.070s
> 
> dmesg:
> [697498.761318] btrfs: relocating block group 19874593112064 flags 9
> [697507.614140] btrfs: relocating block group 19715679322112 flags 9
> [697516.218690] btrfs: 2 enospc errors during balance

You could try mounting with the enospc_debug option and retrying, to see if there's more information in dmesg. But given the large number of problems btrfs check is finding, it may not be possible to move these 4GB.

So hopefully the data you need is already backed up and you can just blow this file system away. Or, if you're looking to keep testing, you could try building a btrfs-next kernel to see if it can unwind this problem. And for btrfs-progs, David Sterba's integration branch is probably the most appropriate. But I haven't kept track of fsck-related patches since v3.12 was released, so I don't know if there's something newer that applies. That's why it's easier to just back up, blow this thing away entirely, and start from scratch.


Chris Murphy



* Re: btrfsck does not fix
  2014-02-18 22:12                                     ` Chris Murphy
@ 2014-03-02 18:39                                       ` Hendrik Friedel
  2014-03-03 22:35                                         ` Chris Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-03-02 18:39 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Duncan, linux-btrfs

Hi Chris, hi Duncan,

>> time ./btrfs balance start  -dconvert=single,soft /mnt/BTRFS/Video/
>> ERROR: error during balancing '/mnt/BTRFS/Video/' - No space left on device
>> There may be more info in syslog - try dmesg | tail
>>
>> real    0m23.803s
>> user    0m0.000s
>> sys     0m1.070s
>>
>> dmesg:
>> [697498.761318] btrfs: relocating block group 19874593112064 flags 9
>> [697507.614140] btrfs: relocating block group 19715679322112 flags 9
>> [697516.218690] btrfs: 2 enospc errors during balance
>
> You could try mounting with the enospc_debug option and retrying, to see if there's more information in dmesg.

I did this (this is on 3.14-rc4 now):
[ 2631.094438] BTRFS info (device sda1): block group has cluster?: no
[ 2631.094439] BTRFS info (device sda1): 0 blocks of free space at or bigger than bytes is
[ 2631.094440] BTRFS: block group 24946983043072 has 1073741824 bytes, 0 used 0 pinned 0 reserved
[ 2631.094441] BTRFS critical (device sda1): entry offset 24946983043072, bytes 1073741824, bitmap no
[ 2631.105072] BTRFS info (device sda1): block group has cluster?: no
[ 2631.105073] BTRFS info (device sda1): 0 blocks of free space at or bigger than bytes is
[ 2631.105074] BTRFS: block group 24948056784896 has 1073741824 bytes, 0 used 0 pinned 0 reserved
[ 2631.105075] BTRFS critical (device sda1): entry offset 24948056784896, bytes 1073741824, bitmap no
[ 2631.115594] BTRFS info (device sda1): block group has cluster?: no
[ 2631.115595] BTRFS info (device sda1): 0 blocks of free space at or bigger than bytes is
[ 2631.115596] BTRFS: block group 24949130526720 has 1073741824 bytes, 0 used 0 pinned 0 reserved
[ 2631.115597] BTRFS critical (device sda1): entry offset 24949130526720, bytes 1073741824, bitmap no
[ 2631.126096] BTRFS info (device sda1): block group has cluster?: no
[ 2631.126097] BTRFS info (device sda1): 0 blocks of free space at or bigger than bytes is
[ 2635.099492] BTRFS info (device sda1): 2 enospc errors during balance
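The enospc_debug dump above reports 1 GiB block groups that are allocated but completely empty ("has 1073741824 bytes, 0 used"). A sketch for pulling those offsets out of a captured kernel log — `empty_block_groups` is a helper name invented here, and the log file is a hypothetical `dmesg` capture:

```shell
# empty_block_groups LOG: print the offsets of block groups that the
# enospc_debug output reports as allocated but 0% used.
empty_block_groups() {
  grep -o 'block group [0-9]* has [0-9]* bytes, 0 used' "$1" | awk '{ print $3 }'
}

# Example usage (dmesg.log is a hypothetical capture):
#   dmesg > dmesg.log
#   empty_block_groups dmesg.log
```

Empty-but-allocated block groups are exactly what a balance is supposed to reclaim, so listing them shows how much space is stuck.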



> So hopefully the data you need is already backed up and you can just blow this file system away.

I am short on space, which is why I did the balance (to free up one of the two discs).
So, unless the above helps, it seems I need to buy another HDD?

David's integration-branch btrfsck tells me:

./btrfsck /dev/sda1
Checking filesystem on /dev/sda1
UUID: 989306aa-d291-4752-8477-0baf94f8c42f
checking extents
checking free space cache
checking fs roots
root 256 inode 9579 errors 100, file extent discount
root 256 inode 9580 errors 100, file extent discount
root 256 inode 14258 errors 100, file extent discount
root 256 inode 14259 errors 100, file extent discount
root 14155 inode 9579 errors 100, file extent discount
root 14155 inode 9580 errors 100, file extent discount
root 14155 inode 14258 errors 100, file extent discount
root 14155 inode 14259 errors 100, file extent discount
root 14251 inode 9579 errors 100, file extent discount
root 14251 inode 9580 errors 100, file extent discount
...
root 17083 inode 14259 errors 100, file extent discount
found 1419993333189 bytes used err is 1
total csum bytes: 2488683616
total tree bytes: 2811752448
total fs tree bytes: 49192960
total extent tree bytes: 129617920
btree space waste bytes: 83558397
file data blocks allocated: 2585349570560
  referenced 2583623929856
Btrfs this-will-become-v3.13-48-g57c3600

So, nothing new, as far as I can tell...

Greetings,
Hendrik



* Re: btrfsck does not fix
  2014-02-16 19:54                               ` Chris Murphy
  2014-02-17  3:20                                 ` Duncan
@ 2014-03-03  1:09                                 ` Russell Coker
  1 sibling, 0 replies; 36+ messages in thread
From: Russell Coker @ 2014-03-03  1:09 UTC (permalink / raw)
  To: Chris Murphy, linux-btrfs

On Mon, 17 Feb 2014, Chris Murphy <lists@colorremedies.com> wrote:
> Also, 10 hours to balance two disks at 2.3TB seems like a long time. I'm
> not sure if that's expected.

I just had a system with a single 120G Intel SSD and 54G of data stored take 3 
hours for a balance.

Balance seems to be a very slow operation and it also uses enough capacity 
(both disk IO and CPU time for BTRFS kernel threads) to seriously degrade 
system performance for the duration.

-- 
My Main Blog         http://etbe.coker.com.au/
My Documents Blog    http://doc.coker.com.au/


* Re: btrfsck does not fix
  2014-03-02 18:39                                       ` Hendrik Friedel
@ 2014-03-03 22:35                                         ` Chris Murphy
  2014-03-04  6:42                                           ` Hendrik Friedel
  0 siblings, 1 reply; 36+ messages in thread
From: Chris Murphy @ 2014-03-03 22:35 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: Duncan, linux-btrfs


On Mar 2, 2014, at 11:39 AM, Hendrik Friedel <hendrik@friedels.name> wrote:
> 
> I am short on space, which is why I did the balance (to free up one of the two discs).
> So, unless the above helps, it seems I need to buy another HDD?

It might be worth finding large files to defragment. See the "ENOSPC errors during raid1 rebalance" thread. It sounds like it might be possible for some fragmented files to be stuck across multiple chunks, preventing conversion.
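One way to act on this suggestion — a sketch, not from the thread; `largest_files` is a helper name invented here, and the mount point is the one used elsewhere in this thread:

```shell
# largest_files DIR SIZE: list files under DIR larger than SIZE,
# biggest first, so defragmentation can start with the worst candidates.
largest_files() {
  find "$1" -xdev -type f -size +"$2" -printf '%s\t%p\n' | sort -rn | head -n 20
}

# Example usage:
#   largest_files /mnt/BTRFS/Video 1G
# then, for each listed file (as root, stock btrfs-progs tool):
#   btrfs filesystem defragment -v "$file"
```

Rewriting the largest files first maximizes the chance that the stuck chunks drain enough for the balance to complete.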

Otherwise, I'd like to believe that for anything semi-important you have a way to back it up, and if not, well, you probably need another HDD anyway. As they say, RAID is not a backup.

> 
> David's integration-branch btrfsck tells me:
> 
> ./btrfsck /dev/sda1
> Checking filesystem on /dev/sda1
> UUID: 989306aa-d291-4752-8477-0baf94f8c42f
> checking extents
> checking free space cache
> checking fs roots
> root 256 inode 9579 errors 100, file extent discount
> root 256 inode 9580 errors 100, file extent discount
> root 256 inode 14258 errors 100, file extent discount
> root 256 inode 14259 errors 100, file extent discount
> root 14155 inode 9579 errors 100, file extent discount
> root 14155 inode 9580 errors 100, file extent discount
> root 14155 inode 14258 errors 100, file extent discount
> root 14155 inode 14259 errors 100, file extent discount
> root 14251 inode 9579 errors 100, file extent discount
> root 14251 inode 9580 errors 100, file extent discount

I don't know what this means, even whether it's a significant problem or just noise. 


Chris Murphy



* Re: btrfsck does not fix
  2014-03-03 22:35                                         ` Chris Murphy
@ 2014-03-04  6:42                                           ` Hendrik Friedel
  2014-03-04 17:02                                             ` Chris Murphy
  0 siblings, 1 reply; 36+ messages in thread
From: Hendrik Friedel @ 2014-03-04  6:42 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Duncan, linux-btrfs

Hi Chris,


> It might be worth finding large files to defragment. See the "ENOSPC errors during raid1 rebalance" thread. It sounds like it might be possible for some fragmented files to be stuck across multiple chunks, preventing conversion.

I moved 400GB from my other (but full) disc to the btrfs disc. This
freed up 400GB on the full disc, so that I could move the other 400GB to
the non-btrfs disc. Essentially, I think this also defragmented all the
files, as they were freshly written (and as "single", so that in fact a
balance probably was not necessary anymore).

After this, the balance worked, and so did the device delete!

Nevertheless: I find it concerning that this problem occurred (remember,
it was a RAID with no SMART errors) and could not be fixed. My
understanding was that this should not happen to btrfs, even in its
current state.

Thanks!

Greetings,
Hendrik


* Re: btrfsck does not fix
  2014-03-04  6:42                                           ` Hendrik Friedel
@ 2014-03-04 17:02                                             ` Chris Murphy
  0 siblings, 0 replies; 36+ messages in thread
From: Chris Murphy @ 2014-03-04 17:02 UTC (permalink / raw)
  To: Hendrik Friedel; +Cc: Duncan, linux-btrfs


On Mar 3, 2014, at 11:42 PM, Hendrik Friedel <hendrik@friedels.name> wrote:

> Hi Chris,
> 
> 
>> It might be worth finding large files to defragment. See the "ENOSPC errors during raid1 rebalance" thread. It sounds like it might be possible for some fragmented files to be stuck across multiple chunks, preventing conversion.
> 
> I moved 400GB from my other (but full) disc to the btrfs disc. This freed up 400GB on the full disc, so that I could move the other 400GB to the non-btrfs disc. Essentially, I think this also defragmented all the files, as they were freshly written (and as "single", so that in fact a balance probably was not necessary anymore).
> 
> After this, the balance worked, and so did the device delete!
> 
> Nevertheless: I find it concerning that this problem occurred (remember, it was a RAID with no SMART errors) and could not be fixed. My understanding was that this should not happen to btrfs, even in its current state.

I think that's the goal, but I don't think we're there yet. There are going to be unexpected bugs. I think it would be useful if you could recreate the problem with a discrete set of steps. If it's reproducible, then hopefully it's a preventable situation, not just a matter for the fsck to fix.

Chris Murphy


end of thread, other threads:[~2014-03-04 17:02 UTC | newest]

Thread overview: 36+ messages
2014-01-03 19:41 btrfsck does not fix Hendrik Friedel
2014-01-03 23:33 ` Chris Murphy
2014-01-04 21:21   ` Hendrik Friedel
2014-01-05 13:36     ` Hendrik Friedel
2014-01-05 16:55     ` Chris Murphy
2014-01-07 20:38       ` Hendrik Friedel
2014-01-10 23:53         ` Hendrik Friedel
2014-01-11  1:05           ` Chris Murphy
2014-01-12 22:31             ` Hendrik Friedel
2014-01-14  0:40               ` Chris Murphy
2014-01-14  6:03                 ` Duncan
2014-01-14  7:49                   ` Chris Murphy
2014-01-14  9:30                     ` Duncan
2014-01-14  9:38                       ` Hugo Mills
2014-01-14 17:17                       ` Chris Murphy
2014-01-18  7:20                         ` Chris Samuel
2014-01-14  8:16                 ` Hugo Mills
2014-01-19 19:37               ` Martin Steigerwald
2014-01-21 20:00                 ` Hendrik Friedel
2014-01-21 20:01                   ` Hendrik Friedel
2014-02-08 22:01                   ` Hendrik Friedel
2014-02-09  0:45                     ` Chris Murphy
2014-02-09  8:36                       ` Hendrik Friedel
2014-02-11  1:45                         ` Chris Murphy
2014-02-11  2:23                           ` Chris Murphy
2014-02-16 19:18                             ` Hendrik Friedel
2014-02-16 19:54                               ` Chris Murphy
2014-02-17  3:20                                 ` Duncan
2014-02-17  9:41                                   ` Goswin von Brederlow
2014-02-18 21:55                                   ` Hendrik Friedel
2014-02-18 22:12                                     ` Chris Murphy
2014-03-02 18:39                                       ` Hendrik Friedel
2014-03-03 22:35                                         ` Chris Murphy
2014-03-04  6:42                                           ` Hendrik Friedel
2014-03-04 17:02                                             ` Chris Murphy
2014-03-03  1:09                                 ` Russell Coker
