Linux-BTRFS Archive on lore.kernel.org
* BTRFS RAID filesystem unmountable
@ 2018-04-28  8:30 Michael Wade
  2018-04-28  8:45 ` Qu Wenruo
  0 siblings, 1 reply; 16+ messages in thread
From: Michael Wade @ 2018-04-28  8:30 UTC (permalink / raw)
  To: linux-btrfs

Hi all,

I was hoping that someone would be able to help me resolve the issues
I am having with my ReadyNAS BTRFS volume. My trouble started after a
power cut; subsequently the volume would not mount. Here are the
details of my setup as it is at the moment:

uname -a
Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux

btrfs --version
btrfs-progs v4.12

btrfs fi show
Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
Total devices 1 FS bytes used 5.12TiB
devid    1 size 7.27TiB used 6.24TiB path /dev/md127

Here are the relevant dmesg logs for the current state of the device:

[   19.119391] md: md127 stopped.
[   19.120841] md: bind<sdb3>
[   19.121120] md: bind<sdc3>
[   19.121380] md: bind<sda3>
[   19.125535] md/raid:md127: device sda3 operational as raid disk 0
[   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
[   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
[   19.126712] md/raid:md127: allocated 3240kB
[   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
devices, algorithm 2
[   19.126784] RAID conf printout:
[   19.126789]  --- level:5 rd:3 wd:3
[   19.126794]  disk 0, o:1, dev:sda3
[   19.126799]  disk 1, o:1, dev:sdb3
[   19.126804]  disk 2, o:1, dev:sdc3
[   19.128118] md127: detected capacity change from 0 to 7991637573632
[   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
across:523708k
[   19.434956] BTRFS: device label 11baed92:data devid 1 transid
151800 /dev/md127
[   19.739276] BTRFS info (device md127): setting nodatasum
[   19.740440] BTRFS critical (device md127): unable to find logical
3208757641216 len 4096
[   19.740450] BTRFS critical (device md127): unable to find logical
3208757641216 len 4096
[   19.740498] BTRFS critical (device md127): unable to find logical
3208757641216 len 4096
[   19.740512] BTRFS critical (device md127): unable to find logical
3208757641216 len 4096
[   19.740552] BTRFS critical (device md127): unable to find logical
3208757641216 len 4096
[   19.740560] BTRFS critical (device md127): unable to find logical
3208757641216 len 4096
[   19.740576] BTRFS error (device md127): failed to read chunk root
[   19.783975] BTRFS error (device md127): open_ctree failed

In an attempt to recover the volume myself I ran a few BTRFS commands,
mostly using advice from here:
https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
that actually seems to have made things worse, as I can no longer mount
the file system, not even in read-only mode.

So, starting from the beginning, here is a list of things I have done
so far (hopefully I remembered the order in which I ran them!):

1. Noticed that my backups to the NAS were not running (didn't get
notified that the volume had basically "died")
2. ReadyNAS UI indicated that the volume was inactive.
3. SSHed onto the box and found that the first drive was not marked as
operational (log showed I/O errors / UNKOWN (0x2003)), so I replaced
the disk and let the array resync.
4. After the resync the volume was still inaccessible, so I looked at
the logs once more and saw something like the following, which seemed
to indicate that the replay log had been corrupted when the power went
out:

BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
is 0: block=232292352, root=7, slot=0
BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
is 0: block=232292352, root=7, slot=0
BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
failure (Failed to recover log tree)
BTRFS error (device md127): pending csums is 155648
BTRFS error (device md127): cleaner transaction attach returned -30
BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
is 0: block=232292352, root=7, slot=0

5. Then:

btrfs rescue zero-log

6. Was then able to mount the volume in read-only mode. Then ran:

btrfs scrub start

which fixed some errors but not all:

scrub status for 20628cda-d98f-4f85-955c-932a367f8821

scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
total bytes scrubbed: 224.26GiB with 6 errors
error details: csum=6
corrected errors: 0, uncorrectable errors: 6, unverified errors: 0

scrub status for 20628cda-d98f-4f85-955c-932a367f8821
scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
total bytes scrubbed: 224.26GiB with 6 errors
error details: csum=6
corrected errors: 0, uncorrectable errors: 6, unverified errors: 0

7. Seeing this hanging, I rebooted the NAS.
8. I think this is when the volume stopped mounting at all.
9. Seeing log entries like these:

BTRFS warning (device md127): checksum error at logical 20800943685632
on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3

I ran

btrfs check --fix-crc

And that brings us to where I am now: some seemingly corrupted BTRFS
metadata, and I am unable to mount the drive even with the recovery option.

Any help you can give is much appreciated!

Kind regards
Michael

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
  2018-04-28  8:30 BTRFS RAID filesystem unmountable Michael Wade
@ 2018-04-28  8:45 ` Qu Wenruo
  2018-04-28  9:37   ` Michael Wade
  0 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2018-04-28  8:45 UTC (permalink / raw)
  To: Michael Wade, linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 6752 bytes --]



On 2018-04-28 16:30, Michael Wade wrote:
> Hi all,
> 
> I was hoping that someone would be able to help me resolve the issues
> I am having with my ReadyNAS BTRFS volume. Basically my trouble
> started after a power cut, subsequently the volume would not mount.
> Here are the details of my setup as it is at the moment:
> 
> uname -a
> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux

The kernel is pretty old for btrfs.
I strongly recommend upgrading.

> 
> btrfs --version
> btrfs-progs v4.12

So are the user tools.

Although I think it won't be a big problem, as the needed tools should
be there.

> 
> btrfs fi show
> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
> Total devices 1 FS bytes used 5.12TiB
> devid    1 size 7.27TiB used 6.24TiB path /dev/md127

So, it's btrfs on mdraid.
That normally makes things harder to debug, so I can only provide
advice from the btrfs side.
For the mdraid part, I can't guarantee anything.

> 
> Here are the relevant dmesg logs for the current state of the device:
> 
> [   19.119391] md: md127 stopped.
> [   19.120841] md: bind<sdb3>
> [   19.121120] md: bind<sdc3>
> [   19.121380] md: bind<sda3>
> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
> [   19.126712] md/raid:md127: allocated 3240kB
> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
> devices, algorithm 2
> [   19.126784] RAID conf printout:
> [   19.126789]  --- level:5 rd:3 wd:3
> [   19.126794]  disk 0, o:1, dev:sda3
> [   19.126799]  disk 1, o:1, dev:sdb3
> [   19.126804]  disk 2, o:1, dev:sdc3
> [   19.128118] md127: detected capacity change from 0 to 7991637573632
> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
> across:523708k
> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
> 151800 /dev/md127
> [   19.739276] BTRFS info (device md127): setting nodatasum
> [   19.740440] BTRFS critical (device md127): unable to find logical
> 3208757641216 len 4096
> [   19.740450] BTRFS critical (device md127): unable to find logical
> 3208757641216 len 4096
> [   19.740498] BTRFS critical (device md127): unable to find logical
> 3208757641216 len 4096
> [   19.740512] BTRFS critical (device md127): unable to find logical
> 3208757641216 len 4096
> [   19.740552] BTRFS critical (device md127): unable to find logical
> 3208757641216 len 4096
> [   19.740560] BTRFS critical (device md127): unable to find logical
> 3208757641216 len 4096
> [   19.740576] BTRFS error (device md127): failed to read chunk root

This shows it pretty clearly: btrfs fails to read the chunk root.
And according to your "len 4096" above it's a pretty old fs, as it's
still using a 4K nodesize rather than 16K.

According to the above output, your superblock somehow lacks the
needed system chunk mapping, which is used to bootstrap the chunk
mapping.
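To make the failure mode concrete: at mount time btrfs reads the
sys_chunk_array from the superblock to bootstrap the logical-to-physical
mapping, and a logical address not covered by any SYSTEM chunk produces
exactly the "unable to find logical" error above. A minimal sketch of
that lookup (illustrative single-stripe entries, not this filesystem's
actual chunk layout or the kernel's real code):

```python
# Illustrative sketch of btrfs's mount-time chunk bootstrap -- NOT the
# actual kernel code.  The superblock's sys_chunk_array provides
# (logical start, length, physical offset) entries for SYSTEM chunks;
# every other tree is located through this map.

# Hypothetical entries: (logical start, length, physical offset)
SYS_CHUNKS = [
    (0, 4194304, 0),
    (20971520, 8388608, 20971520),
]

def logical_to_physical(logical):
    """Map a logical byte address to a physical one, or None if the
    address is not covered by any SYSTEM chunk (the kernel then logs
    'unable to find logical <addr>')."""
    for start, length, physical in SYS_CHUNKS:
        if start <= logical < start + length:
            return physical + (logical - start)
    return None

print(logical_to_physical(20975616))       # covered by second chunk
print(logical_to_physical(3208757641216))  # not covered -> None
```

If the chunk root's address falls outside every mapped range, mount
fails before any tree can be read, which matches the dmesg output above.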

Please provide the following command output:

# btrfs inspect dump-super -fFa /dev/md127

Also, please consider running the following command and dumping all its
output:

# btrfs rescue chunk-recover -v /dev/md127

Please note that the above command can take a long time to finish; if
it works without problem, it may solve your issue.
If it doesn't work, the output could help me manually craft a fix for
your super block.

Thanks,
Qu


> [   19.783975] BTRFS error (device md127): open_ctree failed
> 
> In an attempt to recover the volume myself I run a few BTRFS commands
> mostly using advice from here:
> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
> that actually seems to have made things worse as I can no longer mount
> the file system, not even in readonly mode.
> 
> So starting from the beginning here is a list of things I have done so
> far (hopefully I remembered the order in which I ran them!)
> 
> 1. Noticed that my backups to the NAS were not running (didn't get
> notified that the volume had basically "died")
> 2. ReadyNAS UI indicated that the volume was inactive.
> 3. SSHed onto the box and found that the first drive was not marked as
> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
> the disk and let the array resync.
> 4. After resync the volume still was unaccessible so I looked at the
> logs once more and saw something like the following which seemed to
> indicate that the replay log had been corrupted when the power went
> out:
> 
> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
> is 0: block=232292352, root=7, slot=0
> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
> is 0: block=232292352, root=7, slot=0
> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
> failure (Failed to recover log tree)
> BTRFS error (device md127): pending csums is 155648
> BTRFS error (device md127): cleaner transaction attach returned -30
> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
> is 0: block=232292352, root=7, slot=0
> 
> 5. Then:
> 
> btrfs rescue zero-log
> 
> 6. Was then able to mount the volume in readonly mode.
> 
> btrfs scrub start
> 
> Which fixed some errors but not all:
> 
> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
> 
> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
> total bytes scrubbed: 224.26GiB with 6 errors
> error details: csum=6
> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
> 
> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
> total bytes scrubbed: 224.26GiB with 6 errors
> error details: csum=6
> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
> 
> 6. Seeing this hanging I rebooted the NAS
> 7. Think this is when the volume would not mount at all.
> 8. Seeing log entries like these:
> 
> BTRFS warning (device md127): checksum error at logical 20800943685632
> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
> 
> I ran
> 
> btrfs check --fix-crc
> 
> And that brings us to where I am now: Some seemly corrupted BTRFS
> metadata and unable to mount the drive even with the recovery option.
> 
> Any help you can give is much appreciated!
> 
> Kind regards
> Michael
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: BTRFS RAID filesystem unmountable
  2018-04-28  8:45 ` Qu Wenruo
@ 2018-04-28  9:37   ` Michael Wade
  2018-04-28 11:38     ` Qu Wenruo
  0 siblings, 1 reply; 16+ messages in thread
From: Michael Wade @ 2018-04-28  9:37 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 7547 bytes --]

Hi Qu,

Thanks for your reply. I will investigate upgrading the kernel;
however, I worry that future ReadyNAS firmware upgrades would fail on a
newer kernel version (I don't have much Linux experience, so maybe my
concerns are unfounded!).

I have attached the output of the dump super command.

I did actually run chunk-recover before, without the verbose option;
it took around 24 hours to finish but did not resolve my issue. Happy
to start it again if you need its output.

Thanks so much for your help.

Kind regards
Michael

On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
> On 2018年04月28日 16:30, Michael Wade wrote:
>> Hi all,
>>
>> I was hoping that someone would be able to help me resolve the issues
>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>> started after a power cut, subsequently the volume would not mount.
>> Here are the details of my setup as it is at the moment:
>>
>> uname -a
>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>
> The kernel is pretty old for btrfs.
> Strongly recommended to upgrade.
>
>>
>> btrfs --version
>> btrfs-progs v4.12
>
> So is the user tools.
>
> Although I think it won't be a big problem, as needed tool should be there.
>
>>
>> btrfs fi show
>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>> Total devices 1 FS bytes used 5.12TiB
>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>
> So, it's btrfs on mdraid.
> It would normally make things harder to debug, so I could only provide
> advice from the respect of btrfs.
> For mdraid part, I can't ensure anything.
>
>>
>> Here are the relevant dmesg logs for the current state of the device:
>>
>> [   19.119391] md: md127 stopped.
>> [   19.120841] md: bind<sdb3>
>> [   19.121120] md: bind<sdc3>
>> [   19.121380] md: bind<sda3>
>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>> [   19.126712] md/raid:md127: allocated 3240kB
>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>> devices, algorithm 2
>> [   19.126784] RAID conf printout:
>> [   19.126789]  --- level:5 rd:3 wd:3
>> [   19.126794]  disk 0, o:1, dev:sda3
>> [   19.126799]  disk 1, o:1, dev:sdb3
>> [   19.126804]  disk 2, o:1, dev:sdc3
>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>> across:523708k
>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>> 151800 /dev/md127
>> [   19.739276] BTRFS info (device md127): setting nodatasum
>> [   19.740440] BTRFS critical (device md127): unable to find logical
>> 3208757641216 len 4096
>> [   19.740450] BTRFS critical (device md127): unable to find logical
>> 3208757641216 len 4096
>> [   19.740498] BTRFS critical (device md127): unable to find logical
>> 3208757641216 len 4096
>> [   19.740512] BTRFS critical (device md127): unable to find logical
>> 3208757641216 len 4096
>> [   19.740552] BTRFS critical (device md127): unable to find logical
>> 3208757641216 len 4096
>> [   19.740560] BTRFS critical (device md127): unable to find logical
>> 3208757641216 len 4096
>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>
> This shows it pretty clear, btrfs fails to read chunk root.
> And according your above "len 4096" it's pretty old fs, as it's still
> using 4K nodesize other than 16K nodesize.
>
> According to above output, it means your superblock by somehow lacks the
> needed system chunk mapping, which is used to initialize chunk mapping.
>
> Please provide the following command output:
>
> # btrfs inspect dump-super -fFa /dev/md127
>
> Also, please consider run the following command and dump all its output:
>
> # btrfs rescue chunk-recover -v /dev/md127.
>
> Please note that, above command can take a long time to finish, and if
> it works without problem, it may solve your problem.
> But if it doesn't work, the output could help me to manually craft a fix
> to your super block.
>
> Thanks,
> Qu
>
>
>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>
>> In an attempt to recover the volume myself I run a few BTRFS commands
>> mostly using advice from here:
>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>> that actually seems to have made things worse as I can no longer mount
>> the file system, not even in readonly mode.
>>
>> So starting from the beginning here is a list of things I have done so
>> far (hopefully I remembered the order in which I ran them!)
>>
>> 1. Noticed that my backups to the NAS were not running (didn't get
>> notified that the volume had basically "died")
>> 2. ReadyNAS UI indicated that the volume was inactive.
>> 3. SSHed onto the box and found that the first drive was not marked as
>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>> the disk and let the array resync.
>> 4. After resync the volume still was unaccessible so I looked at the
>> logs once more and saw something like the following which seemed to
>> indicate that the replay log had been corrupted when the power went
>> out:
>>
>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>> is 0: block=232292352, root=7, slot=0
>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>> is 0: block=232292352, root=7, slot=0
>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>> failure (Failed to recover log tree)
>> BTRFS error (device md127): pending csums is 155648
>> BTRFS error (device md127): cleaner transaction attach returned -30
>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>> is 0: block=232292352, root=7, slot=0
>>
>> 5. Then:
>>
>> btrfs rescue zero-log
>>
>> 6. Was then able to mount the volume in readonly mode.
>>
>> btrfs scrub start
>>
>> Which fixed some errors but not all:
>>
>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>
>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>> total bytes scrubbed: 224.26GiB with 6 errors
>> error details: csum=6
>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>
>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>> total bytes scrubbed: 224.26GiB with 6 errors
>> error details: csum=6
>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>
>> 6. Seeing this hanging I rebooted the NAS
>> 7. Think this is when the volume would not mount at all.
>> 8. Seeing log entries like these:
>>
>> BTRFS warning (device md127): checksum error at logical 20800943685632
>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>
>> I ran
>>
>> btrfs check --fix-crc
>>
>> And that brings us to where I am now: Some seemly corrupted BTRFS
>> metadata and unable to mount the drive even with the recovery option.
>>
>> Any help you can give is much appreciated!
>>
>> Kind regards
>> Michael
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>

[-- Attachment #2: dumpsuper.txt --]
[-- Type: text/plain, Size: 11681 bytes --]

btrfs inspect dump-super -fFa /dev/md127
superblock: bytenr=65536, device=/dev/md127
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0xa5e7dc75 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			20628cda-d98f-4f85-955c-932a367f8821
label			11baed92:data
generation		151800
root			44957696
sys_array_size		355
chunk_root_generation	151777
root_level		1
chunk_root		20800943685632
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		7991637573632
bytes_used		5631993507840
sectorsize		4096
nodesize		32768
leafsize (deprecated)		32768
stripesize		4096
root_dir		6
num_devices		1
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x21
			( MIXED_BACKREF |
			  BIG_METADATA )
cache_generation	18446744073709551615
uuid_tree_generation	151800
dev_item.uuid		7c2324dc-7906-430a-b44f-cbfce3ac5c56
dev_item.fsid		20628cda-d98f-4f85-955c-932a367f8821 [match]
dev_item.type		2
dev_item.total_bytes	7991637573632
dev_item.bytes_used	6860241371136
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 1 offset 20971520
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
			stripe 1 devid 1 offset 29360128
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
	item 2 key (FIRST_CHUNK_TREE CHUNK_ITEM 20800943685632)
		length 33554432 owner 2 stripe_len 65536 type SYSTEM|DUP
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 1
			stripe 0 devid 1 offset 266325721088
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
			stripe 1 devid 1 offset 266359275520
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
backup_roots[4]:
	backup 0:
		backup_tree_root:	44236800	gen: 151798	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	44302336	gen: 151798	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 1:
		backup_tree_root:	44597248	gen: 151799	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	44630016	gen: 151799	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 2:
		backup_tree_root:	44957696	gen: 151800	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	45023232	gen: 151800	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 3:
		backup_tree_root:	43876352	gen: 151797	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	43941888	gen: 151797	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1


superblock: bytenr=67108864, device=/dev/md127
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x0586f4bb [match]
bytenr			67108864
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			20628cda-d98f-4f85-955c-932a367f8821
label			11baed92:data
generation		151800
root			44957696
sys_array_size		355
chunk_root_generation	151777
root_level		1
chunk_root		20800943685632
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		7991637573632
bytes_used		5631993507840
sectorsize		4096
nodesize		32768
leafsize (deprecated)		32768
stripesize		4096
root_dir		6
num_devices		1
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x21
			( MIXED_BACKREF |
			  BIG_METADATA )
cache_generation	18446744073709551615
uuid_tree_generation	151800
dev_item.uuid		7c2324dc-7906-430a-b44f-cbfce3ac5c56
dev_item.fsid		20628cda-d98f-4f85-955c-932a367f8821 [match]
dev_item.type		2
dev_item.total_bytes	7991637573632
dev_item.bytes_used	6860241371136
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 1 offset 20971520
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
			stripe 1 devid 1 offset 29360128
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
	item 2 key (FIRST_CHUNK_TREE CHUNK_ITEM 20800943685632)
		length 33554432 owner 2 stripe_len 65536 type SYSTEM|DUP
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 1
			stripe 0 devid 1 offset 266325721088
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
			stripe 1 devid 1 offset 266359275520
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
backup_roots[4]:
	backup 0:
		backup_tree_root:	44236800	gen: 151798	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	44302336	gen: 151798	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 1:
		backup_tree_root:	44597248	gen: 151799	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	44630016	gen: 151799	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 2:
		backup_tree_root:	44957696	gen: 151800	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	45023232	gen: 151800	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 3:
		backup_tree_root:	43876352	gen: 151797	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	43941888	gen: 151797	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1


superblock: bytenr=274877906944, device=/dev/md127
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0xf801a28a [match]
bytenr			274877906944
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			20628cda-d98f-4f85-955c-932a367f8821
label			11baed92:data
generation		151800
root			44957696
sys_array_size		355
chunk_root_generation	151777
root_level		1
chunk_root		20800943685632
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		7991637573632
bytes_used		5631993507840
sectorsize		4096
nodesize		32768
leafsize (deprecated)		32768
stripesize		4096
root_dir		6
num_devices		1
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x21
			( MIXED_BACKREF |
			  BIG_METADATA )
cache_generation	18446744073709551615
uuid_tree_generation	151800
dev_item.uuid		7c2324dc-7906-430a-b44f-cbfce3ac5c56
dev_item.fsid		20628cda-d98f-4f85-955c-932a367f8821 [match]
dev_item.type		2
dev_item.total_bytes	7991637573632
dev_item.bytes_used	6860241371136
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 1 offset 20971520
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
			stripe 1 devid 1 offset 29360128
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
	item 2 key (FIRST_CHUNK_TREE CHUNK_ITEM 20800943685632)
		length 33554432 owner 2 stripe_len 65536 type SYSTEM|DUP
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 1
			stripe 0 devid 1 offset 266325721088
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
			stripe 1 devid 1 offset 266359275520
			dev_uuid 7c2324dc-7906-430a-b44f-cbfce3ac5c56
backup_roots[4]:
	backup 0:
		backup_tree_root:	44236800	gen: 151798	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	44302336	gen: 151798	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 1:
		backup_tree_root:	44597248	gen: 151799	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	44630016	gen: 151799	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 2:
		backup_tree_root:	44957696	gen: 151800	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	45023232	gen: 151800	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1

	backup 3:
		backup_tree_root:	43876352	gen: 151797	level: 1
		backup_chunk_root:	20800943685632	gen: 151777	level: 1
		backup_extent_root:	43941888	gen: 151797	level: 2
		backup_fs_root:		167346176	gen: 150972	level: 0
		backup_dev_root:	29523968	gen: 151777	level: 1
		backup_csum_root:	37650432	gen: 151775	level: 0
		backup_total_bytes:	7991637573632
		backup_bytes_used:	5631993507840
		backup_num_devices:	1


* Re: BTRFS RAID filesystem unmountable
  2018-04-28  9:37   ` Michael Wade
@ 2018-04-28 11:38     ` Qu Wenruo
       [not found]       ` <CAB+znrF_d+Hg_A9AMvWEB=S5eVAtYrvr2jUPcvR4FfB4hnCMWA@mail.gmail.com>
  0 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2018-04-28 11:38 UTC (permalink / raw)
  To: Michael Wade; +Cc: linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 8585 bytes --]



On 2018-04-28 17:37, Michael Wade wrote:
> Hi Qu,
> 
> Thanks for your reply. I will investigate upgrading the kernel,
> however I worry that future ReadyNAS firmware upgrades would fail on a
> newer kernel version (I don't have much linux experience so maybe my
> concerns are unfounded!?).
> 
> I have attached the output of the dump super command.
> 
> I did actually run chunk recover before, without the verbose option,
> it took around 24 hours to finish but did not resolve my issue. Happy
> to start that again if you need its output.

The system chunk only contains the following chunks:
[0, 4194304]:           Initial temporary chunk, not used at all
[20971520, 29360128]:   System chunk created by mkfs, should be fully
                        used up
[20800943685632, 20800977240064]:
                        The newly created large system chunk.

The chunk root is still in the 2nd chunk and thus valid, but some of its
leaves are out of that range.
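A quick way to see why the mount fails: the logical address from the
"unable to find logical" dmesg errors is covered by none of the ranges
listed above. A shell sketch (treating each range as [start, end), which
matches chunk start plus length):

```shell
# Check whether the failing logical address from dmesg lies inside
# any of the system chunk ranges quoted above.
addr=3208757641216
in_range=no
for r in 0:4194304 20971520:29360128 20800943685632:20800977240064; do
    start=${r%%:*}
    end=${r##*:}
    if [ "$addr" -ge "$start" ] && [ "$addr" -lt "$end" ]; then
        in_range=yes
    fi
done
echo "$addr in a system chunk: $in_range"   # prints "no"
```

Since no system chunk maps that address, the kernel cannot translate the
logical address to a physical one, and reading the chunk root fails.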

If you can't wait another 24h for chunk recovery to run, my advice would
be to move the disk to some other computer and use the latest
btrfs-progs to execute the following command:

# btrfs inspect dump-tree -b 20800943685632 --follow

If we're lucky enough, we may read out the tree leaf containing the new
system chunk and save the day.

Thanks,
Qu

> 
> Thanks so much for your help.
> 
> Kind regards
> Michael
> 
> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2018年04月28日 16:30, Michael Wade wrote:
>>> Hi all,
>>>
>>> I was hoping that someone would be able to help me resolve the issues
>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>> started after a power cut, subsequently the volume would not mount.
>>> Here are the details of my setup as it is at the moment:
>>>
>>> uname -a
>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>
>> The kernel is pretty old for btrfs.
>> Strongly recommended to upgrade.
>>
>>>
>>> btrfs --version
>>> btrfs-progs v4.12
>>
>> So is the user tools.
>>
>> Although I think it won't be a big problem, as needed tool should be there.
>>
>>>
>>> btrfs fi show
>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>> Total devices 1 FS bytes used 5.12TiB
>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>
>> So, it's btrfs on mdraid.
>> It would normally make things harder to debug, so I could only provide
>> advice from the respect of btrfs.
>> For mdraid part, I can't ensure anything.
>>
>>>
>>> Here are the relevant dmesg logs for the current state of the device:
>>>
>>> [   19.119391] md: md127 stopped.
>>> [   19.120841] md: bind<sdb3>
>>> [   19.121120] md: bind<sdc3>
>>> [   19.121380] md: bind<sda3>
>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>> [   19.126712] md/raid:md127: allocated 3240kB
>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>> devices, algorithm 2
>>> [   19.126784] RAID conf printout:
>>> [   19.126789]  --- level:5 rd:3 wd:3
>>> [   19.126794]  disk 0, o:1, dev:sda3
>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>> across:523708k
>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>> 151800 /dev/md127
>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>> 3208757641216 len 4096
>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>> 3208757641216 len 4096
>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>> 3208757641216 len 4096
>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>> 3208757641216 len 4096
>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>> 3208757641216 len 4096
>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>> 3208757641216 len 4096
>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>
>> This shows it pretty clear, btrfs fails to read chunk root.
>> And according your above "len 4096" it's pretty old fs, as it's still
>> using 4K nodesize other than 16K nodesize.
>>
>> According to above output, it means your superblock by somehow lacks the
>> needed system chunk mapping, which is used to initialize chunk mapping.
>>
>> Please provide the following command output:
>>
>> # btrfs inspect dump-super -fFa /dev/md127
>>
>> Also, please consider run the following command and dump all its output:
>>
>> # btrfs rescue chunk-recover -v /dev/md127.
>>
>> Please note that, above command can take a long time to finish, and if
>> it works without problem, it may solve your problem.
>> But if it doesn't work, the output could help me to manually craft a fix
>> to your super block.
>>
>> Thanks,
>> Qu
>>
>>
>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>
>>> In an attempt to recover the volume myself I run a few BTRFS commands
>>> mostly using advice from here:
>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>>> that actually seems to have made things worse as I can no longer mount
>>> the file system, not even in readonly mode.
>>>
>>> So starting from the beginning here is a list of things I have done so
>>> far (hopefully I remembered the order in which I ran them!)
>>>
>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>> notified that the volume had basically "died")
>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>> 3. SSHed onto the box and found that the first drive was not marked as
>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>> the disk and let the array resync.
>>> 4. After resync the volume still was unaccessible so I looked at the
>>> logs once more and saw something like the following which seemed to
>>> indicate that the replay log had been corrupted when the power went
>>> out:
>>>
>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>> is 0: block=232292352, root=7, slot=0
>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>> is 0: block=232292352, root=7, slot=0
>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>> failure (Failed to recover log tree)
>>> BTRFS error (device md127): pending csums is 155648
>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>> is 0: block=232292352, root=7, slot=0
>>>
>>> 5. Then:
>>>
>>> btrfs rescue zero-log
>>>
>>> 6. Was then able to mount the volume in readonly mode.
>>>
>>> btrfs scrub start
>>>
>>> Which fixed some errors but not all:
>>>
>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>
>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>> total bytes scrubbed: 224.26GiB with 6 errors
>>> error details: csum=6
>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>
>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>> total bytes scrubbed: 224.26GiB with 6 errors
>>> error details: csum=6
>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>
>>> 6. Seeing this hanging I rebooted the NAS
>>> 7. Think this is when the volume would not mount at all.
>>> 8. Seeing log entries like these:
>>>
>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>
>>> I ran
>>>
>>> btrfs check --fix-crc
>>>
>>> And that brings us to where I am now: Some seemly corrupted BTRFS
>>> metadata and unable to mount the drive even with the recovery option.
>>>
>>> Any help you can give is much appreciated!
>>>
>>> Kind regards
>>> Michael
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: BTRFS RAID filesystem unmountable
       [not found]       ` <CAB+znrF_d+Hg_A9AMvWEB=S5eVAtYrvr2jUPcvR4FfB4hnCMWA@mail.gmail.com>
@ 2018-04-29  8:33         ` Qu Wenruo
  2018-04-29  8:59           ` Michael Wade
  0 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2018-04-29  8:33 UTC (permalink / raw)
  To: Michael Wade; +Cc: linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 9893 bytes --]



On 2018-04-29 16:11, Michael Wade wrote:
> Thanks Qu,
> 
> Please find attached the log file for the chunk recover command.

Strangely, btrfs chunk recovery found no extra chunk beyond the current
system chunk range.

Which means the chunk tree itself is corrupted.

Please dump the chunk tree with the latest btrfs-progs (which provides
the new --follow option):

# btrfs inspect dump-tree --follow -b 20800943685632 <device>

If it doesn't work, please provide the following binary dump:

# dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
# dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
(Similar dumps will need to be repeated for the other stripe offsets
shown in the dump-super output above.)
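The same extraction can be done with a single 32 KiB read instead of
32768 one-byte reads by using GNU dd's iflag=skip_bytes, which keeps the
byte-granular offset. A sketch against a scratch file standing in for
/dev/md127 (the offset here is a placeholder, not a real stripe offset):

```shell
# Create a 128 KiB scratch file standing in for the block device.
dev=$(mktemp)
dd if=/dev/zero of="$dev" bs=64K count=2 status=none

# Extract one 32 KiB block at a byte-granular offset, as the bs=1
# commands above do, but with a single large read (GNU dd only).
off=65536
dd if="$dev" of=/tmp/chunk_root.copy1 bs=32K count=1 \
   iflag=skip_bytes skip="$off" status=none

stat -c %s /tmp/chunk_root.copy1   # 32768 bytes extracted
```

On the real array, replace `$dev` with /dev/md127 and `$off` with each
stripe offset from the super dump.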

Thanks,
Qu


> 
> Kind regards
> Michael
> 
> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2018年04月28日 17:37, Michael Wade wrote:
>>> Hi Qu,
>>>
>>> Thanks for your reply. I will investigate upgrading the kernel,
>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>> newer kernel version (I don't have much linux experience so maybe my
>>> concerns are unfounded!?).
>>>
>>> I have attached the output of the dump super command.
>>>
>>> I did actually run chunk recover before, without the verbose option,
>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>> to start that again if you need its output.
>>
>> The system chunk only contains the following chunks:
>> [0, 4194304]:           Initial temporary chunk, not used at all
>> [20971520, 29360128]:   System chunk created by mkfs, should be full
>>                         used up
>> [20800943685632, 20800977240064]:
>>                         The newly created large system chunk.
>>
>> The chunk root is still in 2nd chunk thus valid, but some of its leaf is
>> out of the range.
>>
>> If you can't wait 24h for chunk recovery to run, my advice would be move
>> the disk to some other computer, and use latest btrfs-progs to execute
>> the following command:
>>
>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>
>> If we're lucky enough, we may read out the tree leaf containing the new
>> system chunk and save a day.
>>
>> Thanks,
>> Qu
>>
>>>
>>> Thanks so much for your help.
>>>
>>> Kind regards
>>> Michael
>>>
>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>> Hi all,
>>>>>
>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>> started after a power cut, subsequently the volume would not mount.
>>>>> Here are the details of my setup as it is at the moment:
>>>>>
>>>>> uname -a
>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>
>>>> The kernel is pretty old for btrfs.
>>>> Strongly recommended to upgrade.
>>>>
>>>>>
>>>>> btrfs --version
>>>>> btrfs-progs v4.12
>>>>
>>>> So is the user tools.
>>>>
>>>> Although I think it won't be a big problem, as needed tool should be there.
>>>>
>>>>>
>>>>> btrfs fi show
>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>
>>>> So, it's btrfs on mdraid.
>>>> It would normally make things harder to debug, so I could only provide
>>>> advice from the respect of btrfs.
>>>> For mdraid part, I can't ensure anything.
>>>>
>>>>>
>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>
>>>>> [   19.119391] md: md127 stopped.
>>>>> [   19.120841] md: bind<sdb3>
>>>>> [   19.121120] md: bind<sdc3>
>>>>> [   19.121380] md: bind<sda3>
>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>> devices, algorithm 2
>>>>> [   19.126784] RAID conf printout:
>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>> across:523708k
>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>> 151800 /dev/md127
>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>> 3208757641216 len 4096
>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>> 3208757641216 len 4096
>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>> 3208757641216 len 4096
>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>> 3208757641216 len 4096
>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>> 3208757641216 len 4096
>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>> 3208757641216 len 4096
>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>
>>>> This shows it pretty clear, btrfs fails to read chunk root.
>>>> And according your above "len 4096" it's pretty old fs, as it's still
>>>> using 4K nodesize other than 16K nodesize.
>>>>
>>>> According to above output, it means your superblock by somehow lacks the
>>>> needed system chunk mapping, which is used to initialize chunk mapping.
>>>>
>>>> Please provide the following command output:
>>>>
>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>
>>>> Also, please consider run the following command and dump all its output:
>>>>
>>>> # btrfs rescue chunk-recover -v /dev/md127.
>>>>
>>>> Please note that, above command can take a long time to finish, and if
>>>> it works without problem, it may solve your problem.
>>>> But if it doesn't work, the output could help me to manually craft a fix
>>>> to your super block.
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>
>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>
>>>>> In an attempt to recover the volume myself I run a few BTRFS commands
>>>>> mostly using advice from here:
>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>>>>> that actually seems to have made things worse as I can no longer mount
>>>>> the file system, not even in readonly mode.
>>>>>
>>>>> So starting from the beginning here is a list of things I have done so
>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>
>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>> notified that the volume had basically "died")
>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>> the disk and let the array resync.
>>>>> 4. After resync the volume still was unaccessible so I looked at the
>>>>> logs once more and saw something like the following which seemed to
>>>>> indicate that the replay log had been corrupted when the power went
>>>>> out:
>>>>>
>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>> is 0: block=232292352, root=7, slot=0
>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>> is 0: block=232292352, root=7, slot=0
>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>> failure (Failed to recover log tree)
>>>>> BTRFS error (device md127): pending csums is 155648
>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>> is 0: block=232292352, root=7, slot=0
>>>>>
>>>>> 5. Then:
>>>>>
>>>>> btrfs rescue zero-log
>>>>>
>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>
>>>>> btrfs scrub start
>>>>>
>>>>> Which fixed some errors but not all:
>>>>>
>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>
>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>> error details: csum=6
>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>
>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>> error details: csum=6
>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>
>>>>> 6. Seeing this hanging I rebooted the NAS
>>>>> 7. Think this is when the volume would not mount at all.
>>>>> 8. Seeing log entries like these:
>>>>>
>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>
>>>>> I ran
>>>>>
>>>>> btrfs check --fix-crc
>>>>>
>>>>> And that brings us to where I am now: Some seemly corrupted BTRFS
>>>>> metadata and unable to mount the drive even with the recovery option.
>>>>>
>>>>> Any help you can give is much appreciated!
>>>>>
>>>>> Kind regards
>>>>> Michael
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>> the body of a message to majordomo@vger.kernel.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>
>>>>
>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: BTRFS RAID filesystem unmountable
  2018-04-29  8:33         ` Qu Wenruo
@ 2018-04-29  8:59           ` Michael Wade
  2018-04-29  9:33             ` Qu Wenruo
  0 siblings, 1 reply; 16+ messages in thread
From: Michael Wade @ 2018-04-29  8:59 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

Ok, would it be possible for me to install the new version of the tools
on my current kernel without overriding the existing install? I'm
hesitant to update the kernel/btrfs as it might break the ReadyNAS
interface / future firmware upgrades.

Perhaps I could grab this:
https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1, build it from
source, and then run the binaries directly?

Kind regards

On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
> On 2018年04月29日 16:11, Michael Wade wrote:
>> Thanks Qu,
>>
>> Please find attached the log file for the chunk recover command.
>
> Strangely, btrfs chunk recovery found no extra chunk beyond current
> system chunk range.
>
> Which means, it's chunk tree corrupted.
>
> Please dump the chunk tree with latest btrfs-progs (which provides the
> new --follow option).
>
> # btrfs inspect dump-tree -b 20800943685632 <device>
>
> If it doesn't work, please provide the following binary dump:
>
> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
> (And will need to repeat similar dump for several times according to
> above dump)
>
> Thanks,
> Qu
>
>
>>
>> Kind regards
>> Michael
>>
>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>
>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>> Hi Qu,
>>>>
>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>> newer kernel version (I don't have much linux experience so maybe my
>>>> concerns are unfounded!?).
>>>>
>>>> I have attached the output of the dump super command.
>>>>
>>>> I did actually run chunk recover before, without the verbose option,
>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>> to start that again if you need its output.
>>>
>>> The system chunk only contains the following chunks:
>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>> [20971520, 29360128]:   System chunk created by mkfs, should be full
>>>                         used up
>>> [20800943685632, 20800977240064]:
>>>                         The newly created large system chunk.
>>>
>>> The chunk root is still in 2nd chunk thus valid, but some of its leaf is
>>> out of the range.
>>>
>>> If you can't wait 24h for chunk recovery to run, my advice would be move
>>> the disk to some other computer, and use latest btrfs-progs to execute
>>> the following command:
>>>
>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>
>>> If we're lucky enough, we may read out the tree leaf containing the new
>>> system chunk and save a day.
>>>
>>> Thanks,
>>> Qu
>>>
>>>>
>>>> Thanks so much for your help.
>>>>
>>>> Kind regards
>>>> Michael
>>>>
>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>
>>>>>
>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>
>>>>>> uname -a
>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>
>>>>> The kernel is pretty old for btrfs.
>>>>> Strongly recommended to upgrade.
>>>>>
>>>>>>
>>>>>> btrfs --version
>>>>>> btrfs-progs v4.12
>>>>>
>>>>> So is the user tools.
>>>>>
>>>>> Although I think it won't be a big problem, as needed tool should be there.
>>>>>
>>>>>>
>>>>>> btrfs fi show
>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>
>>>>> So, it's btrfs on mdraid.
>>>>> It would normally make things harder to debug, so I could only provide
>>>>> advice from the respect of btrfs.
>>>>> For mdraid part, I can't ensure anything.
>>>>>
>>>>>>
>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>
>>>>>> [   19.119391] md: md127 stopped.
>>>>>> [   19.120841] md: bind<sdb3>
>>>>>> [   19.121120] md: bind<sdc3>
>>>>>> [   19.121380] md: bind<sda3>
>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>> devices, algorithm 2
>>>>>> [   19.126784] RAID conf printout:
>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>> across:523708k
>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>> 151800 /dev/md127
>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>> 3208757641216 len 4096
>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>> 3208757641216 len 4096
>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>> 3208757641216 len 4096
>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>> 3208757641216 len 4096
>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>> 3208757641216 len 4096
>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>> 3208757641216 len 4096
>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>
>>>>> This shows it pretty clear, btrfs fails to read chunk root.
>>>>> And according your above "len 4096" it's pretty old fs, as it's still
>>>>> using 4K nodesize other than 16K nodesize.
>>>>>
>>>>> According to above output, it means your superblock by somehow lacks the
>>>>> needed system chunk mapping, which is used to initialize chunk mapping.
>>>>>
>>>>> Please provide the following command output:
>>>>>
>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>
>>>>> Also, please consider run the following command and dump all its output:
>>>>>
>>>>> # btrfs rescue chunk-recover -v /dev/md127.
>>>>>
>>>>> Please note that, above command can take a long time to finish, and if
>>>>> it works without problem, it may solve your problem.
>>>>> But if it doesn't work, the output could help me to manually craft a fix
>>>>> to your super block.
>>>>>
>>>>> Thanks,
>>>>> Qu
>>>>>
>>>>>
>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>
>>>>>> In an attempt to recover the volume myself I run a few BTRFS commands
>>>>>> mostly using advice from here:
>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>>>>>> that actually seems to have made things worse as I can no longer mount
>>>>>> the file system, not even in readonly mode.
>>>>>>
>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>
>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>> notified that the volume had basically "died")
>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>> the disk and let the array resync.
>>>>>> 4. After resync the volume still was unaccessible so I looked at the
>>>>>> logs once more and saw something like the following which seemed to
>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>> out:
>>>>>>
>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>> failure (Failed to recover log tree)
>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>
>>>>>> 5. Then:
>>>>>>
>>>>>> btrfs rescue zero-log
>>>>>>
>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>
>>>>>> btrfs scrub start
>>>>>>
>>>>>> Which fixed some errors but not all:
>>>>>>
>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>
>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>> error details: csum=6
>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>
>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>> error details: csum=6
>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>
>>>>>> 6. Seeing this hanging I rebooted the NAS
>>>>>> 7. Think this is when the volume would not mount at all.
>>>>>> 8. Seeing log entries like these:
>>>>>>
>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>
>>>>>> I ran
>>>>>>
>>>>>> btrfs check --fix-crc
>>>>>>
>>>>>> And that brings us to where I am now: Some seemly corrupted BTRFS
>>>>>> metadata and unable to mount the drive even with the recovery option.
>>>>>>
>>>>>> Any help you can give is much appreciated!
>>>>>>
>>>>>> Kind regards
>>>>>> Michael
>>>>>> --
>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>>
>>>>>
>>>
>


* Re: BTRFS RAID filesystem unmountable
  2018-04-29  8:59           ` Michael Wade
@ 2018-04-29  9:33             ` Qu Wenruo
       [not found]               ` <CAB+znrEcW3+++ZBrB_ZGRFncssO-zffbJ6ug8_z0DJOhbp+vGA@mail.gmail.com>
  0 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2018-04-29  9:33 UTC (permalink / raw)
  To: Michael Wade; +Cc: linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 11270 bytes --]



On 2018-04-29 16:59, Michael Wade wrote:
> Ok, will it be possible for me to install the new version of the tools
> on my current kernel without overriding the existing install? Hesitant
> to update kernel/btrfs as it might break the ReadyNAS interface /
> future firmware upgrades.
> 
> Perhaps I could grab this:
> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
> hopefully build from source and then run the binaries directly?

Of course, that's how most of us test btrfs-progs builds.

Thanks,
Qu

> 
> Kind regards
> 
> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2018年04月29日 16:11, Michael Wade wrote:
>>> Thanks Qu,
>>>
>>> Please find attached the log file for the chunk recover command.
>>
>> Strangely, btrfs chunk recovery found no extra chunk beyond current
>> system chunk range.
>>
>> Which means, it's chunk tree corrupted.
>>
>> Please dump the chunk tree with latest btrfs-progs (which provides the
>> new --follow option).
>>
>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>
>> If it doesn't work, please provide the following binary dump:
>>
>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>> (And will need to repeat similar dump for several times according to
>> above dump)
>>
>> Thanks,
>> Qu
>>
>>
>>>
>>> Kind regards
>>> Michael
>>>
>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>> Hi Qu,
>>>>>
>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>> newer kernel version (I don't have much linux experience so maybe my
>>>>> concerns are unfounded!?).
>>>>>
>>>>> I have attached the output of the dump super command.
>>>>>
>>>>> I did actually run chunk recover before, without the verbose option,
>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>> to start that again if you need its output.
>>>>
>>>> The system chunk only contains the following chunks:
>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>                         used up
>>>> [20800943685632, 20800977240064]:
>>>>                         The newly created large system chunk.
>>>>
>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>> its leaves are out of range.
>>>>
>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>> move the disk to some other computer, and use the latest btrfs-progs to
>>>> execute the following command:
>>>>
>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>
>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>> system chunk and save the day.
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>>
>>>>> Thanks so much for your help.
>>>>>
>>>>> Kind regards
>>>>> Michael
>>>>>
>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>> Hi all,
>>>>>>>
>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>
>>>>>>> uname -a
>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>
>>>>>> The kernel is pretty old for btrfs.
>>>>>> Upgrading is strongly recommended.
>>>>>>
>>>>>>>
>>>>>>> btrfs --version
>>>>>>> btrfs-progs v4.12
>>>>>>
>>>>>> So are the user tools.
>>>>>>
>>>>>> Although I think it won't be a big problem, as the needed tools should be there.
>>>>>>
>>>>>>>
>>>>>>> btrfs fi show
>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>
>>>>>> So, it's btrfs on mdraid.
>>>>>> That would normally make things harder to debug, so I can only provide
>>>>>> advice from the btrfs side.
>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>
>>>>>>>
>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>
>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>> devices, algorithm 2
>>>>>>> [   19.126784] RAID conf printout:
>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>> across:523708k
>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>> 151800 /dev/md127
>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>> 3208757641216 len 4096
>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>> 3208757641216 len 4096
>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>> 3208757641216 len 4096
>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>> 3208757641216 len 4096
>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>> 3208757641216 len 4096
>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>> 3208757641216 len 4096
>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>
>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>> And according to the "len 4096" above, it's a pretty old fs, as it's
>>>>>> still using a 4K nodesize rather than the 16K nodesize.
>>>>>>
>>>>>> According to the above output, your superblock somehow lacks the
>>>>>> needed system chunk mapping, which is used to initialize the chunk
>>>>>> mapping.
>>>>>>
>>>>>> Please provide the following command output:
>>>>>>
>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>
>>>>>> Also, please consider running the following command and dumping all
>>>>>> its output:
>>>>>>
>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>
>>>>>> Please note that the above command can take a long time to finish, and
>>>>>> if it works without problems, it may solve your issue.
>>>>>> But if it doesn't work, the output could help me manually craft a fix
>>>>>> for your superblock.
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>
>>>>>>
>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>
>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands,
>>>>>>> mostly using advice from here:
>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However,
>>>>>>> that actually seems to have made things worse, as I can no longer mount
>>>>>>> the file system, not even in readonly mode.
>>>>>>>
>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>
>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>> notified that the volume had basically "died")
>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>> the disk and let the array resync.
>>>>>>> 4. After the resync the volume was still inaccessible so I looked at the
>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>> out:
>>>>>>>
>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>> failure (Failed to recover log tree)
>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>
>>>>>>> 5. Then:
>>>>>>>
>>>>>>> btrfs rescue zero-log
>>>>>>>
>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>
>>>>>>> btrfs scrub start
>>>>>>>
>>>>>>> Which fixed some errors but not all:
>>>>>>>
>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>
>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>> error details: csum=6
>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>
>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>> error details: csum=6
>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>
>>>>>>> 7. Seeing this hanging I rebooted the NAS
>>>>>>> 8. Think this is when the volume would not mount at all.
>>>>>>> 9. Seeing log entries like these:
>>>>>>>
>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>
>>>>>>> I ran
>>>>>>>
>>>>>>> btrfs check --fix-crc
>>>>>>>
>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>> metadata and a drive that won't mount even with the recovery option.
>>>>>>>
>>>>>>> Any help you can give is much appreciated!
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Michael
>>>>>>> --
>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>>>
>>>>>>
>>>>
>>
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
       [not found]               ` <CAB+znrEcW3+++ZBrB_ZGRFncssO-zffbJ6ug8_z0DJOhbp+vGA@mail.gmail.com>
@ 2018-04-30  1:52                 ` Qu Wenruo
  2018-04-30  3:02                 ` Qu Wenruo
  1 sibling, 0 replies; 16+ messages in thread
From: Qu Wenruo @ 2018-04-30  1:52 UTC (permalink / raw)
  To: Michael Wade; +Cc: linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 12613 bytes --]



On 2018年04月29日 22:08, Michael Wade wrote:
> Hi Qu,
> 
> Got this error message:
> 
> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
> btrfs-progs v4.16.1
> bytenr mismatch, want=20800943685632, have=3118598835113619663
> ERROR: cannot read chunk root
> ERROR: unable to open /dev/md127
> 
> I have attached the dumps for:
> 
> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520

A little strange: the two copies mismatch each other.

I'll double-check the difference between them; maybe that's the reason.
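As a side note on checking such dumps: every btrfs tree block starts with a header that records the logical address the block claims to live at, which is exactly the field behind the "want=20800943685632, have=3118598835113619663" mismatch earlier in the thread. A minimal sketch, assuming the on-disk header layout (32-byte checksum, 16-byte fsid, then the logical bytenr as a little-endian u64); the buffer below is synthetic, not taken from the attached dumps:

```python
import struct

# Assumed btrfs tree-block header layout: csum (32 bytes), fsid (16 bytes),
# then the logical bytenr as a little-endian u64. Verify against your
# btrfs-progs headers before relying on this.
BYTENR_OFFSET = 32 + 16

def header_bytenr(dump: bytes) -> int:
    """Return the logical address a tree-block header claims to be at."""
    (bytenr,) = struct.unpack_from("<Q", dump, BYTENR_OFFSET)
    return bytenr

# Synthetic 4K node: an intact copy records the expected logical address;
# a garbage-filled copy yields an arbitrary value instead.
expected = 20800943685632
good = bytearray(4096)
struct.pack_into("<Q", good, BYTENR_OFFSET, expected)
assert header_bytenr(bytes(good)) == expected
```

Comparing this field across the two dd copies would show immediately whether either copy still carries a plausible header.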

Thanks,
Qu

> 
> Kind regards
> Michael
> 
> 
> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2018年04月29日 16:59, Michael Wade wrote:
>>> Ok, will it be possible for me to install the new version of the tools
>>> on my current kernel without overriding the existing install? Hesitant
>>> to update kernel/btrfs as it might break the ReadyNAS interface /
>>> future firmware upgrades.
>>>
>>> Perhaps I could grab this:
>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>> hopefully build from source and then run the binaries directly?
>>
>> Of course, that's how most of us test btrfs-progs builds.
>>
>> Thanks,
>> Qu
>>
>>>
>>> Kind regards
>>>
>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>> Thanks Qu,
>>>>>
>>>>> Please find attached the log file for the chunk recover command.
>>>>
>>>> Strangely, btrfs chunk recovery found no extra chunks beyond the current
>>>> system chunk range.
>>>>
>>>> Which means the chunk tree itself is corrupted.
>>>>
>>>> Please dump the chunk tree with latest btrfs-progs (which provides the
>>>> new --follow option).
>>>>
>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>
>>>> If it doesn't work, please provide the following binary dump:
>>>>
>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>> (And similar dumps will need to be repeated several times according to
>>>> the above output)
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>
>>>>>
>>>>> Kind regards
>>>>> Michael
>>>>>
>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>> Hi Qu,
>>>>>>>
>>>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>> newer kernel version (I don't have much linux experience so maybe my
>>>>>>> concerns are unfounded!?).
>>>>>>>
>>>>>>> I have attached the output of the dump super command.
>>>>>>>
>>>>>>> I did actually run chunk recover before, without the verbose option,
>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>> to start that again if you need its output.
>>>>>>
>>>>>> The system chunk only contains the following chunks:
>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>                         used up
>>>>>> [20800943685632, 20800977240064]:
>>>>>>                         The newly created large system chunk.
>>>>>>
>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>> its leaves are out of range.
>>>>>>
>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>> move the disk to some other computer, and use the latest btrfs-progs to
>>>>>> execute the following command:
>>>>>>
>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>
>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>> system chunk and save the day.
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>
>>>>>>>
>>>>>>> Thanks so much for your help.
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Michael
>>>>>>>
>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>> Hi all,
>>>>>>>>>
>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>
>>>>>>>>> uname -a
>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>
>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>> Upgrading is strongly recommended.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> btrfs --version
>>>>>>>>> btrfs-progs v4.12
>>>>>>>>
>>>>>>>> So are the user tools.
>>>>>>>>
>>>>>>>> Although I think it won't be a big problem, as the needed tools should be there.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> btrfs fi show
>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>
>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>> That would normally make things harder to debug, so I can only provide
>>>>>>>> advice from the btrfs side.
>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>
>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>> devices, algorithm 2
>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>> across:523708k
>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>> 151800 /dev/md127
>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>
>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>> And according to the "len 4096" above, it's a pretty old fs, as it's
>>>>>>>> still using a 4K nodesize rather than the 16K nodesize.
>>>>>>>>
>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>> needed system chunk mapping, which is used to initialize the chunk
>>>>>>>> mapping.
>>>>>>>>
>>>>>>>> Please provide the following command output:
>>>>>>>>
>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>
>>>>>>>> Also, please consider running the following command and dumping all
>>>>>>>> its output:
>>>>>>>>
>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>
>>>>>>>> Please note that the above command can take a long time to finish, and
>>>>>>>> if it works without problems, it may solve your issue.
>>>>>>>> But if it doesn't work, the output could help me manually craft a fix
>>>>>>>> for your superblock.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Qu
>>>>>>>>
>>>>>>>>
>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>
>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands,
>>>>>>>>> mostly using advice from here:
>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However,
>>>>>>>>> that actually seems to have made things worse, as I can no longer mount
>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>
>>>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>
>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>>>> the disk and let the array resync.
>>>>>>>>> 4. After the resync the volume was still inaccessible so I looked at the
>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>> out:
>>>>>>>>>
>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>
>>>>>>>>> 5. Then:
>>>>>>>>>
>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>
>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>
>>>>>>>>> btrfs scrub start
>>>>>>>>>
>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>
>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>
>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>> error details: csum=6
>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>
>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>> error details: csum=6
>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>
>>>>>>>>> 7. Seeing this hanging I rebooted the NAS
>>>>>>>>> 8. Think this is when the volume would not mount at all.
>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>
>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>
>>>>>>>>> I ran
>>>>>>>>>
>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>
>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>> metadata and a drive that won't mount even with the recovery option.
>>>>>>>>>
>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>
>>>>>>>>> Kind regards
>>>>>>>>> Michael
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>>
>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
       [not found]               ` <CAB+znrEcW3+++ZBrB_ZGRFncssO-zffbJ6ug8_z0DJOhbp+vGA@mail.gmail.com>
  2018-04-30  1:52                 ` Qu Wenruo
@ 2018-04-30  3:02                 ` Qu Wenruo
  2018-05-01 15:50                   ` Michael Wade
  1 sibling, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2018-04-30  3:02 UTC (permalink / raw)
  To: Michael Wade; +Cc: linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 12963 bytes --]



On 2018年04月29日 22:08, Michael Wade wrote:
> Hi Qu,
> 
> Got this error message:
> 
> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
> btrfs-progs v4.16.1
> bytenr mismatch, want=20800943685632, have=3118598835113619663
> ERROR: cannot read chunk root
> ERROR: unable to open /dev/md127
> 
> I have attached the dumps for:
> 
> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520

Unfortunately, both dumps are corrupted and contain mostly garbage.
I think the underlying stack (mdraid) has something wrong or has failed
to recover its data.

This means your last chance will be btrfs-find-root.

Please try:
# btrfs-find-root -o 3 <device>

And provide all the output.
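btrfs-find-root can emit a long list of candidate blocks, and the highest-generation candidates are usually the ones worth retrying with `btrfs inspect dump-tree -b <bytenr>`. A small sorting helper, as a hedged sketch: the "Well block ..." line format is an assumption based on recent btrfs-progs and may differ between versions, and the sample output below is invented:

```python
import re

# Assumed line format from btrfs-find-root (varies between versions):
#   Well block 20971520(gen: 151798 level: 1) seems good, but ...
CANDIDATE_RE = re.compile(r"Well block (\d+)\(gen: (\d+) level: (\d+)\)")

def candidates(find_root_output: str):
    """Return (bytenr, generation, level) tuples, newest generation first."""
    found = [tuple(int(g) for g in m.groups())
             for m in CANDIDATE_RE.finditer(find_root_output)]
    return sorted(found, key=lambda t: t[1], reverse=True)

# Invented sample output, purely to show the parsing:
sample = """\
Well block 20971520(gen: 151798 level: 1) seems good, but generation/level doesn't match
Well block 25165824(gen: 151800 level: 1) seems good, but generation/level doesn't match
"""
# Highest-generation candidate comes first; try each with:
#   btrfs inspect dump-tree -b <bytenr> <device>
assert candidates(sample)[0] == (25165824, 151800, 1)
```

Working down the sorted list from newest generation gives the best chance of finding an intact tree root quickly.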

But please keep in mind that the chunk root is a critical tree, and so
far it's already heavily damaged.
Although I could still keep trying to recover it, the chances are pretty
low now.

Thanks,
Qu
> 
> Kind regards
> Michael
> 
> 
> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2018年04月29日 16:59, Michael Wade wrote:
>>> Ok, will it be possible for me to install the new version of the tools
>>> on my current kernel without overriding the existing install? Hesitant
>>> to update kernel/btrfs as it might break the ReadyNAS interface /
>>> future firmware upgrades.
>>>
>>> Perhaps I could grab this:
>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>> hopefully build from source and then run the binaries directly?
>>
>> Of course, that's how most of us test btrfs-progs builds.
>>
>> Thanks,
>> Qu
>>
>>>
>>> Kind regards
>>>
>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>> Thanks Qu,
>>>>>
>>>>> Please find attached the log file for the chunk recover command.
>>>>
>>>> Strangely, btrfs chunk recovery found no extra chunks beyond the current
>>>> system chunk range.
>>>>
>>>> Which means the chunk tree itself is corrupted.
>>>>
>>>> Please dump the chunk tree with latest btrfs-progs (which provides the
>>>> new --follow option).
>>>>
>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>
>>>> If it doesn't work, please provide the following binary dump:
>>>>
>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>> (And similar dumps will need to be repeated several times according to
>>>> the above output)
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>
>>>>>
>>>>> Kind regards
>>>>> Michael
>>>>>
>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>> Hi Qu,
>>>>>>>
>>>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>> newer kernel version (I don't have much linux experience so maybe my
>>>>>>> concerns are unfounded!?).
>>>>>>>
>>>>>>> I have attached the output of the dump super command.
>>>>>>>
>>>>>>> I did actually run chunk recover before, without the verbose option,
>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>> to start that again if you need its output.
>>>>>>
>>>>>> The system chunk only contains the following chunks:
>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>                         used up
>>>>>> [20800943685632, 20800977240064]:
>>>>>>                         The newly created large system chunk.
>>>>>>
>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>> its leaves are out of range.
>>>>>>
>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>> move the disk to some other computer, and use the latest btrfs-progs to
>>>>>> execute the following command:
>>>>>>
>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>
>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>> system chunk and save the day.
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>
>>>>>>>
>>>>>>> Thanks so much for your help.
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Michael
>>>>>>>
>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>> Hi all,
>>>>>>>>>
>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>
>>>>>>>>> uname -a
>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>
>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>> Upgrading is strongly recommended.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> btrfs --version
>>>>>>>>> btrfs-progs v4.12
>>>>>>>>
>>>>>>>> So are the user tools.
>>>>>>>>
>>>>>>>> Although I think it won't be a big problem, as the needed tools should be there.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> btrfs fi show
>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>
>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>> That would normally make things harder to debug, so I can only provide
>>>>>>>> advice from the btrfs side.
>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>
>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>> devices, algorithm 2
>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>> across:523708k
>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>> 151800 /dev/md127
>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>> 3208757641216 len 4096
>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>
>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>> And according to the "len 4096" above, it's a pretty old fs, as it's
>>>>>>>> still using a 4K nodesize rather than the 16K nodesize.
>>>>>>>>
>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>> needed system chunk mapping, which is used to initialize the chunk
>>>>>>>> mapping.
>>>>>>>>
>>>>>>>> Please provide the following command output:
>>>>>>>>
>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>
>>>>>>>> Also, please consider running the following command and dumping all
>>>>>>>> its output:
>>>>>>>>
>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>
>>>>>>>> Please note that the above command can take a long time to finish, and
>>>>>>>> if it works without problems, it may solve your issue.
>>>>>>>> But if it doesn't work, the output could help me manually craft a fix
>>>>>>>> for your superblock.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Qu
>>>>>>>>
>>>>>>>>
>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>
>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands,
>>>>>>>>> mostly using advice from here:
>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However,
>>>>>>>>> that actually seems to have made things worse, as I can no longer mount
>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>
>>>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>
>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>>>> the disk and let the array resync.
>>>>>>>>> 4. After the resync the volume was still inaccessible, so I looked at the
>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>> out:
>>>>>>>>>
>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>
>>>>>>>>> 5. Then:
>>>>>>>>>
>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>
>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>
>>>>>>>>> btrfs scrub start
>>>>>>>>>
>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>
>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>
>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>> error details: csum=6
>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>
>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>> error details: csum=6
>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>
>>>>>>>>> 7. Seeing this hanging, I rebooted the NAS.
>>>>>>>>> 8. I think this is when the volume stopped mounting at all.
>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>
>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>
>>>>>>>>> I ran
>>>>>>>>>
>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>
>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>> metadata and an inability to mount the drive even with the recovery option.
>>>>>>>>>
>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>
>>>>>>>>> Kind regards
>>>>>>>>> Michael
>>>>>>>>> --
>>>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>>>
>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
  2018-04-30  3:02                 ` Qu Wenruo
@ 2018-05-01 15:50                   ` Michael Wade
  2018-05-02  1:31                     ` Qu Wenruo
  0 siblings, 1 reply; 16+ messages in thread
From: Michael Wade @ 2018-05-01 15:50 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

Hi Qu,

Oh dear that is not good news!

I have been running the find root command since yesterday but it only
seems to be outputting the following message:

ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096

I tried with the latest btrfs tools compiled from source and with the ones
I have installed, with the same result. Is there a CLI utility I could
use to determine whether the log contains any other content?
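
For reference, one way to check is to summarize the distinct lines in the
log so that any one-off result lines stand out from the repeated error. A
minimal sketch in plain Python (not a btrfs tool; the sample lines below
are purely illustrative, not output from this filesystem):

```python
# Summarize a btrfs-find-root log: count each distinct line and list the
# rarest first, so any one-off result lines surface above the repeated error.
from collections import Counter

def summarize_log(lines):
    counts = Counter(line.strip() for line in lines if line.strip())
    # Sort ascending by count: unique messages come first.
    return sorted(counts.items(), key=lambda kv: kv[1])

# Illustrative sample input:
sample = [
    "ERROR: tree block bytenr 0 is not aligned to sectorsize 4096",
    "ERROR: tree block bytenr 0 is not aligned to sectorsize 4096",
    "ERROR: tree block bytenr 0 is not aligned to sectorsize 4096",
    "Superblock thinks the generation is 151800",
]
for message, count in summarize_log(sample):
    print(count, message)
```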

Kind regards
Michael


On 30 April 2018 at 04:02, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
> On 2018年04月29日 22:08, Michael Wade wrote:
>> Hi Qu,
>>
>> Got this error message:
>>
>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>> btrfs-progs v4.16.1
>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>> ERROR: cannot read chunk root
>> ERROR: unable to open /dev/md127
>>
>> I have attached the dumps for:
>>
>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>
> Unfortunately, both dumps are corrupted and contain mostly garbage.
> I think the underlying stack (mdraid) has something wrong or failed
> to recover its data.
>
> This means your last chance will be btrfs-find-root.
>
> Please try:
> # btrfs-find-root -o 3 <device>
>
> And provide all the output.
>
> But please keep in mind that the chunk root is a critical tree, and so
> far it's already heavily damaged.
> Although I can still continue trying to recover it, the chances are
> pretty low now.
>
> Thanks,
> Qu
>>
>> Kind regards
>> Michael
>>
>>
>> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>
>>> On 2018年04月29日 16:59, Michael Wade wrote:
>>>> OK, would it be possible for me to install the new version of the tools
>>>> on my current kernel without overriding the existing install? I'm hesitant
>>>> to update the kernel/btrfs as it might break the ReadyNAS interface /
>>>> future firmware upgrades.
>>>>
>>>> Perhaps I could grab this:
>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>> hopefully build from source and then run the binaries directly?
>>>
>>> Of course, that's how most of us test btrfs-progs builds.
>>>
>>> Thanks,
>>> Qu
>>>
>>>>
>>>> Kind regards
>>>>
>>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>
>>>>>
>>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>>> Thanks Qu,
>>>>>>
>>>>>> Please find attached the log file for the chunk recover command.
>>>>>
>>>>> Strangely, btrfs rescue chunk-recover found no extra chunks beyond the
>>>>> current system chunk range.
>>>>>
>>>>> This means the chunk tree itself is corrupted.
>>>>>
>>>>> Please dump the chunk tree with latest btrfs-progs (which provides the
>>>>> new --follow option).
>>>>>
>>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>>
>>>>> If it doesn't work, please provide the following binary dump:
>>>>>
>>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>> (We may need to repeat similar dumps several times, depending on the
>>>>> above output.)
>>>>>
>>>>> Thanks,
>>>>> Qu
>>>>>
>>>>>
>>>>>>
>>>>>> Kind regards
>>>>>> Michael
>>>>>>
>>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>>> Hi Qu,
>>>>>>>>
>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>>> newer kernel version (I don't have much Linux experience, so maybe my
>>>>>>>> concerns are unfounded!?).
>>>>>>>>
>>>>>>>> I have attached the output of the dump super command.
>>>>>>>>
>>>>>>>> I did actually run chunk recover before, without the verbose option;
>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>> to start it again if you need its output.
>>>>>>>
>>>>>>> The system chunk only contains the following chunks:
>>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>>                         used up
>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>                         The newly created large system chunk.
>>>>>>>
>>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>>> its leaves are out of the range.
>>>>>>>
>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>>> move the disks to some other computer, and use the latest btrfs-progs
>>>>>>> to execute the following command:
>>>>>>>
>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>
>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>> system chunk and save the day.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Qu
>>>>>>>
>>>>>>>>
>>>>>>>> Thanks so much for your help.
>>>>>>>>
>>>>>>>> Kind regards
>>>>>>>> Michael
>>>>>>>>
>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>>> Hi all,
>>>>>>>>>>
>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>
>>>>>>>>>> uname -a
>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>
>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>> Upgrading is strongly recommended.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> btrfs --version
>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>
>>>>>>>>> So are the user tools.
>>>>>>>>>
>>>>>>>>> Although I think it won't be a big problem, as the needed tools
>>>>>>>>> should be there.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> btrfs fi show
>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>
>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>> That normally makes things harder to debug, so I can only provide
>>>>>>>>> advice from the btrfs perspective.
>>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>
>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>> devices, algorithm 2
>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>> across:523708k
>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>
>>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>>> And according to your "len 4096" above, it's a pretty old fs, as it's
>>>>>>>>> still using a 4K nodesize rather than the 16K nodesize.
>>>>>>>>>
>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk
>>>>>>>>> mapping.
>>>>>>>>>
>>>>>>>>> Please provide the following command output:
>>>>>>>>>
>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>
>>>>>>>>> Also, please consider running the following command and capturing
>>>>>>>>> all its output:
>>>>>>>>>
>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>>
>>>>>>>>> Please note that the above command can take a long time to finish;
>>>>>>>>> if it works without problems, it may solve your issue.
>>>>>>>>> But if it doesn't work, the output could help me manually craft a fix
>>>>>>>>> for your superblock.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Qu
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>
>>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands,
>>>>>>>>>> mostly using advice from here:
>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However,
>>>>>>>>>> that actually seems to have made things worse, as I can no longer mount
>>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>>
>>>>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>>
>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>> 4. After the resync the volume was still inaccessible, so I looked at the
>>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>> out:
>>>>>>>>>>
>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>
>>>>>>>>>> 5. Then:
>>>>>>>>>>
>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>
>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>
>>>>>>>>>> btrfs scrub start
>>>>>>>>>>
>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>
>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>
>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>> error details: csum=6
>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>
>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>> error details: csum=6
>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>
>>>>>>>>>> 7. Seeing this hanging, I rebooted the NAS.
>>>>>>>>>> 8. I think this is when the volume stopped mounting at all.
>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>
>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>
>>>>>>>>>> I ran
>>>>>>>>>>
>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>
>>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>>> metadata and an inability to mount the drive even with the recovery option.
>>>>>>>>>>
>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>
>>>>>>>>>> Kind regards
>>>>>>>>>> Michael
>>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>
>>>>
>>>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
  2018-05-01 15:50                   ` Michael Wade
@ 2018-05-02  1:31                     ` Qu Wenruo
  2018-05-02  5:29                       ` Michael Wade
  0 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2018-05-02  1:31 UTC (permalink / raw)
  To: Michael Wade, linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 15136 bytes --]



On 2018年05月01日 23:50, Michael Wade wrote:
> Hi Qu,
> 
> Oh dear that is not good news!
> 
> I have been running the find root command since yesterday but it only
> seems to be outputting the following message:
> 
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096

It's mostly fine: btrfs-find-root goes through all blocks and tries to
read them as tree blocks.
Although btrfs-find-root suppresses csum error output, such a basic
tree validation check is not suppressed, so you get these messages.
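
The check in question can be sketched roughly like this (my assumption
about the logic, not the actual btrfs-progs source): a candidate tree
block's byte number must be non-zero and a multiple of the sector size,
otherwise the error above is printed.

```python
# Rough sketch of the tree-block bytenr sanity check (assumed logic, not
# the actual btrfs-progs code): bytenr 0 is never a valid tree block,
# and valid bytenrs must be aligned to the sector size.
SECTORSIZE = 4096  # matches the "sectorsize 4096" in the error message

def tree_block_bytenr_ok(bytenr, sectorsize=SECTORSIZE):
    return bytenr != 0 and bytenr % sectorsize == 0

assert not tree_block_bytenr_ok(0)            # the repeated "bytenr 0" case
assert tree_block_bytenr_ok(20800943685632)   # chunk root bytenr from this thread
assert not tree_block_bytenr_ok(520167425)    # an unaligned address is rejected
```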

> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
> 
> I tried with the latest btrfs tools compiled from source and with the ones
> I have installed, with the same result. Is there a CLI utility I could
> use to determine whether the log contains any other content?

Did it report any useful info at the end?

Thanks,
Qu

> 
> Kind regards
> Michael
> 
> 
> On 30 April 2018 at 04:02, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2018年04月29日 22:08, Michael Wade wrote:
>>> Hi Qu,
>>>
>>> Got this error message:
>>>
>>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>>> btrfs-progs v4.16.1
>>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>>> ERROR: cannot read chunk root
>>> ERROR: unable to open /dev/md127
>>>
>>> I have attached the dumps for:
>>>
>>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>
>> Unfortunately, both dumps are corrupted and contain mostly garbage.
>> I think the underlying stack (mdraid) has something wrong or failed
>> to recover its data.
>>
>> This means your last chance will be btrfs-find-root.
>>
>> Please try:
>> # btrfs-find-root -o 3 <device>
>>
>> And provide all the output.
>>
>> But please keep in mind that the chunk root is a critical tree, and so
>> far it's already heavily damaged.
>> Although I can still continue trying to recover it, the chances are
>> pretty low now.
>>
>> Thanks,
>> Qu
>>>
>>> Kind regards
>>> Michael
>>>
>>>
>>> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2018年04月29日 16:59, Michael Wade wrote:
>>>>> OK, would it be possible for me to install the new version of the tools
>>>>> on my current kernel without overriding the existing install? I'm hesitant
>>>>> to update the kernel/btrfs as it might break the ReadyNAS interface /
>>>>> future firmware upgrades.
>>>>>
>>>>> Perhaps I could grab this:
>>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>>> hopefully build from source and then run the binaries directly?
>>>>
>>>> Of course, that's how most of us test btrfs-progs builds.
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>>
>>>>> Kind regards
>>>>>
>>>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>>>> Thanks Qu,
>>>>>>>
>>>>>>> Please find attached the log file for the chunk recover command.
>>>>>>
>>>>>> Strangely, btrfs rescue chunk-recover found no extra chunks beyond the
>>>>>> current system chunk range.
>>>>>>
>>>>>> This means the chunk tree itself is corrupted.
>>>>>>
>>>>>> Please dump the chunk tree with latest btrfs-progs (which provides the
>>>>>> new --follow option).
>>>>>>
>>>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>>>
>>>>>> If it doesn't work, please provide the following binary dump:
>>>>>>
>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>> (We may need to repeat similar dumps several times, depending on the
>>>>>> above output.)
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Michael
>>>>>>>
>>>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>>>> Hi Qu,
>>>>>>>>>
>>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>>>> newer kernel version (I don't have much Linux experience, so maybe my
>>>>>>>>> concerns are unfounded!?).
>>>>>>>>>
>>>>>>>>> I have attached the output of the dump super command.
>>>>>>>>>
>>>>>>>>> I did actually run chunk recover before, without the verbose option;
>>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>>> to start it again if you need its output.
>>>>>>>>
>>>>>>>> The system chunk only contains the following chunks:
>>>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>>>                         used up
>>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>>                         The newly created large system chunk.
>>>>>>>>
>>>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>>>> its leaves are out of the range.
>>>>>>>>
>>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>>>> move the disks to some other computer, and use the latest btrfs-progs
>>>>>>>> to execute the following command:
>>>>>>>>
>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>>
>>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>>> system chunk and save the day.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Qu
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thanks so much for your help.
>>>>>>>>>
>>>>>>>>> Kind regards
>>>>>>>>> Michael
>>>>>>>>>
>>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>>>> Hi all,
>>>>>>>>>>>
>>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>>
>>>>>>>>>>> uname -a
>>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>>
>>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>>> Upgrading is strongly recommended.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> btrfs --version
>>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>>
>>>>>>>>>> So are the user tools.
>>>>>>>>>>
>>>>>>>>>> Although I think it won't be a big problem, as the needed tools
>>>>>>>>>> should be there.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> btrfs fi show
>>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>>
>>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>>> That normally makes things harder to debug, so I can only provide
>>>>>>>>>> advice from the btrfs perspective.
>>>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>>
>>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>>> devices, algorithm 2
>>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>>> across:523708k
>>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>>
>>>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>>>> And according to your "len 4096" above, it's a pretty old fs, as it's
>>>>>>>>>> still using a 4K nodesize rather than the 16K nodesize.
>>>>>>>>>>
>>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk
>>>>>>>>>> mapping.
>>>>>>>>>>
>>>>>>>>>> Please provide the following command output:
>>>>>>>>>>
>>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>>
>>>>>>>>>> Also, please consider running the following command and capturing
>>>>>>>>>> all its output:
>>>>>>>>>>
>>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>>>
>>>>>>>>>> Please note that the above command can take a long time to finish;
>>>>>>>>>> if it works without problems, it may solve your issue.
>>>>>>>>>> But if it doesn't work, the output could help me manually craft a fix
>>>>>>>>>> for your superblock.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Qu
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>>
>>>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands,
>>>>>>>>>>> mostly using advice from here:
>>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However,
>>>>>>>>>>> that actually seems to have made things worse, as I can no longer mount
>>>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>>>
>>>>>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>>>
>>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>>> 4. After the resync the volume was still inaccessible, so I looked at the
>>>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>>> out:
>>>>>>>>>>>
>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>
>>>>>>>>>>> 5. Then:
>>>>>>>>>>>
>>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>>
>>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>>
>>>>>>>>>>> btrfs scrub start
>>>>>>>>>>>
>>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>>
>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>
>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>> error details: csum=6
>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>
>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>> error details: csum=6
>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>
>>>>>>>>>>> 7. Seeing this hanging, I rebooted the NAS.
>>>>>>>>>>> 8. I think this is when the volume stopped mounting at all.
>>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>>
>>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>>
>>>>>>>>>>> I ran
>>>>>>>>>>>
>>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>>
>>>>>>>>>>> And that brings us to where I am now: seemingly corrupted BTRFS
>>>>>>>>>>> metadata, and I am unable to mount the drive even with the recovery option.
>>>>>>>>>>>
>>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>>
>>>>>>>>>>> Kind regards
>>>>>>>>>>> Michael
>>>>>>>>>>> --
>>>>>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>
>>>>
>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
  2018-05-02  1:31                     ` Qu Wenruo
@ 2018-05-02  5:29                       ` Michael Wade
  2018-05-04 16:18                         ` Michael Wade
  0 siblings, 1 reply; 16+ messages in thread
From: Michael Wade @ 2018-05-02  5:29 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

Thanks Qu,

I actually aborted the run with the old btrfs tools once I saw its
output. The run with the new btrfs tools is still going and has
produced a log file of ~85 MB filled with that content so far.

Kind regards
Michael
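The question quoted below, whether a CLI utility could show if the log contains anything besides the repeated alignment error, can be answered with a plain grep filter. A minimal sketch, where find-root.log is a hypothetical file name and the generated sample lines stand in for the real ~85 MB output:

```shell
# Stand-in for the real log: the repeated alignment error plus one
# other line ("find-root.log" is a hypothetical name).
printf 'ERROR: tree block bytenr 0 is not aligned to sectorsize 4096\n%.0s' 1 2 3 > find-root.log
echo 'Found tree root at 20800943685632' >> find-root.log

# Drop the repeated error and show whatever else the log contains.
grep -v 'is not aligned to sectorsize' find-root.log
# prints: Found tree root at 20800943685632
```

Piping through `sort | uniq -c` instead of plain grep would also give a count per distinct message, which is handy on a log this size.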

On 2 May 2018 at 02:31, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
> On 2018年05月01日 23:50, Michael Wade wrote:
>> Hi Qu,
>>
>> Oh dear that is not good news!
>>
>> I have been running the find-root command since yesterday, but it only
>> seems to be outputting the following message:
>>
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>
> It's mostly fine, as find-root goes through all tree blocks and tries
> to read them as tree blocks.
> Although btrfs-find-root suppresses csum error output, such basic
> tree validation checks are not suppressed, so you get this message.
>
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>
>> I tried with the latest btrfs tools compiled from source and the ones
>> I have installed with the same result. Is there a CLI utility I could
>> use to determine if the log contains any other content?
>
> Did it report any useful info at the end?
>
> Thanks,
> Qu
>
>>
>> Kind regards
>> Michael
>>
>>
>> On 30 April 2018 at 04:02, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>
>>> On 2018年04月29日 22:08, Michael Wade wrote:
>>>> Hi Qu,
>>>>
>>>> Got this error message:
>>>>
>>>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>>>> btrfs-progs v4.16.1
>>>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>>>> ERROR: cannot read chunk root
>>>> ERROR: unable to open /dev/md127
>>>>
>>>> I have attached the dumps for:
>>>>
>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>
>>> Unfortunately, both dumps are corrupted and contain mostly garbage.
>>> I think the underlying stack (mdraid) has something wrong or failed
>>> to recover its data.
>>>
>>> This means your last chance will be btrfs-find-root.
>>>
>>> Please try:
>>> # btrfs-find-root -o 3 <device>
>>>
>>> And provide all the output.
>>>
>>> But please keep in mind, the chunk root is a critical tree, and so far
>>> it's already heavily damaged.
>>> Although I could still continue trying to recover, there is a pretty
>>> low chance now.
>>>
>>> Thanks,
>>> Qu
>>>>
>>>> Kind regards
>>>> Michael
>>>>
>>>>
>>>> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>
>>>>>
>>>>> On 2018年04月29日 16:59, Michael Wade wrote:
>>>>>> Ok, will it be possible for me to install the new version of the tools
>>>>>> on my current kernel without overwriting the existing install? I'm
>>>>>> hesitant to update kernel/btrfs as it might break the ReadyNAS
>>>>>> interface / future firmware upgrades.
>>>>>>
>>>>>> Perhaps I could grab this:
>>>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>>>> hopefully build from source and then run the binaries directly?
>>>>>
>>>>> Of course, that's how most of us test btrfs-progs builds.
>>>>>
>>>>> Thanks,
>>>>> Qu
>>>>>
>>>>>>
>>>>>> Kind regards
>>>>>>
>>>>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>>>>> Thanks Qu,
>>>>>>>>
>>>>>>>> Please find attached the log file for the chunk recover command.
>>>>>>>
>>>>>>> Strangely, btrfs chunk recovery found no extra chunks beyond the
>>>>>>> current system chunk range.
>>>>>>>
>>>>>>> Which means the chunk tree is corrupted.
>>>>>>>
>>>>>>> Please dump the chunk tree with the latest btrfs-progs (which provides
>>>>>>> the new --follow option).
>>>>>>>
>>>>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>>>>
>>>>>>> If it doesn't work, please provide the following binary dump:
>>>>>>>
>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>> (And you will need to repeat similar dumps several times according to
>>>>>>> the above output.)
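Both requested offsets are 4096-aligned (266325721088 = 65020928 x 4096, 266359275520 = 65029120 x 4096), so the same 32 KiB dumps can be taken much faster with a 4 KiB block size than with bs=1. A sketch demonstrated on a small sample file; for the real dump, substitute /dev/md127 and the block counts above:

```shell
# 128 KiB stand-in for the device; the real command would use
# if=/dev/md127 with skip=65020928 (i.e. 266325721088 / 4096).
dd if=/dev/zero of=sample.img bs=4096 count=32 2>/dev/null

# Dump 32 KiB (8 x 4 KiB blocks) starting at block 16 (offset 64 KiB).
dd if=sample.img of=chunk_root.copy1 bs=4096 count=8 skip=16 2>/dev/null
wc -c < chunk_root.copy1
# prints: 32768
```

With bs=1, dd issues one read syscall per byte, which is why the original commands crawl on a 7 TiB array; block-sized reads produce byte-identical output here because the offsets divide evenly by 4096.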
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Qu
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Kind regards
>>>>>>>> Michael
>>>>>>>>
>>>>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>>>>> Hi Qu,
>>>>>>>>>>
>>>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel;
>>>>>>>>>> however, I worry that future ReadyNAS firmware upgrades would fail on
>>>>>>>>>> a newer kernel version (I don't have much Linux experience, so maybe
>>>>>>>>>> my concerns are unfounded!?).
>>>>>>>>>>
>>>>>>>>>> I have attached the output of the dump super command.
>>>>>>>>>>
>>>>>>>>>> I did actually run chunk recover before, without the verbose option;
>>>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>>>> to start that again if you need its output.
>>>>>>>>>
>>>>>>>>> The system chunk only contains the following chunks:
>>>>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>>>>                         used up
>>>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>>>                         The newly created large system chunk.
>>>>>>>>>
>>>>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>>>>> its leaves are out of the range.
>>>>>>>>>
>>>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>>>>> move the disk to some other computer, and use the latest btrfs-progs
>>>>>>>>> to execute the following command:
>>>>>>>>>
>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>>>
>>>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>>>> system chunk and save the day.
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Qu
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks so much for your help.
>>>>>>>>>>
>>>>>>>>>> Kind regards
>>>>>>>>>> Michael
>>>>>>>>>>
>>>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>
>>>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>>>> started after a power cut; subsequently, the volume would not mount.
>>>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>>>
>>>>>>>>>>>> uname -a
>>>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>>>
>>>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>>>> Strongly recommended to upgrade.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> btrfs --version
>>>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>>>
>>>>>>>>>>> So are the user tools.
>>>>>>>>>>>
>>>>>>>>>>> Although I think it won't be a big problem, as the needed tools should be there.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> btrfs fi show
>>>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>>>
>>>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>>>> It normally makes things harder to debug, so I can only provide
>>>>>>>>>>> advice from the btrfs side.
>>>>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>>>
>>>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>>>> devices, algorithm 2
>>>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>>>> across:523708k
>>>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>>>
>>>>>>>>>>> This shows it pretty clear, btrfs fails to read chunk root.
>>>>>>>>>>> And according to your "len 4096" above, it's a pretty old fs, as
>>>>>>>>>>> it's still using 4K nodesize rather than 16K nodesize.
>>>>>>>>>>>
>>>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk mapping.
>>>>>>>>>>>
>>>>>>>>>>> Please provide the following command output:
>>>>>>>>>>>
>>>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>>>
>>>>>>>>>>> Also, please consider running the following command and dump all its output:
>>>>>>>>>>>
>>>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127.
>>>>>>>>>>>
>>>>>>>>>>> Please note that the above command can take a long time to finish; if
>>>>>>>>>>> it works without problems, it may solve your issue.
>>>>>>>>>>> But if it doesn't work, the output could help me manually craft a fix
>>>>>>>>>>> for your super block.
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Qu
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>>>
>>>>>>>>>>>> In an attempt to recover the volume myself, I ran a few BTRFS
>>>>>>>>>>>> commands, mostly using advice from here:
>>>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However,
>>>>>>>>>>>> that actually seems to have made things worse, as I can no longer
>>>>>>>>>>>> mount the file system, not even in readonly mode.
>>>>>>>>>>>>
>>>>>>>>>>>> So, starting from the beginning, here is a list of things I have done
>>>>>>>>>>>> so far (hopefully I remembered the order in which I ran them!):
>>>>>>>>>>>>
>>>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>>>> operational (log showed I/O errors / UNKNOWN (0x2003)), so I replaced
>>>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>>>> 4. After the resync the volume was still inaccessible, so I looked at the
>>>>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>>>> out:
>>>>>>>>>>>>
>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>
>>>>>>>>>>>> 5. Then:
>>>>>>>>>>>>
>>>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>>>
>>>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>>>
>>>>>>>>>>>> btrfs scrub start
>>>>>>>>>>>>
>>>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>>>
>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>
>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>
>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>
>>>>>>>>>>>> 7. Seeing the scrub hang, I rebooted the NAS.
>>>>>>>>>>>> 8. I think this is when the volume stopped mounting at all.
>>>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>>>
>>>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>>>
>>>>>>>>>>>> I ran
>>>>>>>>>>>>
>>>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>>>
>>>>>>>>>>>> And that brings us to where I am now: seemingly corrupted BTRFS
>>>>>>>>>>>> metadata, and I am unable to mount the drive even with the recovery option.
>>>>>>>>>>>>
>>>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>>>
>>>>>>>>>>>> Kind regards
>>>>>>>>>>>> Michael
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
  2018-05-02  5:29                       ` Michael Wade
@ 2018-05-04 16:18                         ` Michael Wade
  2018-05-05  0:43                           ` Qu Wenruo
  0 siblings, 1 reply; 16+ messages in thread
From: Michael Wade @ 2018-05-04 16:18 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

Hi Qu,

The tool is still running and the log file is now ~300 MB. I guess it
shouldn't normally take this long... Is there anything else worth
trying?

Kind regards
Michael

On 2 May 2018 at 06:29, Michael Wade <spikewade@gmail.com> wrote:
> Thanks Qu,
>
> I actually aborted the run with the old btrfs tools once I saw its
> output. The run with the new btrfs tools is still going and has
> produced a log file of ~85 MB filled with that content so far.
>
> Kind regards
> Michael
>
> On 2 May 2018 at 02:31, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2018年05月01日 23:50, Michael Wade wrote:
>>> Hi Qu,
>>>
>>> Oh dear that is not good news!
>>>
>>> I have been running the find-root command since yesterday, but it only
>>> seems to be outputting the following message:
>>>
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>
>> It's mostly fine, as find-root goes through all tree blocks and tries
>> to read them as tree blocks.
>> Although btrfs-find-root suppresses csum error output, such basic
>> tree validation checks are not suppressed, so you get this message.
>>
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>
>>> I tried with the latest btrfs tools compiled from source and the ones
>>> I have installed with the same result. Is there a CLI utility I could
>>> use to determine if the log contains any other content?
>>
>> Did it report any useful info at the end?
>>
>> Thanks,
>> Qu
>>
>>>
>>> Kind regards
>>> Michael
>>>
>>>
>>> On 30 April 2018 at 04:02, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2018年04月29日 22:08, Michael Wade wrote:
>>>>> Hi Qu,
>>>>>
>>>>> Got this error message:
>>>>>
>>>>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>>>>> btrfs-progs v4.16.1
>>>>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>>>>> ERROR: cannot read chunk root
>>>>> ERROR: unable to open /dev/md127
>>>>>
>>>>> I have attached the dumps for:
>>>>>
>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>
>>>> Unfortunately, both dumps are corrupted and contain mostly garbage.
>>>> I think the underlying stack (mdraid) has something wrong or failed
>>>> to recover its data.
>>>>
>>>> This means your last chance will be btrfs-find-root.
>>>>
>>>> Please try:
>>>> # btrfs-find-root -o 3 <device>
>>>>
>>>> And provide all the output.
>>>>
>>>> But please keep in mind, the chunk root is a critical tree, and so far
>>>> it's already heavily damaged.
>>>> Although I could still continue trying to recover, there is a pretty
>>>> low chance now.
>>>>
>>>> Thanks,
>>>> Qu
>>>>>
>>>>> Kind regards
>>>>> Michael
>>>>>
>>>>>
>>>>> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2018年04月29日 16:59, Michael Wade wrote:
>>>>>>> Ok, will it be possible for me to install the new version of the tools
>>>>>>> on my current kernel without overwriting the existing install? I'm
>>>>>>> hesitant to update kernel/btrfs as it might break the ReadyNAS
>>>>>>> interface / future firmware upgrades.
>>>>>>>
>>>>>>> Perhaps I could grab this:
>>>>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>>>>> hopefully build from source and then run the binaries directly?
>>>>>>
>>>>>> Of course, that's how most of us test btrfs-progs builds.
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>
>>>>>>>
>>>>>>> Kind regards
>>>>>>>
>>>>>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>>>>>> Thanks Qu,
>>>>>>>>>
>>>>>>>>> Please find attached the log file for the chunk recover command.
>>>>>>>>
>>>>>>>> Strangely, btrfs chunk recovery found no extra chunks beyond the
>>>>>>>> current system chunk range.
>>>>>>>>
>>>>>>>> Which means the chunk tree is corrupted.
>>>>>>>>
>>>>>>>> Please dump the chunk tree with the latest btrfs-progs (which provides
>>>>>>>> the new --follow option).
>>>>>>>>
>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>>>>>
>>>>>>>> If it doesn't work, please provide the following binary dump:
>>>>>>>>
>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>>> (And you will need to repeat similar dumps several times according to
>>>>>>>> the above output.)
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Qu
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Kind regards
>>>>>>>>> Michael
>>>>>>>>>
>>>>>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>>>>>> Hi Qu,
>>>>>>>>>>>
>>>>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel;
>>>>>>>>>>> however, I worry that future ReadyNAS firmware upgrades would fail on
>>>>>>>>>>> a newer kernel version (I don't have much Linux experience, so maybe
>>>>>>>>>>> my concerns are unfounded!?).
>>>>>>>>>>>
>>>>>>>>>>> I have attached the output of the dump super command.
>>>>>>>>>>>
>>>>>>>>>>> I did actually run chunk recover before, without the verbose option;
>>>>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>>>>> to start that again if you need its output.
>>>>>>>>>>
>>>>>>>>>> The system chunk only contains the following chunks:
>>>>>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>>>>>                         used up
>>>>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>>>>                         The newly created large system chunk.
>>>>>>>>>>
>>>>>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>>>>>> its leaves are out of the range.
>>>>>>>>>>
>>>>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>>>>>> move the disk to some other computer, and use the latest btrfs-progs
>>>>>>>>>> to execute the following command:
>>>>>>>>>>
>>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>>>>
>>>>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>>>>> system chunk and save the day.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Qu
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Thanks so much for your help.
>>>>>>>>>>>
>>>>>>>>>>> Kind regards
>>>>>>>>>>> Michael
>>>>>>>>>>>
>>>>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>>
>>>>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>>>>> started after a power cut; subsequently, the volume would not mount.
>>>>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>>>>
>>>>>>>>>>>>> uname -a
>>>>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>>>>
>>>>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>>>>> Strongly recommended to upgrade.
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> btrfs --version
>>>>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>>>>
>>>>>>>>>>>> So are the user tools.
>>>>>>>>>>>>
>>>>>>>>>>>> Although I think it won't be a big problem, as the needed tools should be there.
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> btrfs fi show
>>>>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>>>>
>>>>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>>>>> It normally makes things harder to debug, so I can only provide
>>>>>>>>>>>> advice from the btrfs side.
>>>>>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>>>>
>>>>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>>>>> devices, algorithm 2
>>>>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>>>>> across:523708k
>>>>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>>>>
>>>>>>>>>>>> This shows it pretty clear, btrfs fails to read chunk root.
>>>>>>>>>>>> And according to your "len 4096" above, it's a pretty old fs, as
>>>>>>>>>>>> it's still using 4K nodesize rather than 16K nodesize.
>>>>>>>>>>>>
>>>>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk mapping.
>>>>>>>>>>>>
>>>>>>>>>>>> Please provide the following command output:
>>>>>>>>>>>>
>>>>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>>>>
>>>>>>>>>>>> Also, please consider running the following command and dump all its output:
>>>>>>>>>>>>
>>>>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127.
>>>>>>>>>>>>
>>>>>>>>>>>> Please note that the above command can take a long time to finish; if
>>>>>>>>>>>> it works without problems, it may solve your issue.
>>>>>>>>>>>> But if it doesn't work, the output could help me manually craft a fix
>>>>>>>>>>>> for your super block.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Qu
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>>>>
>>>>>>>>>>>>> In an attempt to recover the volume myself, I ran a few BTRFS
>>>>>>>>>>>>> commands, mostly using advice from here:
>>>>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However,
>>>>>>>>>>>>> that actually seems to have made things worse, as I can no longer
>>>>>>>>>>>>> mount the file system, not even in readonly mode.
>>>>>>>>>>>>>
>>>>>>>>>>>>> So, starting from the beginning, here is a list of things I have done
>>>>>>>>>>>>> so far (hopefully I remembered the order in which I ran them!):
>>>>>>>>>>>>>
>>>>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>>>>> operational (log showed I/O errors / UNKNOWN (0x2003)), so I replaced
>>>>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>>>>> 4. After the resync the volume was still inaccessible, so I looked at the
>>>>>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>>>>> out:
>>>>>>>>>>>>>
>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>
>>>>>>>>>>>>> 5. Then:
>>>>>>>>>>>>>
>>>>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>>>>
>>>>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>>>>
>>>>>>>>>>>>> btrfs scrub start
>>>>>>>>>>>>>
>>>>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>>>>
>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>
>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>
>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>
>>>>>>>>>>>>> 7. Seeing the scrub hang, I rebooted the NAS.
>>>>>>>>>>>>> 8. I think this is when the volume stopped mounting at all.
>>>>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>>>>
>>>>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>>>>
>>>>>>>>>>>>> I ran
>>>>>>>>>>>>>
>>>>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>>>>
>>>>>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>>>>>> metadata and unable to mount the drive even with the recovery option.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>>>>
>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>> Michael
>>>>>>>>>>>>> --
>>>>>>>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>>>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>
>>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
  2018-05-04 16:18                         ` Michael Wade
@ 2018-05-05  0:43                           ` Qu Wenruo
  2018-05-19 11:43                             ` Michael Wade
  0 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2018-05-05  0:43 UTC (permalink / raw)
  To: Michael Wade; +Cc: linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 17135 bytes --]



On 2018年05月05日 00:18, Michael Wade wrote:
> Hi Qu,
> 
> The tool is still running and the log file is now ~300 MB. I guess it
> shouldn't normally take this long. Is there anything else worth
> trying?

I'm afraid not much.

It would be possible to modify btrfs-find-root to do a much faster but
more limited search.

But from the result, it looks like underlying device corruption, and
there is not much we can do right now.
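One way to narrow such a search without patching the tool is btrfs-find-root's `-g` generation filter, fed with the generation reported by dump-super. A minimal sketch, assuming the dump-super output has been saved to /tmp/super.txt (the file name and sample values are assumptions for illustration; the final command is only printed, not run against the device):

```shell
# Sample dump-super fields (hypothetical values standing in for real output).
cat > /tmp/super.txt <<'EOF'
generation              151800
root                    3208757641216
EOF

# Pull the generation and build a narrowed btrfs-find-root invocation.
gen=$(awk '$1 == "generation" {print $2}' /tmp/super.txt)
echo "btrfs-find-root -o 3 -g $gen /dev/md127"
```

Filtering by generation makes find-root report only tree blocks whose generation matches, which should cut the output dramatically on a device this large.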

Thanks,
Qu

> 
> Kind regards
> Michael
> 
> On 2 May 2018 at 06:29, Michael Wade <spikewade@gmail.com> wrote:
>> Thanks Qu,
>>
>> I actually aborted the run with the old btrfs tools once I saw their
>> output. The new btrfs tools are still running and have produced a log
>> file of ~85 MB filled with that content so far.
>>
>> Kind regards
>> Michael
>>
>> On 2 May 2018 at 02:31, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>
>>> On 2018年05月01日 23:50, Michael Wade wrote:
>>>> Hi Qu,
>>>>
>>>> Oh dear that is not good news!
>>>>
>>>> I have been running the find root command since yesterday but it only
>>>> seems to be outputting the following message:
>>>>
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>
>>> It's mostly fine, as find-root will go through all tree blocks and try
>>> to read them as tree blocks.
>>> Although btrfs-find-root will suppress csum error output, such basic
>>> tree validation checks are not suppressed, which is why you see this
>>> message.
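A quick way to see whether the log holds anything besides the repeated alignment errors is to filter the noise out with grep. A sketch using a hypothetical log file (the name and sample lines are assumptions for illustration):

```shell
# Stand-in for the real btrfs-find-root log (contents are hypothetical).
cat > /tmp/find-root.log <<'EOF'
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
Superblock thinks the generation is 151800
EOF

# Drop the repeated noise and deduplicate whatever remains.
grep -v 'is not aligned to sectorsize' /tmp/find-root.log | sort -u
```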
>>>
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>
>>>> I tried with the latest btrfs tools compiled from source and the ones
>>>> I have installed with the same result. Is there a CLI utility I could
>>>> use to determine if the log contains any other content?
>>>
>>> Did it report any useful info at the end?
>>>
>>> Thanks,
>>> Qu
>>>
>>>>
>>>> Kind regards
>>>> Michael
>>>>
>>>>
>>>> On 30 April 2018 at 04:02, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>
>>>>>
>>>>> On 2018年04月29日 22:08, Michael Wade wrote:
>>>>>> Hi Qu,
>>>>>>
>>>>>> Got this error message:
>>>>>>
>>>>>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>>>>>> btrfs-progs v4.16.1
>>>>>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>>>>>> ERROR: cannot read chunk root
>>>>>> ERROR: unable to open /dev/md127
>>>>>>
>>>>>> I have attached the dumps for:
>>>>>>
>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>
>>>>> Unfortunately, both dumps are corrupted and contain mostly garbage.
>>>>> I think the underlying stack (mdraid) has something wrong or failed
>>>>> to recover its data.
>>>>>
>>>>> This means your last chance will be btrfs-find-root.
>>>>>
>>>>> Please try:
>>>>> # btrfs-find-root -o 3 <device>
>>>>>
>>>>> And provide all the output.
>>>>>
>>>>> But please keep in mind, chunk root is a critical tree, and so far it's
>>>>> already heavily damaged.
>>>>> Although I could still try to continue the recovery, the chance of
>>>>> success is pretty low now.
>>>>>
>>>>> Thanks,
>>>>> Qu
>>>>>>
>>>>>> Kind regards
>>>>>> Michael
>>>>>>
>>>>>>
>>>>>> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2018年04月29日 16:59, Michael Wade wrote:
>>>>>>>> Ok, will it be possible for me to install the new version of the tools
>>>>>>>> on my current kernel without overwriting the existing install? Hesitant
>>>>>>>> to update kernel/btrfs as it might break the ReadyNAS interface /
>>>>>>>> future firmware upgrades.
>>>>>>>>
>>>>>>>> Perhaps I could grab this:
>>>>>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>>>>>> hopefully build from source and then run the binaries directly?
>>>>>>>
>>>>>>> Of course, that's how most of us test btrfs-progs builds.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Qu
>>>>>>>
>>>>>>>>
>>>>>>>> Kind regards
>>>>>>>>
>>>>>>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>>>>>>> Thanks Qu,
>>>>>>>>>>
>>>>>>>>>> Please find attached the log file for the chunk recover command.
>>>>>>>>>
>>>>>>>>> Strangely, btrfs chunk recovery found no extra chunk beyond current
>>>>>>>>> system chunk range.
>>>>>>>>>
>>>>>>>>> Which means the chunk tree itself is corrupted.
>>>>>>>>>
>>>>>>>>> Please dump the chunk tree with latest btrfs-progs (which provides the
>>>>>>>>> new --follow option).
>>>>>>>>>
>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>>>>>>
>>>>>>>>> If it doesn't work, please provide the following binary dump:
>>>>>>>>>
>>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>>>> (And will need to repeat similar dump for several times according to
>>>>>>>>> above dump)
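The repeated dumps above can be scripted. A sketch that only prints the dd commands instead of running them (the offset list would grow as further tree-block locations are derived; printing first avoids touching the device by accident):

```shell
# Build one dd command per tree-block copy; echo them for review rather
# than executing against the device directly.
DEV=/dev/md127
i=1
for off in 266325721088 266359275520; do
    echo "dd if=$DEV of=/tmp/chunk_root.copy$i bs=1 count=32K skip=$off"
    i=$((i + 1))
done
```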
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Qu
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Kind regards
>>>>>>>>>> Michael
>>>>>>>>>>
>>>>>>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>>>>>>> Hi Qu,
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>>>>>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>>>>>>> newer kernel version (I don't have much linux experience so maybe my
>>>>>>>>>>>> concerns are unfounded!?).
>>>>>>>>>>>>
>>>>>>>>>>>> I have attached the output of the dump super command.
>>>>>>>>>>>>
>>>>>>>>>>>> I did actually run chunk recover before, without the verbose option,
>>>>>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>>>>>> to start that again if you need its output.
>>>>>>>>>>>
>>>>>>>>>>> The system chunk only contains the following chunks:
>>>>>>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>>>>>>                         used up
>>>>>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>>>>>                         The newly created large system chunk.
>>>>>>>>>>>
>>>>>>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>>>>>>> its leaves are out of that range.
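The ranges above can be checked mechanically. An illustrative helper (an editor's sketch, not a btrfs tool) that tests whether a logical bytenr falls inside any of the three system chunks; the address the kernel could not find at mount time lands outside all of them, which matches the "unable to find logical" errors:

```shell
# Check whether a logical bytenr falls inside any system chunk range
# quoted above (half-open intervals [start, end)).
in_sys_chunk() {
    b=$1
    hit=""
    for range in '0 4194304' '20971520 29360128' \
                 '20800943685632 20800977240064'; do
        set -- $range   # split "start end" into $1 and $2
        if [ "$b" -ge "$1" ] && [ "$b" -lt "$2" ]; then
            hit="[$1, $2)"
        fi
    done
    if [ -n "$hit" ]; then
        echo "bytenr $b is inside $hit"
    else
        echo "bytenr $b is outside every system chunk"
    fi
}

# The logical address from the mount-time errors:
in_sys_chunk 3208757641216
# -> bytenr 3208757641216 is outside every system chunk
```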
>>>>>>>>>>>
>>>>>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>>>>>>> move the disk to some other computer, and use the latest btrfs-progs
>>>>>>>>>>> to execute the following command:
>>>>>>>>>>>
>>>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>>>>>
>>>>>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>>>>>> system chunk and save the day.
>>>>>>>>>>>
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Qu
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks so much for your help.
>>>>>>>>>>>>
>>>>>>>>>>>> Kind regards
>>>>>>>>>>>> Michael
>>>>>>>>>>>>
>>>>>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> uname -a
>>>>>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>>>>>
>>>>>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>>>>>> Strongly recommended to upgrade.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs --version
>>>>>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>>>>>
>>>>>>>>>>>>> So is the user tools.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Although I think it won't be a big problem, as needed tool should be there.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs fi show
>>>>>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>>>>>
>>>>>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>>>>>> It would normally make things harder to debug, so I can only provide
>>>>>>>>>>>>> advice from the btrfs side.
>>>>>>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>>>>>> devices, algorithm 2
>>>>>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>>>>>> across:523708k
>>>>>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>>>>>
>>>>>>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>>>>>>> And according to your "len 4096" above, it's a pretty old fs, as it's
>>>>>>>>>>>>> still using a 4K nodesize rather than the 16K nodesize.
>>>>>>>>>>>>>
>>>>>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk
>>>>>>>>>>>>> mapping.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Please provide the following command output:
>>>>>>>>>>>>>
>>>>>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>>>>>
>>>>>>>>>>>>> Also, please consider running the following command and dumping all its output:
>>>>>>>>>>>>>
>>>>>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>>>>>>
>>>>>>>>>>>>> Please note that the above command can take a long time to finish, and
>>>>>>>>>>>>> if it works without problems, it may solve your issue.
>>>>>>>>>>>>> But if it doesn't work, the output could help me to manually craft a fix
>>>>>>>>>>>>> to your super block.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> Qu
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands
>>>>>>>>>>>>>> mostly using advice from here:
>>>>>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>>>>>>>>>>>>>> that actually seems to have made things worse as I can no longer mount
>>>>>>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>>>>>> 4. After the resync the volume was still inaccessible so I looked at the
>>>>>>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>>>>>> out:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 5. Then:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs scrub start
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> 7. Seeing this hanging I rebooted the NAS
>>>>>>>>>>>>>> 8. Think this is when the volume would not mount at all.
>>>>>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I ran
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>>>>>>> metadata and unable to mount the drive even with the recovery option.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>>> Michael
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>
>>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
  2018-05-05  0:43                           ` Qu Wenruo
@ 2018-05-19 11:43                             ` Michael Wade
       [not found]                               ` <CAB+znrFS=Xi+4tPS3szqZro1FdjnVcbe29UV9UMUUxsGL6NJUg@mail.gmail.com>
  0 siblings, 1 reply; 16+ messages in thread
From: Michael Wade @ 2018-05-19 11:43 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

I have let the find root command run for 14+ days; it has produced a
pretty huge log file (1.6 GB) but still hasn't completed. I think I will
start the process of reformatting my drives and starting over.

Thanks for your help anyway.

Kind regards
Michael

On 5 May 2018 at 01:43, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
> On 2018年05月05日 00:18, Michael Wade wrote:
>> Hi Qu,
>>
>> The tool is still running and the log file is now ~300 MB. I guess it
>> shouldn't normally take this long. Is there anything else worth
>> trying?
>
> I'm afraid not much.
>
> It would be possible to modify btrfs-find-root to do a much faster but
> more limited search.
>
> But from the result, it looks like underlying device corruption, and
> there is not much we can do right now.
>
> Thanks,
> Qu
>
>>
>> Kind regards
>> Michael
>>
>> On 2 May 2018 at 06:29, Michael Wade <spikewade@gmail.com> wrote:
>>> Thanks Qu,
>>>
>>> I actually aborted the run with the old btrfs tools once I saw their
>>> output. The new btrfs tools are still running and have produced a log
>>> file of ~85 MB filled with that content so far.
>>>
>>> Kind regards
>>> Michael
>>>
>>> On 2 May 2018 at 02:31, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2018年05月01日 23:50, Michael Wade wrote:
>>>>> Hi Qu,
>>>>>
>>>>> Oh dear that is not good news!
>>>>>
>>>>> I have been running the find root command since yesterday but it only
>>>>> seems to be outputting the following message:
>>>>>
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>
>>>> It's mostly fine, as find-root will go through all tree blocks and try
>>>> to read them as tree blocks.
>>>> Although btrfs-find-root will suppress csum error output, such basic
>>>> tree validation checks are not suppressed, which is why you see this
>>>> message.
>>>>
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>
>>>>> I tried with the latest btrfs tools compiled from source and the ones
>>>>> I have installed with the same result. Is there a CLI utility I could
>>>>> use to determine if the log contains any other content?
>>>>
>>>> Did it report any useful info at the end?
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>>
>>>>> Kind regards
>>>>> Michael
>>>>>
>>>>>
>>>>> On 30 April 2018 at 04:02, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2018年04月29日 22:08, Michael Wade wrote:
>>>>>>> Hi Qu,
>>>>>>>
>>>>>>> Got this error message:
>>>>>>>
>>>>>>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>>>>>>> btrfs-progs v4.16.1
>>>>>>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>>>>>>> ERROR: cannot read chunk root
>>>>>>> ERROR: unable to open /dev/md127
>>>>>>>
>>>>>>> I have attached the dumps for:
>>>>>>>
>>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>
>>>>>> Unfortunately, both dumps are corrupted and contain mostly garbage.
>>>>>> I think the underlying stack (mdraid) has something wrong or failed
>>>>>> to recover its data.
>>>>>>
>>>>>> This means your last chance will be btrfs-find-root.
>>>>>>
>>>>>> Please try:
>>>>>> # btrfs-find-root -o 3 <device>
>>>>>>
>>>>>> And provide all the output.
>>>>>>
>>>>>> But please keep in mind, chunk root is a critical tree, and so far it's
>>>>>> already heavily damaged.
>>>>>> Although I could still try to continue the recovery, the chance of
>>>>>> success is pretty low now.
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Michael
>>>>>>>
>>>>>>>
>>>>>>> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2018年04月29日 16:59, Michael Wade wrote:
>>>>>>>>> Ok, will it be possible for me to install the new version of the tools
>>>>>>>>> on my current kernel without overwriting the existing install? Hesitant
>>>>>>>>> to update kernel/btrfs as it might break the ReadyNAS interface /
>>>>>>>>> future firmware upgrades.
>>>>>>>>>
>>>>>>>>> Perhaps I could grab this:
>>>>>>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>>>>>>> hopefully build from source and then run the binaries directly?
>>>>>>>>
>>>>>>>> Of course, that's how most of us test btrfs-progs builds.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Qu
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Kind regards
>>>>>>>>>
>>>>>>>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2018年04月29日 16:11, Michael Wade wrote:
>>>>>>>>>>> Thanks Qu,
>>>>>>>>>>>
>>>>>>>>>>> Please find attached the log file for the chunk recover command.
>>>>>>>>>>
>>>>>>>>>> Strangely, btrfs chunk recovery found no extra chunk beyond current
>>>>>>>>>> system chunk range.
>>>>>>>>>>
>>>>>>>>>> Which means the chunk tree itself is corrupted.
>>>>>>>>>>
>>>>>>>>>> Please dump the chunk tree with latest btrfs-progs (which provides the
>>>>>>>>>> new --follow option).
>>>>>>>>>>
>>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>>>>>>>
>>>>>>>>>> If it doesn't work, please provide the following binary dump:
>>>>>>>>>>
>>>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>>>>> (And will need to repeat similar dump for several times according to
>>>>>>>>>> above dump)
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Qu
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Kind regards
>>>>>>>>>>> Michael
>>>>>>>>>>>
>>>>>>>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 2018年04月28日 17:37, Michael Wade wrote:
>>>>>>>>>>>>> Hi Qu,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>>>>>>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>>>>>>>> newer kernel version (I don't have much linux experience so maybe my
>>>>>>>>>>>>> concerns are unfounded!?).
>>>>>>>>>>>>>
>>>>>>>>>>>>> I have attached the output of the dump super command.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I did actually run chunk recover before, without the verbose option,
>>>>>>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>>>>>>> to start that again if you need its output.
>>>>>>>>>>>>
>>>>>>>>>>>> The system chunk only contains the following chunks:
>>>>>>>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>>>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be fully
>>>>>>>>>>>>                         used up
>>>>>>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>>>>>>                         The newly created large system chunk.
>>>>>>>>>>>>
>>>>>>>>>>>> The chunk root is still in the 2nd chunk and thus valid, but some of
>>>>>>>>>>>> its leaves are out of that range.
>>>>>>>>>>>>
>>>>>>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>>>>>>>> move the disk to some other computer, and use the latest btrfs-progs
>>>>>>>>>>>> to execute the following command:
>>>>>>>>>>>>
>>>>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>>>>>>
>>>>>>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>>>>>>> system chunk and save the day.
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Qu
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks so much for your help.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>> Michael
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 2018年04月28日 16:30, Michael Wade wrote:
>>>>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> uname -a
>>>>>>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>>>>>>> Strongly recommended to upgrade.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> btrfs --version
>>>>>>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So is the user tools.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Although I think it won't be a big problem, as needed tool should be there.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> btrfs fi show
>>>>>>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>>>>>>> It would normally make things harder to debug, so I can only provide
>>>>>>>>>>>>>> advice from the btrfs side.
>>>>>>>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>>>>>>> devices, algorithm 2
>>>>>>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>>>>>>> across:523708k
>>>>>>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>>>>>>>> And according to your "len 4096" above, it's a pretty old fs, as it's
>>>>>>>>>>>>>> still using a 4K nodesize rather than the 16K nodesize.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk
>>>>>>>>>>>>>> mapping.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Please provide the following command output:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Also, please consider running the following command and dumping all its output:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Please note that the above command can take a long time to finish, and
>>>>>>>>>>>>>> if it works without problems, it may solve your issue.
>>>>>>>>>>>>>> But if it doesn't work, the output could help me to manually craft a fix
>>>>>>>>>>>>>> to your super block.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Qu
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands
>>>>>>>>>>>>>>> mostly using advice from here:
>>>>>>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>>>>>>>>>>>>>>> that actually seems to have made things worse as I can no longer mount
>>>>>>>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>>>>>>> 4. After the resync the volume was still inaccessible so I looked at the
>>>>>>>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>>>>>>> out:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 5. Then:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> btrfs scrub start
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> 7. Seeing this hanging, I rebooted the NAS.
>>>>>>>>>>>>>>> 8. I think this is when the volume would no longer mount at all.
>>>>>>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I ran
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>>>>>>>> metadata, and I'm unable to mount the drive even with the recovery option.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>>>> Michael
>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
>>>>>>>>>>>>>>> the body of a message to majordomo@vger.kernel.org
>>>>>>>>>>>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>
>>>>
>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: BTRFS RAID filesystem unmountable
       [not found]                               ` <CAB+znrFS=Xi+4tPS3szqZro1FdjnVcbe29UV9UMUUxsGL6NJUg@mail.gmail.com>
@ 2018-12-06 23:26                                 ` Qu Wenruo
  0 siblings, 0 replies; 16+ messages in thread
From: Qu Wenruo @ 2018-12-06 23:26 UTC (permalink / raw)
  To: Michael Wade; +Cc: linux-btrfs

[-- Attachment #1.1: Type: text/plain, Size: 19971 bytes --]



On 2018/12/7 7:15 AM, Michael Wade wrote:
> Hi Qu,
> 
> Me again! Having formatted the drives and rebuilt the RAID array, I
> seem to be having the same problem as before (no power cut this
> time [I bought a UPS]).

But strangely, your super block shows it has a log tree, which means
either you hit a kernel panic/transaction abort, or an unexpected power
loss.

> The btrfs volume is broken on my ReadyNAS.
> 
> I have attached the results of some of the commands you asked me to
> run last time, and I am hoping you might be able to help me out.

This time, the problem is more serious: some chunk tree blocks are not
even inside the system chunk range, so no wonder it fails to mount.

To confirm it, you could run "btrfs ins dump-tree -b 17725903077376
<device>" and paste the output.

But I don't have any real clue. My guess is some kernel problem related
to new chunk allocation, or the chunk root node itself is already
seriously corrupted.

Considering how old your kernel is (4.4), it's not recommended to use
btrfs on such an old kernel, unless it has had tons of btrfs fixes
backported.

Thanks,
Qu
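
For readers following the thread: the "unable to find logical" and "chunk
tree blocks not inside the system chunk range" errors both come down to the
same lookup. Btrfs can only map a logical address to a physical location if
some chunk covers it, and the superblock's system chunk array bootstraps that
mapping. A minimal illustrative sketch, not btrfs code; the ranges are the
ones quoted later in this thread:

```python
# Simplified model of the logical-address lookup that fails in this thread.
# A logical address is resolvable only if some system chunk range covers it.
# Ranges are taken from the superblock dump discussed below in the thread.
SYSTEM_CHUNKS = [
    (0, 4194304),                      # initial temporary chunk (unused)
    (20971520, 29360128),              # system chunk created by mkfs
    (20800943685632, 20800977240064),  # the newly created large system chunk
]

def find_chunk(logical):
    """Return the (start, end) chunk range covering `logical`, or None."""
    for start, end in SYSTEM_CHUNKS:
        if start <= logical < end:
            return (start, end)
    return None  # -> the kernel's "unable to find logical ..." case

print(find_chunk(20800943685632))  # chunk root bytenr: covered
print(find_chunk(17725903077376))  # a chunk tree block: not covered -> None
```

A tree block whose bytenr falls outside every range, like 17725903077376
here, cannot even be read, which is why the mount fails before anything else.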

> 
> Kind regards
> Michael
> On Sat, 19 May 2018 at 12:43, Michael Wade <spikewade@gmail.com> wrote:
>>
>> I have let the find-root command run for 14+ days; it's produced a
>> pretty huge log file (1.6 GB) but still hasn't completed. I think I will
>> start the process of reformatting my drives and starting over.
>>
>> Thanks for your help anyway.
>>
>> Kind regards
>> Michael
>>
>> On 5 May 2018 at 01:43, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>
>>> On 2018-05-05 00:18, Michael Wade wrote:
>>>> Hi Qu,
>>>>
>>>> The tool is still running and the log file is now ~300 MB. I guess it
>>>> shouldn't normally take this long. Is there anything else worth
>>>> trying?
>>>
>>> I'm afraid not much.
>>>
>>> Although it would be possible to modify btrfs-find-root to do a much
>>> faster but more limited search.
>>>
>>> But from the results, it looks like underlying device corruption, and
>>> there's not much we can do right now.
>>>
>>> Thanks,
>>> Qu
>>>
>>>>
>>>> Kind regards
>>>> Michael
>>>>
>>>> On 2 May 2018 at 06:29, Michael Wade <spikewade@gmail.com> wrote:
>>>>> Thanks Qu,
>>>>>
>>>>> I actually aborted the run with the old btrfs tools once I saw its
>>>>> output. The run with the new btrfs tools is still going and has
>>>>> produced a log file of ~85 MB filled with that content so far.
>>>>>
>>>>> Kind regards
>>>>> Michael
>>>>>
>>>>> On 2 May 2018 at 02:31, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2018-05-01 23:50, Michael Wade wrote:
>>>>>>> Hi Qu,
>>>>>>>
>>>>>>> Oh dear that is not good news!
>>>>>>>
>>>>>>> I have been running the find-root command since yesterday, but it
>>>>>>> only seems to be outputting the following message:
>>>>>>>
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>
>>>>>> It's mostly fine: find-root goes through all candidate locations and
>>>>>> tries to read them as tree blocks.
>>>>>> btrfs-find-root suppresses csum error output, but this basic tree
>>>>>> validation check is not suppressed, hence the message you get.
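
The validation Qu describes can be sketched roughly like this (illustrative
only, not the actual btrfs-progs code):

```python
# Rough model of the sanity check btrfs-find-root applies to each candidate
# tree block before any deeper (checksum) validation. This filesystem uses
# the old 4K sectorsize/nodesize, per the errors quoted above.
SECTORSIZE = 4096

def plausible_tree_block(bytenr, sectorsize=SECTORSIZE):
    # A real tree block address is non-zero and sector-aligned; a candidate
    # bytenr of 0 is rejected here, producing the repeated error above.
    return bytenr != 0 and bytenr % sectorsize == 0

print(plausible_tree_block(0))          # False -> "bytenr 0 is not aligned ..."
print(plausible_tree_block(232292352))  # True (232292352 is a multiple of 4096)
```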
>>>>>>
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>> ERROR: tree block bytenr 0 is not aligned to sectorsize 4096
>>>>>>>
>>>>>>> I tried with the latest btrfs tools compiled from source and the ones
>>>>>>> I have installed with the same result. Is there a CLI utility I could
>>>>>>> use to determine if the log contains any other content?
>>>>>>
>>>>>> Did it report any useful info at the end?
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>
>>>>>>>
>>>>>>> Kind regards
>>>>>>> Michael
>>>>>>>
>>>>>>>
>>>>>>> On 30 April 2018 at 04:02, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2018-04-29 22:08, Michael Wade wrote:
>>>>>>>>> Hi Qu,
>>>>>>>>>
>>>>>>>>> Got this error message:
>>>>>>>>>
>>>>>>>>> ./btrfs inspect dump-tree -b 20800943685632 /dev/md127
>>>>>>>>> btrfs-progs v4.16.1
>>>>>>>>> bytenr mismatch, want=20800943685632, have=3118598835113619663
>>>>>>>>> ERROR: cannot read chunk root
>>>>>>>>> ERROR: unable to open /dev/md127
>>>>>>>>>
>>>>>>>>> I have attached the dumps for:
>>>>>>>>>
>>>>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>>>> dd if=/dev/md127 of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>>>
>>>>>>>> Unfortunately, both dumps are corrupted and contain mostly garbage.
>>>>>>>> I think the underlying stack (mdraid) has something wrong or failed
>>>>>>>> to recover its data.
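
A quick way to see the "bytenr mismatch" that dump-tree reported, and why the
dd dumps count as garbage: every btrfs tree block records its own logical
address in its header. This sketch assumes the on-disk header layout (32-byte
checksum, 16-byte fsid, then the bytenr as a little-endian u64 at byte offset
48); treat it as an inspection aid, not btrfs-progs code:

```python
import struct

def header_bytenr(block):
    """Read the logical bytenr a btrfs tree block claims for itself.

    Assumes the header starts with a 32-byte csum and a 16-byte fsid,
    so the little-endian u64 bytenr sits at byte offset 48.
    """
    (bytenr,) = struct.unpack_from("<Q", block, 48)
    return bytenr

# A healthy block read from logical address X stores X in its header.
# The dumps in this thread instead yield garbage such as 3118598835113619663
# where 20800943685632 was expected.
garbage = bytes(48) + struct.pack("<Q", 3118598835113619663) + bytes(8)
want = 20800943685632
print(header_bytenr(garbage) == want)  # False: bytenr mismatch
```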
>>>>>>>>
>>>>>>>> This means your last chance will be btrfs-find-root.
>>>>>>>>
>>>>>>>> Please try:
>>>>>>>> # btrfs-find-root -o 3 <device>
>>>>>>>>
>>>>>>>> And provide all the output.
>>>>>>>>
>>>>>>>> But please keep in mind that the chunk root is a critical tree, and
>>>>>>>> so far it's already heavily damaged.
>>>>>>>> Although I could still keep trying to recover it, the chances are
>>>>>>>> pretty low now.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Qu
>>>>>>>>>
>>>>>>>>> Kind regards
>>>>>>>>> Michael
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On 29 April 2018 at 10:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2018-04-29 16:59, Michael Wade wrote:
>>>>>>>>>>> OK, would it be possible for me to install the new version of the
>>>>>>>>>>> tools on my current kernel without overwriting the existing install?
>>>>>>>>>>> I'm hesitant to update the kernel/btrfs as it might break the
>>>>>>>>>>> ReadyNAS interface / future firmware upgrades.
>>>>>>>>>>>
>>>>>>>>>>> Perhaps I could grab this:
>>>>>>>>>>> https://github.com/kdave/btrfs-progs/releases/tag/v4.16.1 and
>>>>>>>>>>> hopefully build from source and then run the binaries directly?
>>>>>>>>>>
>>>>>>>>>> Of course, that's how most of us test btrfs-progs builds.
>>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Qu
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> Kind regards
>>>>>>>>>>>
>>>>>>>>>>> On 29 April 2018 at 09:33, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On 2018-04-29 16:11, Michael Wade wrote:
>>>>>>>>>>>>> Thanks Qu,
>>>>>>>>>>>>>
>>>>>>>>>>>>> Please find attached the log file for the chunk recover command.
>>>>>>>>>>>>
>>>>>>>>>>>> Strangely, btrfs chunk recovery found no extra chunks beyond the
>>>>>>>>>>>> current system chunk range.
>>>>>>>>>>>>
>>>>>>>>>>>> Which means the chunk tree itself is corrupted.
>>>>>>>>>>>>
>>>>>>>>>>>> Please dump the chunk tree with latest btrfs-progs (which provides the
>>>>>>>>>>>> new --follow option).
>>>>>>>>>>>>
>>>>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 <device>
>>>>>>>>>>>>
>>>>>>>>>>>> If it doesn't work, please provide the following binary dump:
>>>>>>>>>>>>
>>>>>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy1 bs=1 count=32K skip=266325721088
>>>>>>>>>>>> # dd if=<dev> of=/tmp/chunk_root.copy2 bs=1 count=32K skip=266359275520
>>>>>>>>>>>> (We may need to repeat similar dumps several times, depending on
>>>>>>>>>>>> the above output.)
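
As an aside for anyone reproducing these dumps: `dd` with `bs=1` issues one
read syscall per byte, which is painfully slow on a 32 KiB extraction. A
sketch of an equivalent single-read approach (the device path and offsets are
the ones from this thread; run against your own device at your own risk):

```python
# Single-call equivalent of the two `dd bs=1 count=32K skip=...` commands
# above: read `length` bytes at `offset` from a device or file, optionally
# saving them for later inspection.
import os

def dump_range(dev, offset, length=32 * 1024, out=None):
    fd = os.open(dev, os.O_RDONLY)
    try:
        data = os.pread(fd, length, offset)  # one positioned read, no seek
    finally:
        os.close(fd)
    if out is not None:
        with open(out, "wb") as f:
            f.write(data)
    return data

# dump_range("/dev/md127", 266325721088, out="/tmp/chunk_root.copy1")
# dump_range("/dev/md127", 266359275520, out="/tmp/chunk_root.copy2")
```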
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Qu
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>> Michael
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 28 April 2018 at 12:38, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On 2018-04-28 17:37, Michael Wade wrote:
>>>>>>>>>>>>>>> Hi Qu,
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks for your reply. I will investigate upgrading the kernel,
>>>>>>>>>>>>>>> however I worry that future ReadyNAS firmware upgrades would fail on a
>>>>>>>>>>>>>>> newer kernel version (I don't have much Linux experience, so maybe my
>>>>>>>>>>>>>>> concerns are unfounded!).
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have attached the output of the dump super command.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I did actually run chunk recover before, without the verbose option;
>>>>>>>>>>>>>>> it took around 24 hours to finish but did not resolve my issue. Happy
>>>>>>>>>>>>>>> to start that again if you need its output.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The system chunk only contains the following chunks:
>>>>>>>>>>>>>> [0, 4194304]:           Initial temporary chunk, not used at all
>>>>>>>>>>>>>> [20971520, 29360128]:   System chunk created by mkfs, should be full
>>>>>>>>>>>>>>                         used up
>>>>>>>>>>>>>> [20800943685632, 20800977240064]:
>>>>>>>>>>>>>>                         The newly created large system chunk.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The chunk root is still inside the 2nd chunk and thus valid, but some
>>>>>>>>>>>>>> of its leaves are out of that range.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If you can't wait 24h for chunk recovery to run, my advice would be to
>>>>>>>>>>>>>> move the disks to some other computer, and use the latest btrfs-progs
>>>>>>>>>>>>>> to execute the following command:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> # btrfs inspect dump-tree -b 20800943685632 --follow
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If we're lucky enough, we may read out the tree leaf containing the new
>>>>>>>>>>>>>> system chunk and save the day.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Qu
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks so much for your help.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>>>> Michael
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On 28 April 2018 at 09:45, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On 2018-04-28 16:30, Michael Wade wrote:
>>>>>>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I was hoping that someone would be able to help me resolve the issues
>>>>>>>>>>>>>>>>> I am having with my ReadyNAS BTRFS volume. Basically my trouble
>>>>>>>>>>>>>>>>> started after a power cut, subsequently the volume would not mount.
>>>>>>>>>>>>>>>>> Here are the details of my setup as it is at the moment:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> uname -a
>>>>>>>>>>>>>>>>> Linux QAI 4.4.116.alpine.1 #1 SMP Mon Feb 19 21:58:38 PST 2018 armv7l GNU/Linux
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The kernel is pretty old for btrfs.
>>>>>>>>>>>>>>>> Upgrading is strongly recommended.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> btrfs --version
>>>>>>>>>>>>>>>>> btrfs-progs v4.12
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So are the user tools.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Although I think it won't be a big problem, as the needed tools should be there.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> btrfs fi show
>>>>>>>>>>>>>>>>> Label: '11baed92:data'  uuid: 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>>>> Total devices 1 FS bytes used 5.12TiB
>>>>>>>>>>>>>>>>> devid    1 size 7.27TiB used 6.24TiB path /dev/md127
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So, it's btrfs on mdraid.
>>>>>>>>>>>>>>>> That normally makes things harder to debug, so I can only provide
>>>>>>>>>>>>>>>> advice from the btrfs side.
>>>>>>>>>>>>>>>> For the mdraid part, I can't guarantee anything.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Here are the relevant dmesg logs for the current state of the device:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [   19.119391] md: md127 stopped.
>>>>>>>>>>>>>>>>> [   19.120841] md: bind<sdb3>
>>>>>>>>>>>>>>>>> [   19.121120] md: bind<sdc3>
>>>>>>>>>>>>>>>>> [   19.121380] md: bind<sda3>
>>>>>>>>>>>>>>>>> [   19.125535] md/raid:md127: device sda3 operational as raid disk 0
>>>>>>>>>>>>>>>>> [   19.125547] md/raid:md127: device sdc3 operational as raid disk 2
>>>>>>>>>>>>>>>>> [   19.125554] md/raid:md127: device sdb3 operational as raid disk 1
>>>>>>>>>>>>>>>>> [   19.126712] md/raid:md127: allocated 3240kB
>>>>>>>>>>>>>>>>> [   19.126778] md/raid:md127: raid level 5 active with 3 out of 3
>>>>>>>>>>>>>>>>> devices, algorithm 2
>>>>>>>>>>>>>>>>> [   19.126784] RAID conf printout:
>>>>>>>>>>>>>>>>> [   19.126789]  --- level:5 rd:3 wd:3
>>>>>>>>>>>>>>>>> [   19.126794]  disk 0, o:1, dev:sda3
>>>>>>>>>>>>>>>>> [   19.126799]  disk 1, o:1, dev:sdb3
>>>>>>>>>>>>>>>>> [   19.126804]  disk 2, o:1, dev:sdc3
>>>>>>>>>>>>>>>>> [   19.128118] md127: detected capacity change from 0 to 7991637573632
>>>>>>>>>>>>>>>>> [   19.395112] Adding 523708k swap on /dev/md1.  Priority:-1 extents:1
>>>>>>>>>>>>>>>>> across:523708k
>>>>>>>>>>>>>>>>> [   19.434956] BTRFS: device label 11baed92:data devid 1 transid
>>>>>>>>>>>>>>>>> 151800 /dev/md127
>>>>>>>>>>>>>>>>> [   19.739276] BTRFS info (device md127): setting nodatasum
>>>>>>>>>>>>>>>>> [   19.740440] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>>>> [   19.740450] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>>>> [   19.740498] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>>>> [   19.740512] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>>>> [   19.740552] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>>>> [   19.740560] BTRFS critical (device md127): unable to find logical
>>>>>>>>>>>>>>>>> 3208757641216 len 4096
>>>>>>>>>>>>>>>>> [   19.740576] BTRFS error (device md127): failed to read chunk root
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> This shows it pretty clearly: btrfs fails to read the chunk root.
>>>>>>>>>>>>>>>> And according to the "len 4096" above, it's a pretty old fs, as it's
>>>>>>>>>>>>>>>> still using a 4K nodesize rather than the 16K default.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> According to the above output, your superblock somehow lacks the
>>>>>>>>>>>>>>>> needed system chunk mapping, which is used to initialize the chunk mapping.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Please provide the following command output:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> # btrfs inspect dump-super -fFa /dev/md127
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Also, please consider running the following command and capturing all its output:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> # btrfs rescue chunk-recover -v /dev/md127
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Please note that the above command can take a long time to finish; if
>>>>>>>>>>>>>>>> it works without problems, it may well solve your issue.
>>>>>>>>>>>>>>>> But if it doesn't, its output could help me manually craft a fix
>>>>>>>>>>>>>>>> for your superblock.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>> Qu
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> [   19.783975] BTRFS error (device md127): open_ctree failed
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> In an attempt to recover the volume myself I ran a few BTRFS commands
>>>>>>>>>>>>>>>>> mostly using advice from here:
>>>>>>>>>>>>>>>>> https://lists.opensuse.org/opensuse/2017-02/msg00930.html. However
>>>>>>>>>>>>>>>>> that actually seems to have made things worse as I can no longer mount
>>>>>>>>>>>>>>>>> the file system, not even in readonly mode.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So starting from the beginning here is a list of things I have done so
>>>>>>>>>>>>>>>>> far (hopefully I remembered the order in which I ran them!)
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 1. Noticed that my backups to the NAS were not running (didn't get
>>>>>>>>>>>>>>>>> notified that the volume had basically "died")
>>>>>>>>>>>>>>>>> 2. ReadyNAS UI indicated that the volume was inactive.
>>>>>>>>>>>>>>>>> 3. SSHed onto the box and found that the first drive was not marked as
>>>>>>>>>>>>>>>>> operational (log showed I/O errors / UNKOWN (0x2003))  so I replaced
>>>>>>>>>>>>>>>>> the disk and let the array resync.
>>>>>>>>>>>>>>>>> 4. After the resync the volume was still inaccessible so I looked at the
>>>>>>>>>>>>>>>>> logs once more and saw something like the following which seemed to
>>>>>>>>>>>>>>>>> indicate that the replay log had been corrupted when the power went
>>>>>>>>>>>>>>>>> out:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>>>> BTRFS: error (device md127) in btrfs_replay_log:2524: errno=-5 IO
>>>>>>>>>>>>>>>>> failure (Failed to recover log tree)
>>>>>>>>>>>>>>>>> BTRFS error (device md127): pending csums is 155648
>>>>>>>>>>>>>>>>> BTRFS error (device md127): cleaner transaction attach returned -30
>>>>>>>>>>>>>>>>> BTRFS critical (device md127): corrupt leaf, non-root leaf's nritems
>>>>>>>>>>>>>>>>> is 0: block=232292352, root=7, slot=0
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 5. Then:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> btrfs rescue zero-log
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 6. Was then able to mount the volume in readonly mode.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> btrfs scrub start
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Which fixed some errors but not all:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:00:34
>>>>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> scrub status for 20628cda-d98f-4f85-955c-932a367f8821
>>>>>>>>>>>>>>>>> scrub started at Tue Apr 24 17:27:44 2018, running for 04:34:43
>>>>>>>>>>>>>>>>> total bytes scrubbed: 224.26GiB with 6 errors
>>>>>>>>>>>>>>>>> error details: csum=6
>>>>>>>>>>>>>>>>> corrected errors: 0, uncorrectable errors: 6, unverified errors: 0
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> 7. Seeing this hanging, I rebooted the NAS.
>>>>>>>>>>>>>>>>> 8. I think this is when the volume would no longer mount at all.
>>>>>>>>>>>>>>>>> 9. Seeing log entries like these:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> BTRFS warning (device md127): checksum error at logical 20800943685632
>>>>>>>>>>>>>>>>> on dev /dev/md127, sector 520167424: metadata node (level 1) in tree 3
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I ran
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> btrfs check --fix-crc
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> And that brings us to where I am now: some seemingly corrupted BTRFS
>>>>>>>>>>>>>>>>> metadata, and I'm unable to mount the drive even with the recovery option.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Any help you can give is much appreciated!
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Kind regards
>>>>>>>>>>>>>>>>> Michael
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>
>>>>>>
>>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, back to index

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-04-28  8:30 BTRFS RAID filesystem unmountable Michael Wade
2018-04-28  8:45 ` Qu Wenruo
2018-04-28  9:37   ` Michael Wade
2018-04-28 11:38     ` Qu Wenruo
     [not found]       ` <CAB+znrF_d+Hg_A9AMvWEB=S5eVAtYrvr2jUPcvR4FfB4hnCMWA@mail.gmail.com>
2018-04-29  8:33         ` Qu Wenruo
2018-04-29  8:59           ` Michael Wade
2018-04-29  9:33             ` Qu Wenruo
     [not found]               ` <CAB+znrEcW3+++ZBrB_ZGRFncssO-zffbJ6ug8_z0DJOhbp+vGA@mail.gmail.com>
2018-04-30  1:52                 ` Qu Wenruo
2018-04-30  3:02                 ` Qu Wenruo
2018-05-01 15:50                   ` Michael Wade
2018-05-02  1:31                     ` Qu Wenruo
2018-05-02  5:29                       ` Michael Wade
2018-05-04 16:18                         ` Michael Wade
2018-05-05  0:43                           ` Qu Wenruo
2018-05-19 11:43                             ` Michael Wade
     [not found]                               ` <CAB+znrFS=Xi+4tPS3szqZro1FdjnVcbe29UV9UMUUxsGL6NJUg@mail.gmail.com>
2018-12-06 23:26                                 ` Qu Wenruo
