* issue mounting volume
@ 2019-01-15 16:33 Christian Schneider
2019-01-15 22:13 ` Chris Murphy
2019-01-17 0:50 ` Qu Wenruo
0 siblings, 2 replies; 12+ messages in thread
From: Christian Schneider @ 2019-01-15 16:33 UTC (permalink / raw)
To: linux-btrfs
Hello all, after a power failure I have issues mounting a btrfs volume:
mount /dev/md42
mount: /home: wrong fs type, bad option, bad superblock on /dev/md42,
missing codepage or helper program, or other error
dmesg
[...]
[ 4322.061000] BTRFS info (device md42): use lzo compression, level 0
[ 4322.061004] BTRFS info (device md42): disk space caching is enabled
[ 4322.061005] BTRFS info (device md42): has skinny extents
[ 4323.016007] BTRFS error (device md42): parent transid verify failed
on 448888832 wanted 68773 found 68768
[ 4323.025656] BTRFS error (device md42): parent transid verify failed
on 448888832 wanted 68773 found 68771
[ 4323.025665] BTRFS error (device md42): failed to read block groups: -5
[ 4323.036088] BTRFS error (device md42): open_ctree failed
Adding -o ro,usebackuproot doesn't change anything: same mount error,
same error messages in dmesg.
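The wanted/found distance in those messages hints at how many transactions the stale copies lag behind. A minimal triage sketch (illustrative only; the dmesg line is copied verbatim from the log above):

```shell
# Illustrative only: extract the wanted/found generations from one of the
# dmesg lines above and compute the gap. A gap of only a few generations
# suggests the stale copies are just slightly behind the superblock.
line='parent transid verify failed on 448888832 wanted 68773 found 68768'
wanted=$(printf '%s\n' "$line" | sed -n 's/.*wanted \([0-9]*\).*/\1/p')
found=$(printf '%s\n' "$line" | sed -n 's/.*found \([0-9]*\)$/\1/p')
echo "generation gap: $((wanted - found))"
```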
I also tried btrfs check, which fails with the same error:
btrfs check /dev/md42
Opening filesystem to check...
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68771
parent transid verify failed on 448888832 wanted 68773 found 68771
Ignoring transid failure
leaf parent key incorrect 448888832
ERROR: cannot open file system
I also tried btrfs-find-root with this result:
btrfs-find-root /dev/md42
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68771
parent transid verify failed on 448888832 wanted 68773 found 68771
Ignoring transid failure
leaf parent key incorrect 448888832
Superblock thinks the generation is 68921
Superblock thinks the level is 1
Found tree root at 629129216 gen 68921 level 1
Well block 625803264(gen: 68917 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 602259456(gen: 68899 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 600997888(gen: 68898 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 581926912(gen: 68885 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 582320128(gen: 68884 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 578306048(gen: 68882 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 577339392(gen: 68881 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 575225856(gen: 68880 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 574341120(gen: 68879 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 574947328(gen: 68878 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 572555264(gen: 68877 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 571179008(gen: 68876 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 569393152(gen: 68875 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 568164352(gen: 68874 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 566050816(gen: 68873 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 565002240(gen: 68872 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 563920896(gen: 68871 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 564346880(gen: 68870 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 561446912(gen: 68869 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 560267264(gen: 68868 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 560545792(gen: 68867 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 557858816(gen: 68866 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
Well block 556924928(gen: 68865 level: 1) seems good, but
generation/level doesn't match, want gen: 68921 level: 1
[...]
This last message repeats with different block numbers, decreasing
generations, and a level that is sometimes 0.
A simple
btrfs filesystem show /dev/md42
Label: none uuid: 8c6746f0-944a-45c5-90f3-622724d15998
Total devices 1 FS bytes used 1.63TiB
devid 1 size 7.26TiB used 1.90TiB path /dev/md42
which seems OK to me.
Is there something I could try to recover from this?
Any help is welcome, as it happened during backup creation, and the
backup volume suffers from the same issue.
Additional system info:
uname -a
Linux jane 4.19.6-gentoo #1 SMP PREEMPT Sat Dec 15 13:26:24 CET 2018
x86_64 Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz GenuineIntel GNU/Linux
btrfs --version
btrfs-progs v4.19.1
BR, Christian
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: issue mounting volume
2019-01-15 16:33 issue mounting volume Christian Schneider
@ 2019-01-15 22:13 ` Chris Murphy
2019-01-16 18:22 ` Christian Schneider
2019-01-17 0:50 ` Qu Wenruo
1 sibling, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2019-01-15 22:13 UTC (permalink / raw)
To: Christian Schneider, Btrfs BTRFS, Qu Wenruo
On Tue, Jan 15, 2019 at 10:03 AM Christian Schneider <christian@ch-sc.de> wrote:
> uname -a
> Linux jane 4.19.6-gentoo #1 SMP PREEMPT Sat Dec 15 13:26:24 CET 2018
> x86_64 Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz GenuineIntel GNU/Linux
Your problem could be due to a block layer bug that's been discovered.
It should be fixed in 4.19.8+
https://lwn.net/Articles/774440/
I'm not sure offhand how to fix the corruption once it's happened,
and I'm not even sure whether that bug was a contributing factor on
top of the power failure. It is disappointing that btrfs check has no
way to iterate or try to infer a correction for a bad leaf, but it's
not yet certain how badly damaged that leaf is. You can dump it with
'btrfs insp dump-t -b 448888832 <dev>'
and remove file names before posting it; this might help a dev sort
out what the problem is.
--
Chris Murphy
* Re: issue mounting volume
2019-01-15 22:13 ` Chris Murphy
@ 2019-01-16 18:22 ` Christian Schneider
2019-01-17 0:12 ` Chris Murphy
0 siblings, 1 reply; 12+ messages in thread
From: Christian Schneider @ 2019-01-16 18:22 UTC (permalink / raw)
To: Chris Murphy, Btrfs BTRFS, Qu Wenruo
Thanks for your hints!
> Your problem could be due to a block layer bug that's been discovered.
> It should be fixed in 4.19.8+
> https://lwn.net/Articles/774440/
I looked into the article, and it mentions that the bug occurs when
no I/O scheduler is used, which is not the case for me, so I would
rule this out.
> 'btrfs insp dump-t -b 448888832 <dev>'
>
> and remove file names before posting it; this might help a dev sort
> out what the problem is.
I have done this, though no file names appear there. This is the output:
btrfs inspect-internal dump-tree -b 448888832 /dev/md42
btrfs-progs v4.19.1
leaf 448888832 items 29 free space 1298 generation 68768 owner CSUM_TREE
leaf 448888832 flags 0x1(WRITTEN) backref revision 1
fs uuid 8c6746f0-944a-45c5-90f3-622724d15998
chunk uuid 7fdb3778-4545-405e-84a9-fc3675e913e5
item 0 key (EXTENT_CSUM EXTENT_CSUM 273782669312) itemoff 15787
itemsize 496
range start 273782669312 end 273783177216 length 507904
item 1 key (EXTENT_CSUM EXTENT_CSUM 273783177216) itemoff 15783
itemsize 4
range start 273783177216 end 273783181312 length 4096
item 2 key (EXTENT_CSUM EXTENT_CSUM 273783181312) itemoff 15007
itemsize 776
range start 273783181312 end 273783975936 length 794624
item 3 key (EXTENT_CSUM EXTENT_CSUM 273783975936) itemoff 15003
itemsize 4
range start 273783975936 end 273783980032 length 4096
item 4 key (EXTENT_CSUM EXTENT_CSUM 273783980032) itemoff 10575
itemsize 4428
range start 273783980032 end 273788514304 length 4534272
item 5 key (EXTENT_CSUM EXTENT_CSUM 273788514304) itemoff 5599
itemsize 4976
range start 273788514304 end 273793609728 length 5095424
item 6 key (EXTENT_CSUM EXTENT_CSUM 273793871872) itemoff 5299
itemsize 300
range start 273793871872 end 273794179072 length 307200
item 7 key (EXTENT_CSUM EXTENT_CSUM 273794195456) itemoff 5291
itemsize 8
range start 273794195456 end 273794203648 length 8192
item 8 key (EXTENT_CSUM EXTENT_CSUM 273794203648) itemoff 5099
itemsize 192
range start 273794203648 end 273794400256 length 196608
item 9 key (EXTENT_CSUM EXTENT_CSUM 273794400256) itemoff 4791
itemsize 308
range start 273794400256 end 273794715648 length 315392
item 10 key (EXTENT_CSUM EXTENT_CSUM 273794715648) itemoff 4779
itemsize 12
range start 273794715648 end 273794727936 length 12288
item 11 key (EXTENT_CSUM EXTENT_CSUM 273794990080) itemoff 4683
itemsize 96
range start 273794990080 end 273795088384 length 98304
item 12 key (EXTENT_CSUM EXTENT_CSUM 273795088384) itemoff 4587
itemsize 96
range start 273795088384 end 273795186688 length 98304
item 13 key (EXTENT_CSUM EXTENT_CSUM 273795448832) itemoff 4571
itemsize 16
range start 273795448832 end 273795465216 length 16384
item 14 key (EXTENT_CSUM EXTENT_CSUM 273795465216) itemoff 4551
itemsize 20
range start 273795465216 end 273795485696 length 20480
item 15 key (EXTENT_CSUM EXTENT_CSUM 273795485696) itemoff 4523
itemsize 28
range start 273795485696 end 273795514368 length 28672
item 16 key (EXTENT_CSUM EXTENT_CSUM 273795514368) itemoff 4427
itemsize 96
range start 273795514368 end 273795612672 length 98304
item 17 key (EXTENT_CSUM EXTENT_CSUM 273795612672) itemoff 4315
itemsize 112
range start 273795612672 end 273795727360 length 114688
item 18 key (EXTENT_CSUM EXTENT_CSUM 273795727360) itemoff 4299
itemsize 16
range start 273795727360 end 273795743744 length 16384
item 19 key (EXTENT_CSUM EXTENT_CSUM 273795743744) itemoff 4283
itemsize 16
range start 273795743744 end 273795760128 length 16384
item 20 key (EXTENT_CSUM EXTENT_CSUM 273795760128) itemoff 4267
itemsize 16
range start 273795760128 end 273795776512 length 16384
item 21 key (EXTENT_CSUM EXTENT_CSUM 273796366336) itemoff 4015
itemsize 252
range start 273796366336 end 273796624384 length 258048
item 22 key (EXTENT_CSUM EXTENT_CSUM 273797410816) itemoff 3543
itemsize 472
range start 273797410816 end 273797894144 length 483328
item 23 key (EXTENT_CSUM EXTENT_CSUM 273798201344) itemoff 2931
itemsize 612
range start 273798201344 end 273798828032 length 626688
item 24 key (EXTENT_CSUM EXTENT_CSUM 273798828032) itemoff 2315
itemsize 616
range start 273798828032 end 273799458816 length 630784
item 25 key (EXTENT_CSUM EXTENT_CSUM 273799720960) itemoff 2255
itemsize 60
range start 273799720960 end 273799782400 length 61440
item 26 key (EXTENT_CSUM EXTENT_CSUM 273799782400) itemoff 2199
itemsize 56
range start 273799782400 end 273799839744 length 57344
item 27 key (EXTENT_CSUM EXTENT_CSUM 273799839744) itemoff 2139
itemsize 60
range start 273799839744 end 273799901184 length 61440
item 28 key (EXTENT_CSUM EXTENT_CSUM 273799901184) itemoff 2023
itemsize 116
range start 273799901184 end 273800019968 length 118784
I hope this enables someone to help me recover my data.
BR, Christian
* Re: issue mounting volume
2019-01-16 18:22 ` Christian Schneider
@ 2019-01-17 0:12 ` Chris Murphy
2019-01-17 10:33 ` Christian Schneider
0 siblings, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2019-01-17 0:12 UTC (permalink / raw)
To: Christian Schneider; +Cc: Btrfs BTRFS, Qu Wenruo
On Wed, Jan 16, 2019 at 11:22 AM Christian Schneider <christian@ch-sc.de> wrote:
>
> Thanks for your hints!
>
> > Your problem could be due to a block layer bug that's been discovered.
> > It should be fixed in 4.19.8+
> > https://lwn.net/Articles/774440/
> I looked into the article, and it mentions that the bug occurs when
> no I/O scheduler is used, which is not the case for me, so I would
> rule this out.
>
> > 'btrfs insp dump-t -b 448888832 <dev>'
> >
> > and remove file names before posting it; this might help a dev sort
> > out what the problem is.
> I have done this, though no file names appear there. This is the output:
>
>
> btrfs inspect-internal dump-tree -b 448888832 /dev/md42
> btrfs-progs v4.19.1
> leaf 448888832 items 29 free space 1298 generation 68768 owner CSUM_TREE
> leaf 448888832 flags 0x1(WRITTEN) backref revision 1
OK so for some reason that leaf is considered stale. I can't tell if
it really is stale, or if the complaint is bogus. The generation for
this leaf is 68768 but the current good tree expects it to be 68773,
which isn't that far off. Some decent chance it can be repaired
depending on what was happening at the time of the power failure.
What do you get for:
btrfs rescue super -v <dev>
btrfs insp dump-s -fa <dev>
These are readonly commands and do not change anything on disk; just
to reiterate I don't recommend 'btrfs check --repair' yet. If the
first command reports that all supers are good, no bad supers, then
you can try
btrfs check -b <dev> which will use the previous backup roots and see
if there's anything that can be done, or if it falls over with the
same complaint. It is possible to use your btrfs-find-root results to
plug in specific root addresses using btrfs check -b <address> <dev>,
working from the top of the list (the highest generation number) down.
But for starters just the first two commands above might
reveal a clue.
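The work-down-the-list step above can be sketched as a small shell helper. This is a sketch only: the sample lines are copied from the btrfs-find-root output earlier in the thread, and the actual check command is left commented out since it needs the real device.

```shell
# Sort candidate tree roots from btrfs-find-root output by generation,
# highest first, producing addresses to try one at a time with
# 'btrfs check -r <addr> <dev>' (read-only). Sample lines from this thread:
sample='Well block 625803264(gen: 68917 level: 1) seems good, but
Well block 602259456(gen: 68899 level: 1) seems good, but
Well block 600997888(gen: 68898 level: 1) seems good, but'

# Rewrite each line as "<gen> <bytenr>", sort numerically descending by
# generation, then keep only the block address column.
candidates=$(printf '%s\n' "$sample" \
  | sed -n 's/^Well block \([0-9]*\)(gen: \([0-9]*\).*/\2 \1/p' \
  | sort -rn \
  | cut -d' ' -f2)
printf '%s\n' "$candidates"
# Then, on the real device (read-only; stop at the first root that opens):
#   for addr in $candidates; do btrfs check -r "$addr" /dev/md42 && break; done
```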
--
Chris Murphy
* Re: issue mounting volume
2019-01-17 0:12 ` Chris Murphy
@ 2019-01-17 10:33 ` Christian Schneider
0 siblings, 0 replies; 12+ messages in thread
From: Christian Schneider @ 2019-01-17 10:33 UTC (permalink / raw)
To: Chris Murphy; +Cc: Btrfs BTRFS, Qu Wenruo
[-- Attachment #1: Type: text/plain, Size: 4120 bytes --]
> What do you get for:
>
> btrfs rescue super -v <dev>
> btrfs insp dump-s -fa <dev>
btrfs rescue super -v /dev/md42
All Devices:
Device: id = 1, name = /dev/md42
Before Recovering:
[All good supers]:
device name = /dev/md42
superblock bytenr = 65536
device name = /dev/md42
superblock bytenr = 67108864
device name = /dev/md42
superblock bytenr = 274877906944
[All bad supers]:
All supers are valid, no need to recover
btrfs insp dump-s -fa /dev/md42
Attached as a separate file, since it is a bit lengthy.
But both commands give no errors.
> btrfs check -b <dev> which will use the previous backup roots and see
> if there's anything that can be done, or if it falls over with the
> same complaint. It is possible to use your btrfs-find-root results to
> plug in specific root addresses using btrfs check -b <address> <dev>,
> working from the top of the list (the highest generation number) down.
> But for starters just the first two commands above might
> reveal a clue.
>
>
btrfs check -b /dev/md42
Opening filesystem to check...
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68771
parent transid verify failed on 448888832 wanted 68773 found 68771
Ignoring transid failure
leaf parent key incorrect 448888832
ERROR: cannot open file system
This yields the same as without -b.
btrfs check -b <address> <dev> isn't valid; I assume you mean btrfs
check -r <address> <dev>?
btrfs check /dev/md42
Opening filesystem to check...
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68771
parent transid verify failed on 448888832 wanted 68773 found 68771
Ignoring transid failure
leaf parent key incorrect 448888832
ERROR: cannot open file system
sudo btrfs check -r 629129216 /dev/md42
Opening filesystem to check...
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68771
parent transid verify failed on 448888832 wanted 68773 found 68771
Ignoring transid failure
leaf parent key incorrect 448888832
ERROR: cannot open file system
btrfs check -r 625803264 /dev/md42
Opening filesystem to check...
parent transid verify failed on 625803264 wanted 68921 found 68917
parent transid verify failed on 625803264 wanted 68921 found 68917
parent transid verify failed on 625803264 wanted 68921 found 68917
parent transid verify failed on 625803264 wanted 68921 found 68917
Ignoring transid failure
parent transid verify failed on 626180096 wanted 68917 found 68919
parent transid verify failed on 626180096 wanted 68917 found 68919
parent transid verify failed on 626180096 wanted 68917 found 68919
parent transid verify failed on 626180096 wanted 68917 found 68919
Ignoring transid failure
ERROR: child eb corrupted: parent bytenr=624934912 item=2 parent level=2
child level=0
ERROR: cannot open file system
btrfs check -r 602259456 /dev/md42
Opening filesystem to check...
parent transid verify failed on 602259456 wanted 68921 found 68899
parent transid verify failed on 602259456 wanted 68921 found 68899
parent transid verify failed on 602259456 wanted 68921 found 68899
parent transid verify failed on 602259456 wanted 68921 found 68899
Ignoring transid failure
parent transid verify failed on 604602368 wanted 68899 found 68901
parent transid verify failed on 604602368 wanted 68899 found 68901
parent transid verify failed on 604602368 wanted 68899 found 68901
parent transid verify failed on 604602368 wanted 68899 found 68901
Ignoring transid failure
ERROR: child eb corrupted: parent bytenr=601964544 item=2 parent level=2
child level=2
ERROR: cannot open file system
This continues; I didn't find an address that worked.
[-- Attachment #2: dump-super --]
[-- Type: text/plain, Size: 14727 bytes --]
superblock: bytenr=65536, device=/dev/md42
---------------------------------------------------------
csum_type 0 (crc32c)
csum_size 4
csum 0x035e2cd4 [match]
bytenr 65536
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
fsid 8c6746f0-944a-45c5-90f3-622724d15998
label
generation 68921
root 629129216
sys_array_size 129
chunk_root_generation 68764
root_level 1
chunk_root 22052864
chunk_root_level 1
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 7979827986432
bytes_used 1796020310016
sectorsize 4096
nodesize 16384
leafsize (deprecated) 16384
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x169
( MIXED_BACKREF |
COMPRESS_LZO |
BIG_METADATA |
EXTENDED_IREF |
SKINNY_METADATA )
cache_generation 68921
uuid_tree_generation 68921
dev_item.uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
dev_item.fsid 8c6746f0-944a-45c5-90f3-622724d15998 [match]
dev_item.type 0
dev_item.total_bytes 7979827986432
dev_item.bytes_used 2086305529856
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0
sys_chunk_array[2048]:
item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 22020096)
length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
io_align 65536 io_width 65536 sector_size 4096
num_stripes 2 sub_stripes 0
stripe 0 devid 1 offset 22020096
dev_uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
stripe 1 devid 1 offset 30408704
dev_uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
backup_roots[4]:
backup 0:
backup_tree_root: 631357440 gen: 68920 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 628768768 gen: 68920 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 629637120 gen: 68920 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
backup 1:
backup_tree_root: 629129216 gen: 68921 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 629473280 gen: 68921 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 631275520 gen: 68921 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
backup 2:
backup_tree_root: 628654080 gen: 68918 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 625639424 gen: 68918 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 625393664 gen: 68918 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796018110464
backup_num_devices: 1
backup 3:
backup_tree_root: 632176640 gen: 68919 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 629129216 gen: 68919 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 631013376 gen: 68919 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
superblock: bytenr=67108864, device=/dev/md42
---------------------------------------------------------
csum_type 0 (crc32c)
csum_size 4
csum 0xa33f041a [match]
bytenr 67108864
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
fsid 8c6746f0-944a-45c5-90f3-622724d15998
label
generation 68921
root 629129216
sys_array_size 129
chunk_root_generation 68764
root_level 1
chunk_root 22052864
chunk_root_level 1
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 7979827986432
bytes_used 1796020310016
sectorsize 4096
nodesize 16384
leafsize (deprecated) 16384
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x169
( MIXED_BACKREF |
COMPRESS_LZO |
BIG_METADATA |
EXTENDED_IREF |
SKINNY_METADATA )
cache_generation 68921
uuid_tree_generation 68921
dev_item.uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
dev_item.fsid 8c6746f0-944a-45c5-90f3-622724d15998 [match]
dev_item.type 0
dev_item.total_bytes 7979827986432
dev_item.bytes_used 2086305529856
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0
sys_chunk_array[2048]:
item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 22020096)
length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
io_align 65536 io_width 65536 sector_size 4096
num_stripes 2 sub_stripes 0
stripe 0 devid 1 offset 22020096
dev_uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
stripe 1 devid 1 offset 30408704
dev_uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
backup_roots[4]:
backup 0:
backup_tree_root: 631357440 gen: 68920 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 628768768 gen: 68920 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 629637120 gen: 68920 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
backup 1:
backup_tree_root: 629129216 gen: 68921 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 629473280 gen: 68921 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 631275520 gen: 68921 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
backup 2:
backup_tree_root: 628654080 gen: 68918 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 625639424 gen: 68918 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 625393664 gen: 68918 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796018110464
backup_num_devices: 1
backup 3:
backup_tree_root: 632176640 gen: 68919 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 629129216 gen: 68919 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 631013376 gen: 68919 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
superblock: bytenr=274877906944, device=/dev/md42
---------------------------------------------------------
csum_type 0 (crc32c)
csum_size 4
csum 0x5eb8522b [match]
bytenr 274877906944
flags 0x1
( WRITTEN )
magic _BHRfS_M [match]
fsid 8c6746f0-944a-45c5-90f3-622724d15998
label
generation 68921
root 629129216
sys_array_size 129
chunk_root_generation 68764
root_level 1
chunk_root 22052864
chunk_root_level 1
log_root 0
log_root_transid 0
log_root_level 0
total_bytes 7979827986432
bytes_used 1796020310016
sectorsize 4096
nodesize 16384
leafsize (deprecated) 16384
stripesize 4096
root_dir 6
num_devices 1
compat_flags 0x0
compat_ro_flags 0x0
incompat_flags 0x169
( MIXED_BACKREF |
COMPRESS_LZO |
BIG_METADATA |
EXTENDED_IREF |
SKINNY_METADATA )
cache_generation 68921
uuid_tree_generation 68921
dev_item.uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
dev_item.fsid 8c6746f0-944a-45c5-90f3-622724d15998 [match]
dev_item.type 0
dev_item.total_bytes 7979827986432
dev_item.bytes_used 2086305529856
dev_item.io_align 4096
dev_item.io_width 4096
dev_item.sector_size 4096
dev_item.devid 1
dev_item.dev_group 0
dev_item.seek_speed 0
dev_item.bandwidth 0
dev_item.generation 0
sys_chunk_array[2048]:
item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 22020096)
length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
io_align 65536 io_width 65536 sector_size 4096
num_stripes 2 sub_stripes 0
stripe 0 devid 1 offset 22020096
dev_uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
stripe 1 devid 1 offset 30408704
dev_uuid 14a50874-e2ab-49ce-92bd-135d7adb6de9
backup_roots[4]:
backup 0:
backup_tree_root: 631357440 gen: 68920 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 628768768 gen: 68920 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 629637120 gen: 68920 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
backup 1:
backup_tree_root: 629129216 gen: 68921 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 629473280 gen: 68921 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 631275520 gen: 68921 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
backup 2:
backup_tree_root: 628654080 gen: 68918 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 625639424 gen: 68918 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 625393664 gen: 68918 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796018110464
backup_num_devices: 1
backup 3:
backup_tree_root: 632176640 gen: 68919 level: 1
backup_chunk_root: 22052864 gen: 68764 level: 1
backup_extent_root: 629129216 gen: 68919 level: 2
backup_fs_root: 30457856 gen: 6359 level: 0
backup_dev_root: 30474240 gen: 68764 level: 1
backup_csum_root: 631013376 gen: 68919 level: 2
backup_total_bytes: 7979827986432
backup_bytes_used: 1796020310016
backup_num_devices: 1
* Re: issue mounting volume
2019-01-15 16:33 issue mounting volume Christian Schneider
2019-01-15 22:13 ` Chris Murphy
@ 2019-01-17 0:50 ` Qu Wenruo
2019-01-17 10:42 ` Christian Schneider
1 sibling, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2019-01-17 0:50 UTC (permalink / raw)
To: Christian Schneider, linux-btrfs
[-- Attachment #1.1: Type: text/plain, Size: 2870 bytes --]
On 2019/1/16 12:33 AM, Christian Schneider wrote:
>
> Hello all, after a power failure I have issues mounting a btrfs volume:
>
> mount /dev/md42
> mount: /home: wrong fs type, bad option, bad superblock on /dev/md42,
> missing codepage or helper program, or other error
>
> dmesg
> [...]
> [ 4322.061000] BTRFS info (device md42): use lzo compression, level 0
> [ 4322.061004] BTRFS info (device md42): disk space caching is enabled
> [ 4322.061005] BTRFS info (device md42): has skinny extents
> [ 4323.016007] BTRFS error (device md42): parent transid verify failed
> on 448888832 wanted 68773 found 68768
> [ 4323.025656] BTRFS error (device md42): parent transid verify failed
> on 448888832 wanted 68773 found 68771
Transid error, and furthermore the two copies point to different leaves.
So in short, your fs is screwed up.
You could try this patch:
https://patchwork.kernel.org/patch/10738583/
Then mount with "ro,skip_bg" mount options.
Or go btrfs-restore.
> [ 4323.025665] BTRFS error (device md42): failed to read block groups: -5
> [ 4323.036088] BTRFS error (device md42): open_ctree failed
>
> adding -o ro,usebackuproot doesn't change anything, same mount error,
> same error messages in dmesg.
>
> Also tried btrfs check with this error.
> btrfs check /dev/md42
> Opening filesystem to check...
> parent transid verify failed on 448888832 wanted 68773 found 68768
> parent transid verify failed on 448888832 wanted 68773 found 68768
> parent transid verify failed on 448888832 wanted 68773 found 68771
> parent transid verify failed on 448888832 wanted 68773 found 68771
> Ignoring transid failure
> leaf parent key incorrect 448888832
> ERROR: cannot open file system
<snip>
> [...]
>
> The last line repeats with different block numbers with decreasing gen
> and level sometimes 0.
>
> A simple
> btrfs filesystem show /dev/md42
> Label: none uuid: 8c6746f0-944a-45c5-90f3-622724d15998
> Total devices 1 FS bytes used 1.63TiB
> devid 1 size 7.26TiB used 1.90TiB path /dev/md42
>
> for me, seems to be ok.
>
> Is there something I could try to recover from this?
> Any help is welcome, as it happened during backup creation, and the
> backup volume suffers from the same issue.
>
> Additional info of system:
> uname -a
> uname -a
> Linux jane 4.19.6-gentoo #1 SMP PREEMPT Sat Dec 15 13:26:24 CET 2018
> x86_64 Intel(R) Core(TM) i7-4785T CPU @ 2.20GHz GenuineIntel GNU/Linux
I'm more interested in the history of the fs.
Was the fs created or modified by some older kernel?
Especially considering you're using Gentoo, where every kernel update
needs a time-consuming compile, it may have been caused by some older kernel.
Thanks,
Qu
>
> btrfs --version
> btrfs-progs v4.19.1
>
> BR, Christian
[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]
* Re: issue mounting volume
2019-01-17 0:50 ` Qu Wenruo
@ 2019-01-17 10:42 ` Christian Schneider
2019-01-17 11:42 ` Qu Wenruo
0 siblings, 1 reply; 12+ messages in thread
From: Christian Schneider @ 2019-01-17 10:42 UTC (permalink / raw)
To: Qu Wenruo, linux-btrfs
>
> You could try this patch:
> https://patchwork.kernel.org/patch/10738583/
Do you know which kernel the patch is based on? Can I apply it to
4.19, or do I need something more recent? If you don't know, I can
just try it out.
> Or go btrfs-restore.
I already tried a dry run:
btrfs restore -D /dev/md42 /
This is a dry-run, no files are going to be restored
We have looped trying to restore files in <filename> too many times to
be making progress, stopping
parent transid verify failed on 448937984 wanted 68772 found 68770
parent transid verify failed on 448937984 wanted 68772 found 68770
We have looped trying to restore files in <filename> too many times to
be making progress, stopping
[...]
parent transid verify failed on 450281472 wanted 32623 found 30451
parent transid verify failed on 450281472 wanted 32623 found 30451
We have looped trying to restore files in <filename> too many times to
be making progress, stopping
[the two lines above repeat 8 times in total]
The filenames are actually directories, but as far as I understand,
no files can be restored at all.
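As an aside, the repetitive dry-run output above can be collapsed with a
throwaway helper; `summarize` is a hypothetical name, not part of
btrfs-progs, and the sample lines are copied from the output above:

```shell
# Tally btrfs-restore dry-run noise: count the "looped" warnings and
# print each distinct parent-transid failure once with a repeat count.
summarize() {
    awk '
        /looped trying to restore/     { looped++ }
        /parent transid verify failed/ { seen[$0]++ }
        END {
            print "looped warnings:", looped + 0
            for (l in seen) print seen[l] "x " l
        }'
}

# Sample lines from the dry run above:
printf '%s\n' \
  "We have looped trying to restore files in <filename> too many times to" \
  "parent transid verify failed on 448937984 wanted 68772 found 68770" \
  "parent transid verify failed on 448937984 wanted 68772 found 68770" \
  | summarize
```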
> I'm more interested in the history of the fs.
>
> Did the fs get created/modified by some older kernel?
> Especially considering you're using Gentoo, where every kernel update
> needs a time-consuming compile, it may be caused by some older kernel.
Yes, the filesystem is "older", though I don't really know how old; I
would guess somewhere between 1 and 2 years, and I update kernels every
1-2 months. Unfortunately I can't give better details of the creation of
the fs.
BR, Christian
* Re: issue mounting volume
2019-01-17 10:42 ` Christian Schneider
@ 2019-01-17 11:42 ` Qu Wenruo
2019-01-17 13:54 ` Christian Schneider
0 siblings, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2019-01-17 11:42 UTC (permalink / raw)
To: Christian Schneider, linux-btrfs
[-- Attachment #1.1: Type: text/plain, Size: 4135 bytes --]
On 2019/1/17 6:42 PM, Christian Schneider wrote:
>>
>> You could try this patch:
>> https://patchwork.kernel.org/patch/10738583/
>
> Do you know which kernel is needed as a base for the patch? Can I apply
> it to 4.19, or do I need something more recent? If you don't know, I can
> just try it out.
My base is v5.0-rc1.
Although I think there shouldn't be too many conflicts for older kernels.
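For reference, the usual way to apply a patchwork patch is `git am` on
its mbox. The sketch below shows that workflow in a self-contained
throwaway repo with a locally generated patch standing in for the real
kernel tree and the patchwork download:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
echo base > file.c && git add file.c && git commit -qm base
echo fix >> file.c && git commit -qam fix
# Export the top commit as a patch; against patchwork you would instead:
#   curl -L https://patchwork.kernel.org/patch/10738583/mbox/ > ../fix.mbox
git format-patch -q -1 -o ..
git reset -q --hard HEAD~1      # back to the unpatched base
git am -q ../0001-fix.patch     # same command you'd use for fix.mbox
git log -1 --format=%s          # prints the applied commit's subject
```

If `git am` reports conflicts on an older base, `git am --abort` and
rebasing the patch by hand is the usual fallback.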
Thanks,
Qu
>> Or go btrfs-restore.
>
> I already tried a dry run:
> btrfs restore -D /dev/md42 /
> [...]
* Re: issue mounting volume
2019-01-17 11:42 ` Qu Wenruo
@ 2019-01-17 13:54 ` Christian Schneider
2019-01-17 14:12 ` Qu Wenruo
0 siblings, 1 reply; 12+ messages in thread
From: Christian Schneider @ 2019-01-17 13:54 UTC (permalink / raw)
To: Qu Wenruo, linux-btrfs
>>
>> Do you know which kernel is needed as a base for the patch? Can I apply
>> it to 4.19, or do I need something more recent? If you don't know, I can
>> just try it out.
>
> My base is v5.0-rc1.
>
> Although I think there shouldn't be too many conflicts for older kernels.
>
I could apply the patch to 4.19, but compilation failed. So I went
straight to master, where it worked, and I could even mount the fs now.
Your patch also has a positive impact on free space:
df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/md42 7.3T 1.9T 1.8P 1% /home
1.8PB available space should be enough for the next few years :D
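The 1.8P figure is clearly bogus accounting rather than real space:
available can never exceed the device size. A throwaway check (not a
btrfs tool; the GiB conversions of the thread's numbers are mine) makes
the inconsistency explicit:

```shell
# df numbers are consistent only if avail <= size and used + avail <= size
# (all three arguments in the same unit).
df_consistent() {
    awk -v size="$1" -v used="$2" -v avail="$3" \
        'BEGIN { print (avail <= size && used + avail <= size) ? "ok" : "bogus" }'
}

df_consistent 7475 1945 1887437   # 7.3T / 1.9T / 1.8P in GiB -> bogus
df_consistent 7475 1945 5400      # a plausible avail value   -> ok
```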
Thank you very much so far!!!
So, for further steps: as far as I understand, there is no possibility
to repair the fs? I just get what data I can and recreate it from
scratch?
BR, Christian
* Re: issue mounting volume
2019-01-17 13:54 ` Christian Schneider
@ 2019-01-17 14:12 ` Qu Wenruo
2019-01-17 14:38 ` Christian Schneider
0 siblings, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2019-01-17 14:12 UTC (permalink / raw)
To: Christian Schneider, linux-btrfs
On 2019/1/17 9:54 PM, Christian Schneider wrote:
>> [...]
> I could apply the patch on 4.19, but compilation failed. So I went
> straight to master, where it worked, and I could even mount the fs now.
>
> Your patch also has a positive impact on free space:
>
> df -h /home
> Filesystem Size Used Avail Use% Mounted on
> /dev/md42 7.3T 1.9T 1.8P 1% /home
>
> 1.8PB available space should be enough for the next few years :D
>
> Thank you very much so far!!!
>
> So, for further steps: as far as I understand, there is no possibility
> to repair the fs?
Unfortunately, no possibility.
The corruption of the extent tree is pretty nasty.
Your metadata CoW is completely broken.
It really doesn't make much sense to repair it, and I don't believe the
repaired result would be any good.
> I just get what data I can and recreate it from scratch?
Yep.
And just a general tip: after any unexpected power loss, do a btrfs
check --readonly before doing an RW mount.
It would help us detect and locate the possible cause of any corruption
before it causes more damage.
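That advice can be wrapped into a tiny guard; `safe_rw_mount` is a
hypothetical helper name (in practice you would also want to keep the
check output for inspection rather than rely on the exit code alone):

```shell
# Only RW-mount the device if a read-only check passes; otherwise stop,
# so the damage can be inspected before it spreads.
safe_rw_mount() {
    dev=$1; mnt=$2
    if btrfs check --readonly "$dev"; then
        mount "$dev" "$mnt"
    else
        echo "check failed: inspect $dev (mount -o ro at most) before any RW mount" >&2
        return 1
    fi
}
```

Usage would be e.g. `safe_rw_mount /dev/md42 /home` from a boot or
recovery script.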
Thanks,
Qu
>
> BR, CHristian
* Re: issue mounting volume
2019-01-17 14:12 ` Qu Wenruo
@ 2019-01-17 14:38 ` Christian Schneider
2019-01-17 14:54 ` Qu Wenruo
0 siblings, 1 reply; 12+ messages in thread
From: Christian Schneider @ 2019-01-17 14:38 UTC (permalink / raw)
To: Qu Wenruo, linux-btrfs
May I ask for a few technical details about what happened / what was
wrong? I don't really know anything about btrfs internals, but would
like to gain a little insight. Also, if there is an explanation online
that you can point me to, that would be nice.
BR, Christian
On 17.01.19 at 15:12, Qu Wenruo wrote:
> [...]
* Re: issue mounting volume
2019-01-17 14:38 ` Christian Schneider
@ 2019-01-17 14:54 ` Qu Wenruo
0 siblings, 0 replies; 12+ messages in thread
From: Qu Wenruo @ 2019-01-17 14:54 UTC (permalink / raw)
To: Christian Schneider, linux-btrfs
On 2019/1/17 10:38 PM, Christian Schneider wrote:
> May I ask for a little technical details, about what happened/was wrong?
(This may be pretty similar to what I explained before)
parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68771
These two lines are the root cause.
Your tree block at 448888832 doesn't have the transid its parent expects.
Normally this means either
a) One tree block overwrites an existing tree block
This means btrfs metadata CoW is screwed up completely.
Possible causes are bad free space cache/tree or corrupted extent
tree.
(Thus metadata backup profiles like DUP/RAID1/RAID10/RAID5/6 provide
no help at all.)
b) The tree block at 448888832 is heavily damaged
Normally this means the generation would be some garbage value, which
is not the case here.
So a) should be your case.
But unlike the normal a) case, your two metadata copies point to two
different tree blocks, as the generations are completely different.
So it looks like a power loss happened after one metadata copy was
written.
And since the power loss happened, one of the 3 generations should be
the problem.
My guess is that the last transaction, 68773, which was writing the
parent of 448888832, is causing the problem.
But that doesn't explain everything, especially why one copy differs
from the other.
So I'm saying your fs may have been corrupted all along, but as long as
no power loss happens, the seed of destruction doesn't grow.
But when a power loss happens, the already screwed-up extent tree/space
cache/space tree can destroy the whole fs, as btrfs is way too dependent
on metadata CoW to protect itself; and if the basis of metadata CoW is
screwed up, there is nothing you can do but salvage your data.
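To make the "two different tree blocks" point concrete, the generation
lag of each copy can be read straight off the error lines (sample lines
copied from the report at the top of the thread):

```shell
# Each line names a tree block, the generation the parent wanted, and
# the generation actually found; wanted - found is how stale the copy is.
printf '%s\n' \
  "parent transid verify failed on 448888832 wanted 68773 found 68768" \
  "parent transid verify failed on 448888832 wanted 68773 found 68771" \
| awk '{
    for (i = 1; i <= NF; i++) {
        if ($i == "wanted") wanted = $(i + 1)
        if ($i == "found")  found  = $(i + 1)
    }
    # $6 is the tree block bytenr in this message format
    print "block " $6 ": copy is " wanted - found " generations behind"
  }'
```

The two copies lag by different amounts (5 vs 2 generations), which is
exactly the oddity described above.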
Thanks,
Qu
> I don't really know anything about btrfs internals, but would like
> to gain a little insight. Also, if there is an explanation online that
> you can point me to, that would be nice.
> BR, Christian
>
> On 17.01.19 at 15:12, Qu Wenruo wrote:
>> [...]
end of thread, other threads:[~2019-01-17 14:54 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-15 16:33 issue mounting volume Christian Schneider
2019-01-15 22:13 ` Chris Murphy
2019-01-16 18:22 ` Christian Schneider
2019-01-17 0:12 ` Chris Murphy
2019-01-17 10:33 ` Christian Schneider
2019-01-17 0:50 ` Qu Wenruo
2019-01-17 10:42 ` Christian Schneider
2019-01-17 11:42 ` Qu Wenruo
2019-01-17 13:54 ` Christian Schneider
2019-01-17 14:12 ` Qu Wenruo
2019-01-17 14:38 ` Christian Schneider
2019-01-17 14:54 ` Qu Wenruo