* Support demand on Btrfs crashed fs.
@ 2021-03-27 11:09 Thierry Testeur
  2021-03-31  0:17 ` Thierry Testeur
  2021-03-31  6:13 ` Chris Murphy
  0 siblings, 2 replies; 8+ messages in thread
From: Thierry Testeur @ 2021-03-27 11:09 UTC (permalink / raw)
  To: linux-btrfs

Dear btrfs support community,

After a bad power event, my 9-year-old Btrfs filesystem has crashed.
This Btrfs partition is on an 8-disk mdadm array.
Following various recovery options found on forums, I have already
tried:

btrfs check --repair
btrfs check --init-extent-tree
btrfs check --clear-space-cache,......
btrfs rescue super-recover
btrfs rescue chunk-recover
btrfs rescue restore
btrfs rescue chunk-recover (full output on pastebin: https://pastebin.com/9aHewZU4)
[...]
  Chunk: start = 26461360619520, len = 1073741824, type = 1, num_stripes = 0
      Stripes list:
      Block Group: start = 26461360619520, len = 1073741824, flag = 1
      No device extent.
  Chunk: start = 26490351648768, len = 1073741824, type = 4, num_stripes = 0
      Stripes list:
      Block Group: start = 26490351648768, len = 1073741824, flag = 4
      No device extent.

Total Chunks:           13797
  Recoverable:          12233
  Unrecoverable:        1564

Orphan Block Groups:

Orphan Device Extents:
  Device extent: devid = 1, start = 20624466509824, len = 1073741824,
chunk offset = 26284193218560
  Device extent: devid = 1, start = 20625540251648, len = 1073741824,
chunk offset = 26285266960384
  Device extent: devid = 1, start = 20628761477120, len = 1073741824,
chunk offset = 26288488185856
  Device extent: devid = 1, start = 20630908960768, len = 1073741824,
chunk offset = 26290635669504
  Device extent: devid = 1, start = 20640572637184, len = 1073741824,
chunk offset = 26300299345920
  Device extent: devid = 1, start = 20641646379008, len = 1073741824,
chunk offset = 26301373087744
  Device extent: devid = 1, start = 20643793862656, len = 1073741824,
chunk offset = 26303520571392
  Device extent: devid = 1, start = 20645941346304, len = 1073741824,
chunk offset = 26305668055040
  Device extent: devid = 1, start = 20838141132800, len = 1073741824,
chunk offset = 26497867841536
  Device extent: devid = 1, start = 20839214874624, len = 1073741824,
chunk offset = 26498941583360
  Device extent: devid = 1, start = 20840288616448, len = 1073741824,
chunk offset = 26500015325184
  Device extent: devid = 1, start = 20841362358272, len = 1073741824,
chunk offset = 26501089067008
  Device extent: devid = 1, start = 20842436100096, len = 1073741824,
chunk offset = 26502162808832
  Device extent: devid = 1, start = 20844583583744, len = 1073741824,
chunk offset = 26504310292480
  Device extent: devid = 1, start = 20845657325568, len = 1073741824,
chunk offset = 26505384034304
  Device extent: devid = 1, start = 20846731067392, len = 1073741824,
chunk offset = 26506457776128
  Device extent: devid = 1, start = 20847804809216, len = 1073741824,
chunk offset = 26507531517952
  Device extent: devid = 1, start = 20848878551040, len = 1073741824,
chunk offset = 26508605259776
  Device extent: devid = 1, start = 20849952292864, len = 1073741824,
chunk offset = 26509679001600
  Device extent: devid = 1, start = 20851026034688, len = 1073741824,
chunk offset = 26510752743424
  Device extent: devid = 1, start = 20852099776512, len = 1073741824,
chunk offset = 26511826485248
  Device extent: devid = 1, start = 20856394743808, len = 1073741824,
chunk offset = 26516121452544
  Device extent: devid = 1, start = 20858542227456, len = 1073741824,
chunk offset = 26518268936192
  Device extent: devid = 1, start = 20862837194752, len = 1073741824,
chunk offset = 26522563903488
  Device extent: devid = 1, start = 20867132162048, len = 1073741824,
chunk offset = 26526858870784
  Device extent: devid = 1, start = 20868205903872, len = 1073741824,
chunk offset = 26527932612608
  Device extent: devid = 1, start = 20869279645696, len = 1073741824,
chunk offset = 26529006354432
  Device extent: devid = 1, start = 20870353387520, len = 1073741824,
chunk offset = 26530080096256
  Device extent: devid = 1, start = 20875722096640, len = 1073741824,
chunk offset = 26535448805376
  Device extent: devid = 1, start = 20876795838464, len = 1073741824,
chunk offset = 26536522547200
  Device extent: devid = 1, start = 20877869580288, len = 1073741824,
chunk offset = 26537596289024
  Device extent: devid = 1, start = 20882164547584, len = 1073741824,
chunk offset = 26541891256320
  Device extent: devid = 1, start = 20883238289408, len = 1073741824,
chunk offset = 26542964998144
  Device extent: devid = 1, start = 20884312031232, len = 1073741824,
chunk offset = 26544038739968
  Device extent: devid = 1, start = 20885385773056, len = 1073741824,
chunk offset = 26545112481792

Invalid mapping for 23079040999424-23079041003520, got
23080114061312-23081187803136
Couldn't map the block 23079040999424
Couldn't map the block 23079040999424
bad tree block 23079040999424, bytenr mismatch, want=23079040999424, have=0
Couldn't read tree root
open with broken chunk error
Chunk tree recovery failed



Information required for support:
uname -a:
Linux UBUNTU-SERVER 5.8.0-48-generic #54-Ubuntu SMP Fri Mar 19
14:25:20 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
btrfs --version:
btrfs-progs v5.11
btrfs fi show:
Label: none  uuid: f4f04e16-ce38-4a57-8434-67562a0790bd
        Total devices 1 FS bytes used 24.71TiB
        devid    1 size 18.83TiB used 18.67TiB path /dev/md0
btrfs fi df /:
filesystem not mountable, so no results.
dmesg | grep -i btrfs:
[    3.869542] Btrfs loaded, crc32c=crc32c-intel
[    3.927255] BTRFS: device fsid f4f04e16-ce38-4a57-8434-67562a0790bd
devid 1 transid 524941 /dev/md0 scanned by btrfs (260)
[  667.478169] BTRFS info (device md0): disk space caching is enabled
[  667.722168] BTRFS error (device md0): parent transid verify failed
on 23079040831488 wanted 524940 found 524941
[  667.722181] BTRFS warning (device md0): failed to read root (objectid=2): -5
[  667.742931] BTRFS error (device md0): open_ctree failed


If anyone has any idea whether I have even a small chance of getting
at least some of the data back, I would be very happy, as the last
backup I could find of this massive partition is 5 years old.
Sorry for my English.
Thierry

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Support demand on Btrfs crashed fs.
  2021-03-27 11:09 Support demand on Btrfs crashed fs Thierry Testeur
@ 2021-03-31  0:17 ` Thierry Testeur
  2021-03-31 12:23   ` Lukas Straub
  2021-03-31  6:13 ` Chris Murphy
  1 sibling, 1 reply; 8+ messages in thread
From: Thierry Testeur @ 2021-03-31  0:17 UTC (permalink / raw)
  To: linux-btrfs

Hello,

Can anyone help me with the problem above?
I have tried PhotoRec (even though I know the chances are really poor)
and got some nonsense files, like 2 GB PDFs. Most of them are
unusable, except small files, like JPEG pictures.

thanks for any help.
Thierry

On Sat, 27 Mar 2021 at 12:09, Thierry Testeur
<thierry.testeur@gmail.com> wrote:


* Re: Support demand on Btrfs crashed fs.
  2021-03-27 11:09 Support demand on Btrfs crashed fs Thierry Testeur
  2021-03-31  0:17 ` Thierry Testeur
@ 2021-03-31  6:13 ` Chris Murphy
  2021-03-31 10:22   ` Thierry Testeur
  1 sibling, 1 reply; 8+ messages in thread
From: Chris Murphy @ 2021-03-31  6:13 UTC (permalink / raw)
  To: Thierry Testeur; +Cc: Btrfs BTRFS

I'm going to fill in some details from the multiday conversation with
IRC regulars. We couldn't figure out a way forward.

* WDC Red drives with firmware version 80.00A80, which are strongly
suspected of handling power failure and write caching incorrectly,
and which at least on Btrfs apparently drop writes for critical
metadata pretty much every time.
* A power fail / reset happened
* No snapshots
* --repair and --init-extent-tree may not have done anything because
they didn't complete
* Less than 10% needs to be recovered and it's accepted that the
filesystem can't be repaired. The focus is just on a limited restore,
but we can't get past the transid failures.


zapan@UBUNTU-SERVER:~$ sudo btrfs check --readonly /dev/md0
Opening filesystem to check...
parent transid verify failed on 23079040831488 wanted 524940 found 524941
parent transid verify failed on 23079040831488 wanted 524940 found 524941
Ignoring transid failure
parent transid verify failed on 23079040319488 wanted 524931 found 524939
Ignoring transid failure
Checking filesystem on /dev/md0
UUID: f4f04e16-ce38-4a57-8434-67562a0790bd
[1/7] checking root items
parent transid verify failed on 23079042863104 wanted 423153 found 524931
parent transid verify failed on 23079042863104 wanted 423153 found 524931
Ignoring transid failure
ERROR: child eb corrupted: parent bytenr=23079040999424 item=11 parent
level=2 child bytenr=23079042863104 child level=0
ERROR: failed to repair root items: Input/output error
[2/7] checking extents
parent transid verify failed on 23079042863104 wanted 423153 found 524931
Ignoring transid failure
ERROR: child eb corrupted: parent bytenr=23079040999424 item=11 parent
level=2 child bytenr=23079042863104 child level=0
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space cache
cache and super generation don't match, space cache will be invalidated
[4/7] checking fs roots
root 5 root dir 256 not found
parent transid verify failed on 23079042863104 wanted 423153 found 524931
Ignoring transid failure
ERROR: child eb corrupted: parent bytenr=23079040999424 item=11 parent
level=2 child bytenr=23079042863104 child level=0
ERROR: errors found in fs roots
found 0 bytes used, error(s) found
total csum bytes: 0
total tree bytes: 0
total fs tree bytes: 0
total extent tree bytes: 0
btree space waste bytes: 0
file data blocks allocated: 0
referenced 0

btrfs-find-root doesn't find many options to work with, and all of
them fail with 'btrfs restore -t'.


zapan@UBUNTU-SERVER:~$ sudo btrfs-find-root /dev/md0
parent transid verify failed on 23079040831488 wanted 524940 found 524941
parent transid verify failed on 23079040831488 wanted 524940 found 524941
Ignoring transid failure
parent transid verify failed on 23079040319488 wanted 524931 found 524939
Ignoring transid failure
Superblock thinks the generation is 524941
Superblock thinks the level is 2
Found tree root at 23079040999424 gen 524941 level 2
Well block 23079040327680(gen: 524940 level: 2) seems good, but
generation/level doesn't match, want gen: 524941 level: 2
Well block 23079040389120(gen: 524939 level: 2) seems good, but
generation/level doesn't match, want gen: 524941 level: 2
zapan@UBUNTU-SERVER:~$ sudo btrfs restore -viD -t 23079040389120
/dev/md0 /mnt/raid1/restore/
parent transid verify failed on 23079040389120 wanted 524941 found 524939
parent transid verify failed on 23079040389120 wanted 524941 found 524939
Ignoring transid failure
parent transid verify failed on 23079040323584 wanted 524939 found 524941
parent transid verify failed on 23079040323584 wanted 524939 found 524941
Ignoring transid failure
parent transid verify failed on 23079040319488 wanted 524931 found 524939
Ignoring transid failure
This is a dry-run, no files are going to be restored
Reached the end of the tree searching the directory
zapan@UBUNTU-SERVER:~$ sudo btrfs restore -viD -t 23079040327680
/dev/md0 /mnt/raid1/restore/
parent transid verify failed on 23079040327680 wanted 524941 found 524940
parent transid verify failed on 23079040327680 wanted 524941 found 524940
Ignoring transid failure
parent transid verify failed on 23079040831488 wanted 524940 found 524941
parent transid verify failed on 23079040831488 wanted 524940 found 524941
Ignoring transid failure
parent transid verify failed on 23079040319488 wanted 524931 found 524939
Ignoring transid failure
This is a dry-run, no files are going to be restored
Reached the end of the tree searching the directory





-- 
Chris Murphy


* Re: Support demand on Btrfs crashed fs.
  2021-03-31  6:13 ` Chris Murphy
@ 2021-03-31 10:22   ` Thierry Testeur
  0 siblings, 0 replies; 8+ messages in thread
From: Thierry Testeur @ 2021-03-31 10:22 UTC (permalink / raw)
  To: Chris Murphy, linux-btrfs

WDC Red with firmware version 80.00A80: exactly, 8 of them in my
array, and I had forgotten to disable the write cache :/
Everything you described matches my case.

I understand it is a very hard situation in which to expect a massive
data recovery; even if the data is still there, the fs structure is
(partially?) destroyed.

I don't have as good a knowledge of the Btrfs structure as I have of
some older filesystems, like AFS (Amiga File System) and FAT, because
with children I have less time to dig into it (like everyone else, I
only have 24 hours in a day ;) ).

As I said, if anyone has encountered a similar crash, or has any
idea, I'm open to every suggestion. The last tape backup I have is 5
years old, and my tape library crashed when I restored it onto a new
disk, so I'm really stuck.

For the future RAID (3x 14 TB WD Gold, for a start), I wish to stay
on mdadm RAID 5 and Btrfs (because I don't have enough information on
Btrfs RAID, and it seems it can still have some issues).
What is the best practice to prevent this type of corruption/crash as
much as possible? Snapshots stored on a different disk and/or external
backup, <put any idea here>?

This is another question, but I think this time I have to minimize
the risk for this future partition. I still have to do some digging on
the internet on the subject, but I'm convinced that this list is a far
better place to ask for advice than forums ;)

Thanks for all your time and for sharing.
Thierry

On Wed, 31 Mar 2021 at 08:14, Chris Murphy <lists@colorremedies.com> wrote:


* Re: Support demand on Btrfs crashed fs.
  2021-03-31  0:17 ` Thierry Testeur
@ 2021-03-31 12:23   ` Lukas Straub
  2021-03-31 21:44     ` Thierry Testeur
  0 siblings, 1 reply; 8+ messages in thread
From: Lukas Straub @ 2021-03-31 12:23 UTC (permalink / raw)
  To: Thierry Testeur; +Cc: linux-btrfs

On Wed, 31 Mar 2021 02:17:48 +0200
Thierry Testeur <thierry.testeur@gmail.com> wrote:

> Hello,
> 
> Can anyone help me with the problem above?
> I have tried PhotoRec (even though I know the chances are really poor)
> and got some nonsense files, like 2 GB PDFs. Most of them are
> unusable, except small files, like JPEG pictures.
> 
> thanks for any help.
> Thierry

Weird, I would have expected photorec to recover more. Did you have compression enabled?

Regards,
Lukas Straub

-- 




* Re: Support demand on Btrfs crashed fs.
  2021-03-31 12:23   ` Lukas Straub
@ 2021-03-31 21:44     ` Thierry Testeur
  2021-04-01  9:04       ` Lukas Straub
  0 siblings, 1 reply; 8+ messages in thread
From: Thierry Testeur @ 2021-03-31 21:44 UTC (permalink / raw)
  To: Lukas Straub; +Cc: linux-btrfs

Yep, compression was enabled (from the original fstab, before trying
the restore options): compress=lzo

Best regards,
Thierry

On Wed, 31 Mar 2021 at 14:23, Lukas Straub <lukasstraub2@web.de> wrote:


* Re: Support demand on Btrfs crashed fs.
  2021-03-31 21:44     ` Thierry Testeur
@ 2021-04-01  9:04       ` Lukas Straub
  2021-04-01 13:59         ` Thierry Testeur
  0 siblings, 1 reply; 8+ messages in thread
From: Lukas Straub @ 2021-04-01  9:04 UTC (permalink / raw)
  To: Thierry Testeur; +Cc: linux-btrfs

On Wed, 31 Mar 2021 23:44:28 +0200
Thierry Testeur <thierry.testeur@gmail.com> wrote:

> Yep, compression enabled (original fstab before having tried restore
> options): compress=lzo

Okay, that explains why photorec couldn't recover a lot. If you want to
get your hands dirty, I guess you could write a program that does the
following:
For every 4k block/address on the filesystem, attempt to decompress it
using the btrfs lzo implementation (see fs/btrfs/lzo.c and
lib/lzo/lzo1x_decompress_safe.c in the kernel).
Do some sanity checks:

Check that the length of the compressed data is reasonable. The
decompressed size of a compressed extent can be at most 128k, so,
considering that data which doesn't compress well may take up more
space in compressed form, the maximum length of the compressed data
should be a bit larger than 128k. I'd say 256k.

Also check that the length of the segments is reasonable, etc.

If all sanity checks passed and decompression worked, look at the
decompressed size:

If it is exactly 128k, it is likely part of a bigger file. Append all
such data to an output file.

If it is below 128k, chances are pretty good you just recovered a
small file; save it directly somewhere. You can use the file(1)
utility later to figure out the file format.
It could also be the last part of a large file, so always append it to
the output file as well, padded with zeroes so the end is aligned to
4k.

Finally, you can run photorec on the output file that you appended
everything to, to rescue files that are larger than 128k.
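The scan-and-sanity-check loop described above can be sketched in code. The block below is a hypothetical sketch only (the function names and exact bounds are mine, not from this thread), assuming the standard btrfs lzo on-disk layout: a 4-byte little-endian total length, then a sequence of [4-byte length][compressed data] segments. Actual payload decompression, which would require an lzo1x implementation such as the third-party python-lzo package, is deliberately left out:

```python
import struct

PAGE = 4096              # scan granularity Lukas suggests
MAX_COMPRESSED = 256 * 1024  # Lukas's generous bound on the compressed form

def lzo1x_worst(n):
    # worst-case lzo1x output size for n input bytes
    return n + n // 16 + 64 + 3

def parse_lzo_extent(buf):
    """Sanity-check one candidate extent in the btrfs lzo on-disk layout:
    a 4-byte little-endian total length, then [4-byte length][data]
    segments. Returns the list of compressed segment payloads, or None
    if any of the sanity checks fail."""
    if len(buf) < 8:
        return None
    total = struct.unpack_from("<I", buf, 0)[0]
    if total < 8 or total > min(len(buf), MAX_COMPRESSED):
        return None
    off, segments = 4, []
    while off < total:
        if off + 4 > total:
            return None
        seg_len = struct.unpack_from("<I", buf, off)[0]
        off += 4
        # each segment holds at most one compressed 4k page
        if seg_len == 0 or seg_len > lzo1x_worst(PAGE) or off + seg_len > total:
            return None
        segments.append(buf[off:off + seg_len])
        off += seg_len
    return segments

# A recovery scan would then walk the device 4k at a time, call
# parse_lzo_extent() on each block, decompress the surviving segments
# with an lzo1x decompressor (e.g. the third-party python-lzo package,
# not used here), append 128k-sized results to one big output file for
# a later photorec pass, and save smaller results as individual files.
```

The parser alone is enough to cheaply discard most random data before attempting any decompression, which keeps the full-device scan tractable.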

I wish you luck.

Regards,
Lukas Straub




-- 




* Re: Support demand on Btrfs crashed fs.
  2021-04-01  9:04       ` Lukas Straub
@ 2021-04-01 13:59         ` Thierry Testeur
  0 siblings, 0 replies; 8+ messages in thread
From: Thierry Testeur @ 2021-04-01 13:59 UTC (permalink / raw)
  To: Lukas Straub; +Cc: linux-btrfs

Thanks for your reply, but I think writing this type of program is
far beyond me. I haven't really coded since the Amiga 68k assembly
days, so... :/

I appreciate your efforts in trying to help me, but I can't handle
this type of solution.

I will let this RAID rest in peace until I find a solution, maybe with
future evolutions of the Btrfs tools.

I wish you a nice day,
Thierry

On Thu, 1 Apr 2021 at 11:04, Lukas Straub <lukasstraub2@web.de> wrote:

