* Trying to recover data from SSD
@ 2021-08-10  4:41 Konstantin Svist
  2021-08-10  5:24 ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-10  4:41 UTC (permalink / raw)
  To: Btrfs BTRFS

Not sure exactly when it stopped working, possibly after a power outage.
I was able to pull most of a snapshot with btrfs restore -s -- but it's
months old and I want the more recent files.
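
(For reference, a restore run of that kind looks roughly like the
following; the -v flag and the target directory are only examples:

# mkdir -p /mnt/recovery
# btrfs restore -s -v /dev/sdb3 /mnt/recovery
)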


Testing the SSD for bad sectors, but nothing found so far.
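
(A read-only surface scan plus a SMART health check is the sort of test
meant here, e.g.:

# smartctl -a /dev/sdb
# badblocks -sv /dev/sdb
)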


While trying to mount:
[442587.465598] BTRFS info (device sdb3): allowing degraded mounts
[442587.465602] BTRFS info (device sdb3): disk space caching is enabled
[442587.465603] BTRFS info (device sdb3): has skinny extents
[442587.522301] BTRFS error (device sdb3): bad tree block start, want
952483840 have 0
[442587.522867] BTRFS error (device sdb3): bad tree block start, want
952483840 have 0
[442587.522876] BTRFS error (device sdb3): failed to read block groups: -5
[442587.523520] BTRFS error (device sdb3): open_ctree failed
[442782.661849] BTRFS error (device sdb3): unrecognized mount option
'rootflags=recovery'
[442782.661926] BTRFS error (device sdb3): open_ctree failed

# btrfs-find-root /dev/sdb3
ERROR: failed to read block groups: Input/output error
Superblock thinks the generation is 166932
Superblock thinks the level is 1
Found tree root at 787070976 gen 166932 level 1
Well block 786399232(gen: 166931 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 781172736(gen: 166930 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 778108928(gen: 166929 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 100696064(gen: 166928 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 99565568(gen: 166927 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 97599488(gen: 166926 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 91701248(gen: 166925 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 89620480(gen: 166924 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 86818816(gen: 166923 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 84197376(gen: 166922 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 76398592(gen: 166921 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 72400896(gen: 166920 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 63275008(gen: 166919 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 60080128(gen: 166918 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 58032128(gen: 166917 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 55689216(gen: 166916 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 52264960(gen: 166915 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 49758208(gen: 166914 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 48300032(gen: 166913 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 45350912(gen: 166912 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 40337408(gen: 166911 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 71172096(gen: 166846 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 61210624(gen: 166843 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 55492608(gen: 166840 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 36044800(gen: 166829 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 34095104(gen: 166828 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 33046528(gen: 166827 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 31014912(gen: 166826 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 30556160(gen: 166825 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 777011200(gen: 166822 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 766672896(gen: 166821 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 690274304(gen: 166820 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 175046656(gen: 166819 level: 1) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 766017536(gen: 166813 level: 0) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 765739008(gen: 166813 level: 0) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
Well block 32604160(gen: 152478 level: 0) seems good, but
generation/level doesn't match, want gen: 166932 level: 1
# btrfs check /dev/sdb3
Opening filesystem to check...
checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
bad tree block 952483840, bytenr mismatch, want=952483840, have=0
ERROR: failed to read block groups: Input/output error
ERROR: cannot open file system


# uname -a
Linux fry 5.13.6-200.fc34.x86_64 #1 SMP Wed Jul 28 15:31:21 UTC 2021
x86_64 x86_64 x86_64 GNU/Linux
# btrfs --version
btrfs-progs v5.13.1
# btrfs fi show /dev/sdb3
Label: none  uuid: 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
    Total devices 1 FS bytes used 171.92GiB
    devid    1 size 472.10GiB used 214.02GiB path /dev/sdb3




* Re: Trying to recover data from SSD
  2021-08-10  4:41 Trying to recover data from SSD Konstantin Svist
@ 2021-08-10  5:24 ` Qu Wenruo
       [not found]   ` <CADQtc0=GDa-v_byewDmUHqr-TrX_S734ezwhLYL9OSkX-jcNOw@mail.gmail.com>
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-10  5:24 UTC (permalink / raw)
  To: Konstantin Svist, Btrfs BTRFS



On 2021/8/10 12:41 PM, Konstantin Svist wrote:
> Not sure exactly when it stopped working, possibly had a power outage..
> I was able to pull most of a snapshot with btrfs restore -s -- but it's
> months old and I want the more recent files from.
>
>
> Testing the SSD for bad sectors, but nothing so far
>
>
> While trying to mount:
> [442587.465598] BTRFS info (device sdb3): allowing degraded mounts
> [442587.465602] BTRFS info (device sdb3): disk space caching is enabled
> [442587.465603] BTRFS info (device sdb3): has skinny extents
> [442587.522301] BTRFS error (device sdb3): bad tree block start, want
> 952483840 have 0
> [442587.522867] BTRFS error (device sdb3): bad tree block start, want
> 952483840 have 0

Some metadata is completely lost.

Mind sharing the hardware model? Maybe it's some known bad hardware.

Just a small note: all filesystems (including btrfs) should survive a
power loss, as long as the disk follows the FLUSH/FUA requirements
properly.

> [442587.522876] BTRFS error (device sdb3): failed to read block groups: -5
> [442587.523520] BTRFS error (device sdb3): open_ctree failed
> [442782.661849] BTRFS error (device sdb3): unrecognized mount option
> 'rootflags=recovery'
> [442782.661926] BTRFS error (device sdb3): open_ctree failed

Since the fs is already corrupted, you can try to corrupt the extent
tree root completely; then the "rescue=all" mount option should allow
you to mount the fs RO and grab as much data as you can.
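
(Once the extent tree root is unreadable, the rescue mount itself is
simply the following on a v5.11 or newer kernel, assuming the same
device node as above:

# mount -o ro,rescue=all /dev/sdb3 /mnt
)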

But I doubt it's any better than btrfs-restore.

Thanks,
Qu
>
> # btrfs-find-root /dev/sdb3
> ERROR: failed to read block groups: Input/output error
> Superblock thinks the generation is 166932
> Superblock thinks the level is 1
> Found tree root at 787070976 gen 166932 level 1
> Well block 786399232(gen: 166931 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 781172736(gen: 166930 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 778108928(gen: 166929 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 100696064(gen: 166928 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 99565568(gen: 166927 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 97599488(gen: 166926 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 91701248(gen: 166925 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 89620480(gen: 166924 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 86818816(gen: 166923 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 84197376(gen: 166922 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 76398592(gen: 166921 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 72400896(gen: 166920 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 63275008(gen: 166919 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 60080128(gen: 166918 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 58032128(gen: 166917 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 55689216(gen: 166916 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 52264960(gen: 166915 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 49758208(gen: 166914 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 48300032(gen: 166913 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 45350912(gen: 166912 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 40337408(gen: 166911 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 71172096(gen: 166846 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 61210624(gen: 166843 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 55492608(gen: 166840 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 36044800(gen: 166829 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 34095104(gen: 166828 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 33046528(gen: 166827 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 31014912(gen: 166826 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 30556160(gen: 166825 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 777011200(gen: 166822 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 766672896(gen: 166821 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 690274304(gen: 166820 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 175046656(gen: 166819 level: 1) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 766017536(gen: 166813 level: 0) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 765739008(gen: 166813 level: 0) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> Well block 32604160(gen: 152478 level: 0) seems good, but
> generation/level doesn't match, want gen: 166932 level: 1
> # btrfs check /dev/sdb3
> Opening filesystem to check...
> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
> bad tree block 952483840, bytenr mismatch, want=952483840, have=0
> ERROR: failed to read block groups: Input/output error
> ERROR: cannot open file system
>
>
> # uname -a
> Linux fry 5.13.6-200.fc34.x86_64 #1 SMP Wed Jul 28 15:31:21 UTC 2021
> x86_64 x86_64 x86_64 GNU/Linux
> # btrfs --version
> btrfs-progs v5.13.1
> # btrfs fi show /dev/sdb3
> Label: none  uuid: 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
>      Total devices 1 FS bytes used 171.92GiB
>      devid    1 size 472.10GiB used 214.02GiB path /dev/sdb3
>
>


* Re: Trying to recover data from SSD
       [not found]   ` <CADQtc0=GDa-v_byewDmUHqr-TrX_S734ezwhLYL9OSkX-jcNOw@mail.gmail.com>
@ 2021-08-10  6:56     ` Qu Wenruo
  2021-08-10 16:12       ` Konstantin Svist
  2021-08-11  0:30       ` Zygo Blaxell
  0 siblings, 2 replies; 30+ messages in thread
From: Qu Wenruo @ 2021-08-10  6:56 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS, Zygo Blaxell



On 2021/8/10 2:44 PM, Konstantin Svist wrote:
> It's a Micron MTFDDAK512MBF, firmware M603

CC Zygo to see if he also hits such hardware.

>
> I don't know how to do that (corrupt the extent tree)

Here is the more detailed version:
https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/

Or, if you can re-compile the btrfs kernel module, this patch would
allow you to mount with rescue=all without destroying the existing
extent tree root:
https://patchwork.kernel.org/project/linux-btrfs/patch/20210719054304.181509-1-wqu@suse.com/


> Is there any other way to pull files off of the drive?

Either btrfs-restore or the above rescue=all mount option.

Since btrfs-restore doesn't give you the recent files, I guess the
rescue=all method is the only alternative.

Thanks,
Qu

>
>
> On Mon, Aug 9, 2021, 22:24 Qu Wenruo <quwenruo.btrfs@gmx.com
> <mailto:quwenruo.btrfs@gmx.com>> wrote:
>
>
>
>     On 2021/8/10 下午12:41, Konstantin Svist wrote:
>      > Not sure exactly when it stopped working, possibly had a power
>     outage..
>      > I was able to pull most of a snapshot with btrfs restore -s --
>     but it's
>      > months old and I want the more recent files from.
>      >
>      >
>      > Testing the SSD for bad sectors, but nothing so far
>      >
>      >
>      > While trying to mount:
>      > [442587.465598] BTRFS info (device sdb3): allowing degraded mounts
>      > [442587.465602] BTRFS info (device sdb3): disk space caching is
>     enabled
>      > [442587.465603] BTRFS info (device sdb3): has skinny extents
>      > [442587.522301] BTRFS error (device sdb3): bad tree block start, want
>      > 952483840 have 0
>      > [442587.522867] BTRFS error (device sdb3): bad tree block start, want
>      > 952483840 have 0
>
>     Some metadata is completely lost.
>
>     Mind to share the hardware model? Maybe it's some known bad hardware.
>
>     Just a small note, all filesystems (including btrfs) should survive a
>     power loss, as long as the disk is following the FLUSH/FUA requirement
>     properly.
>
>      > [442587.522876] BTRFS error (device sdb3): failed to read block
>     groups: -5
>      > [442587.523520] BTRFS error (device sdb3): open_ctree failed
>      > [442782.661849] BTRFS error (device sdb3): unrecognized mount option
>      > 'rootflags=recovery'
>      > [442782.661926] BTRFS error (device sdb3): open_ctree failed
>
>     Since the fs is already corrupted, you can try to corrupt extent tree
>     root completely, then "rescue=all" mount option should allow you to
>     mount the fs RO, and grab as much data as you can.
>
>     But I doubt if it's any better than btrfs-restore.
>
>     Thanks,
>     Qu
>      >
>      > # btrfs-find-root /dev/sdb3
>      > ERROR: failed to read block groups: Input/output error
>      > Superblock thinks the generation is 166932
>      > Superblock thinks the level is 1
>      > Found tree root at 787070976 gen 166932 level 1
>      > Well block 786399232(gen: 166931 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 781172736(gen: 166930 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 778108928(gen: 166929 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 100696064(gen: 166928 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 99565568(gen: 166927 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 97599488(gen: 166926 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 91701248(gen: 166925 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 89620480(gen: 166924 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 86818816(gen: 166923 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 84197376(gen: 166922 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 76398592(gen: 166921 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 72400896(gen: 166920 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 63275008(gen: 166919 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 60080128(gen: 166918 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 58032128(gen: 166917 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 55689216(gen: 166916 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 52264960(gen: 166915 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 49758208(gen: 166914 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 48300032(gen: 166913 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 45350912(gen: 166912 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 40337408(gen: 166911 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 71172096(gen: 166846 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 61210624(gen: 166843 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 55492608(gen: 166840 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 36044800(gen: 166829 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 34095104(gen: 166828 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 33046528(gen: 166827 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 31014912(gen: 166826 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 30556160(gen: 166825 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 777011200(gen: 166822 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 766672896(gen: 166821 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 690274304(gen: 166820 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 175046656(gen: 166819 level: 1) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 766017536(gen: 166813 level: 0) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 765739008(gen: 166813 level: 0) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > Well block 32604160(gen: 152478 level: 0) seems good, but
>      > generation/level doesn't match, want gen: 166932 level: 1
>      > # btrfs check /dev/sdb3
>      > Opening filesystem to check...
>      > checksum verify failed on 952483840 wanted 0x00000000 found
>     0xb6bde3e4
>      > checksum verify failed on 952483840 wanted 0x00000000 found
>     0xb6bde3e4
>      > checksum verify failed on 952483840 wanted 0x00000000 found
>     0xb6bde3e4
>      > bad tree block 952483840, bytenr mismatch, want=952483840, have=0
>      > ERROR: failed to read block groups: Input/output error
>      > ERROR: cannot open file system
>      >
>      >
>      > # uname -a
>      > Linux fry 5.13.6-200.fc34.x86_64 #1 SMP Wed Jul 28 15:31:21 UTC 2021
>      > x86_64 x86_64 x86_64 GNU/Linux
>      > # btrfs --version
>      > btrfs-progs v5.13.1
>      > # btrfs fi show /dev/sdb3
>      > Label: none  uuid: 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
>      >      Total devices 1 FS bytes used 171.92GiB
>      >      devid    1 size 472.10GiB used 214.02GiB path /dev/sdb3
>      >
>      >
>


* Re: Trying to recover data from SSD
  2021-08-10  6:56     ` Qu Wenruo
@ 2021-08-10 16:12       ` Konstantin Svist
  2021-08-10 22:24         ` Qu Wenruo
  2021-08-11  0:30       ` Zygo Blaxell
  1 sibling, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-10 16:12 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS, Zygo Blaxell


>> I don't know how to do that (corrupt the extent tree)
>
> There is the more detailed version:
> https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/


So, here's what I get:


# btrfs ins dump-tree -t root /dev/sdb3 |grep -A5 'item 0 key (EXTENT_TREE'

    item 0 key (EXTENT_TREE ROOT_ITEM 0) itemoff 15844 itemsize 439
        generation 166932 root_dirid 0 bytenr 786939904 level 2 refs 1
        lastsnap 0 byte_limit 0 bytes_used 50708480 flags 0x0(none)
        uuid 00000000-0000-0000-0000-000000000000
        drop key (0 UNKNOWN.0 0) level 0
    item 1 key (DEV_TREE ROOT_ITEM 0) itemoff 15405 itemsize 439


# btrfs-map-logical -l 786939904 /dev/sdb3

checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
bad tree block 952483840, bytenr mismatch, want=952483840, have=0
ERROR: failed to read block groups: Input/output error
Open ctree failed



Sooooo.. now what..?


* Re: Trying to recover data from SSD
  2021-08-10 16:12       ` Konstantin Svist
@ 2021-08-10 22:24         ` Qu Wenruo
  2021-08-10 23:21           ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-10 22:24 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS, Zygo Blaxell



On 2021/8/11 12:12 AM, Konstantin Svist wrote:
>
>>> I don't know how to do that (corrupt the extent tree)
>>
>> There is the more detailed version:
>> https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/
>
>
> So, here's what I get:
>
>
> # btrfs ins dump-tree -t root /dev/sdb3 |grep -A5 'item 0 key (EXTENT_TREE'
>
>      item 0 key (EXTENT_TREE ROOT_ITEM 0) itemoff 15844 itemsize 439
>          generation 166932 root_dirid 0 bytenr 786939904 level 2 refs 1
>          lastsnap 0 byte_limit 0 bytes_used 50708480 flags 0x0(none)
>          uuid 00000000-0000-0000-0000-000000000000
>          drop key (0 UNKNOWN.0 0) level 0
>      item 1 key (DEV_TREE ROOT_ITEM 0) itemoff 15405 itemsize 439
>
>
> # btrfs-map-logical -l 786939904 /dev/sdb3
>
> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
> bad tree block 952483840, bytenr mismatch, want=952483840, have=0
> ERROR: failed to read block groups: Input/output error
> Open ctree failed
>
>
>
> Sooooo.. now what..?
>
With v5.11 or newer kernel, mount it with "-o rescue=all,ro".

Thanks,
Qu


* Re: Trying to recover data from SSD
  2021-08-10 22:24         ` Qu Wenruo
@ 2021-08-10 23:21           ` Konstantin Svist
  2021-08-10 23:54             ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-10 23:21 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS, Zygo Blaxell

On 8/10/21 15:24, Qu Wenruo wrote:
>
> On 2021/8/11 上午12:12, Konstantin Svist wrote:
>>
>>>> I don't know how to do that (corrupt the extent tree)
>>>
>>> There is the more detailed version:
>>> https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/
>>>
>>
>>
>> So, here's what I get:
>>
>>
>> # btrfs ins dump-tree -t root /dev/sdb3 |grep -A5 'item 0 key
>> (EXTENT_TREE'
>>
>>      item 0 key (EXTENT_TREE ROOT_ITEM 0) itemoff 15844 itemsize 439
>>          generation 166932 root_dirid 0 bytenr 786939904 level 2 refs 1
>>          lastsnap 0 byte_limit 0 bytes_used 50708480 flags 0x0(none)
>>          uuid 00000000-0000-0000-0000-000000000000
>>          drop key (0 UNKNOWN.0 0) level 0
>>      item 1 key (DEV_TREE ROOT_ITEM 0) itemoff 15405 itemsize 439
>>
>>
>> # btrfs-map-logical -l 786939904 /dev/sdb3
>>
>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>> bad tree block 952483840, bytenr mismatch, want=952483840, have=0
>> ERROR: failed to read block groups: Input/output error
>> Open ctree failed
>>
>>
>>
>> Sooooo.. now what..?
>>
> With v5.11 or newer kernel, mount it with "-o rescue=all,ro".


Sorry, I guess that wasn't clear: that error above is what I get while
trying to corrupt the extent tree as per your guide.


That said, my kernel is 5.13.* (without your patch) and this mount
command still fails as before




* Re: Trying to recover data from SSD
  2021-08-10 23:21           ` Konstantin Svist
@ 2021-08-10 23:54             ` Qu Wenruo
  2021-08-11  5:22               ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-10 23:54 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS, Zygo Blaxell

[-- Attachment #1: Type: text/plain, Size: 1965 bytes --]



On 2021/8/11 7:21 AM, Konstantin Svist wrote:
> On 8/10/21 15:24, Qu Wenruo wrote:
>>
>> On 2021/8/11 上午12:12, Konstantin Svist wrote:
>>>
>>>>> I don't know how to do that (corrupt the extent tree)
>>>>
>>>> There is the more detailed version:
>>>> https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/
>>>>
>>>
>>>
>>> So, here's what I get:
>>>
>>>
>>> # btrfs ins dump-tree -t root /dev/sdb3 |grep -A5 'item 0 key
>>> (EXTENT_TREE'
>>>
>>>       item 0 key (EXTENT_TREE ROOT_ITEM 0) itemoff 15844 itemsize 439
>>>           generation 166932 root_dirid 0 bytenr 786939904 level 2 refs 1
>>>           lastsnap 0 byte_limit 0 bytes_used 50708480 flags 0x0(none)
>>>           uuid 00000000-0000-0000-0000-000000000000
>>>           drop key (0 UNKNOWN.0 0) level 0
>>>       item 1 key (DEV_TREE ROOT_ITEM 0) itemoff 15405 itemsize 439
>>>
>>>
>>> # btrfs-map-logical -l 786939904 /dev/sdb3
>>>
>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>> bad tree block 952483840, bytenr mismatch, want=952483840, have=0
>>> ERROR: failed to read block groups: Input/output error
>>> Open ctree failed
>>>
>>>
>>>
>>> Sooooo.. now what..?
>>>
>> With v5.11 or newer kernel, mount it with "-o rescue=all,ro".
>
>
> Sorry, I guess that wasn't clear: that error above is what I get while
> trying to corrupt the extent tree as per your guide.

Oh, btrfs-map-logical is requiring unnecessary trees to continue.

Can you re-compile btrfs-progs with the attached patch?
Then the re-compiled btrfs-map-logical should work without problems.
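
(Assuming a git checkout of btrfs-progs, the rebuild is roughly:

# git am 0001-btrfs-progs-map-logical-handle-corrupted-fs-better.patch
# ./autogen.sh && ./configure --disable-documentation && make
# ./btrfs-map-logical -l 786939904 /dev/sdb3
)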

Thanks,
Qu

>
>
> That said, my kernel is 5.13.* (without your patch) and this mount
> command still fails as before
>
>

[-- Attachment #2: 0001-btrfs-progs-map-logical-handle-corrupted-fs-better.patch --]
[-- Type: text/x-patch, Size: 4124 bytes --]

From c3c0dd1c04e0f96ef3f6d72534e367482871bb31 Mon Sep 17 00:00:00 2001
From: Qu Wenruo <wqu@suse.com>
Date: Wed, 11 Aug 2021 07:37:57 +0800
Subject: [PATCH] btrfs-progs: map-logical: handle corrupted fs better

Currently if running btrfs-map-logical on a filesystem with corrupted
extent tree, it will fail due to open_ctree() error.

But the truth is, btrfs-map-logical only requires chunk tree to do
logical bytenr mapping.

Make btrfs-map-logical more robust by:

- Loosen the open_ctree() requirement
  Now it doesn't require an extent tree to work.

- Don't return error for map_one_extent()
  Function map_one_extent() is to look up the extent tree to ensure there
  is at least one extent for the range we're looking for.

  But since we now don't require the extent tree at all, there is no hard
  requirement for that function.
  Thus here we change it to return void, and only do the check when
  possible.

Now btrfs-map-logical can work on a filesystem with corrupted extent
tree.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 btrfs-map-logical.c | 50 +++++++++++----------------------------------
 1 file changed, 12 insertions(+), 38 deletions(-)

diff --git a/btrfs-map-logical.c b/btrfs-map-logical.c
index b35677730374..f06a612f6c14 100644
--- a/btrfs-map-logical.c
+++ b/btrfs-map-logical.c
@@ -38,8 +38,8 @@
  * */
 static FILE *info_file;
 
-static int map_one_extent(struct btrfs_fs_info *fs_info,
-			  u64 *logical_ret, u64 *len_ret, int search_forward)
+static void map_one_extent(struct btrfs_fs_info *fs_info,
+			   u64 *logical_ret, u64 *len_ret, int search_forward)
 {
 	struct btrfs_path *path;
 	struct btrfs_key key;
@@ -52,7 +52,7 @@ static int map_one_extent(struct btrfs_fs_info *fs_info,
 
 	path = btrfs_alloc_path();
 	if (!path)
-		return -ENOMEM;
+		return;
 
 	key.objectid = logical;
 	key.type = 0;
@@ -94,7 +94,11 @@ out:
 		if (len_ret)
 			*len_ret = len;
 	}
-	return ret;
+	/*
+	 * Ignore any error for extent item lookup, it can be corrupted
+	 * extent tree or whatever. In that case, just ignore the
+	 * extent item lookup and reset @ret to 0.
+	 */
 }
 
 static int __print_mapping_info(struct btrfs_fs_info *fs_info, u64 logical,
@@ -261,7 +265,8 @@ int main(int argc, char **argv)
 	radix_tree_init();
 	cache_tree_init(&root_cache);
 
-	root = open_ctree(dev, 0, 0);
+	root = open_ctree(dev, 0, OPEN_CTREE_PARTIAL |
+				  OPEN_CTREE_NO_BLOCK_GROUPS);
 	if (!root) {
 		fprintf(stderr, "Open ctree failed\n");
 		free(output_file);
@@ -293,34 +298,7 @@ int main(int argc, char **argv)
 	cur_len = bytes;
 
 	/* First find the nearest extent */
-	ret = map_one_extent(root->fs_info, &cur_logical, &cur_len, 0);
-	if (ret < 0) {
-		errno = -ret;
-		fprintf(stderr, "Failed to find extent at [%llu,%llu): %m\n",
-			cur_logical, cur_logical + cur_len);
-		goto out_close_fd;
-	}
-	/*
-	 * Normally, search backward should be OK, but for special case like
-	 * given logical is quite small where no extents are before it,
-	 * we need to search forward.
-	 */
-	if (ret > 0) {
-		ret = map_one_extent(root->fs_info, &cur_logical, &cur_len, 1);
-		if (ret < 0) {
-			errno = -ret;
-			fprintf(stderr,
-				"Failed to find extent at [%llu,%llu): %m\n",
-				cur_logical, cur_logical + cur_len);
-			goto out_close_fd;
-		}
-		if (ret > 0) {
-			fprintf(stderr,
-				"Failed to find any extent at [%llu,%llu)\n",
-				cur_logical, cur_logical + cur_len);
-			goto out_close_fd;
-		}
-	}
+	map_one_extent(root->fs_info, &cur_logical, &cur_len, 0);
 
 	while (cur_logical + cur_len >= logical && cur_logical < logical +
 	       bytes) {
@@ -328,11 +306,7 @@ int main(int argc, char **argv)
 		u64 real_len;
 
 		found = 1;
-		ret = map_one_extent(root->fs_info, &cur_logical, &cur_len, 1);
-		if (ret < 0)
-			goto out_close_fd;
-		if (ret > 0)
-			break;
+		map_one_extent(root->fs_info, &cur_logical, &cur_len, 1);
 		/* check again if there is overlap. */
 		if (cur_logical + cur_len < logical ||
 		    cur_logical >= logical + bytes)
-- 
2.32.0



* Re: Trying to recover data from SSD
  2021-08-10  6:56     ` Qu Wenruo
  2021-08-10 16:12       ` Konstantin Svist
@ 2021-08-11  0:30       ` Zygo Blaxell
  1 sibling, 0 replies; 30+ messages in thread
From: Zygo Blaxell @ 2021-08-11  0:30 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Konstantin Svist, Btrfs BTRFS

On Tue, Aug 10, 2021 at 02:56:22PM +0800, Qu Wenruo wrote:
> 
> 
> On 2021/8/10 下午2:44, Konstantin Svist wrote:
> > It's a Micron MTFDDAK512MBF, firmware M603
> 
> CC Zygo to see if he also hits such hardware.

I only have one data point for Micron M600 series, and this is it.

> > I don't know how to do that (corrupt the extent tree)
> 
> There is the more detailed version:
> https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/
> 
> Or if you can re-compile btrfs kernel module, this patch would allow
> your to mount with rescue=all, without destroying the existing extent
> tree root:
> https://patchwork.kernel.org/project/linux-btrfs/patch/20210719054304.181509-1-wqu@suse.com/
> 
> 
> > Is there any other way to pull files off of the drive?
> 
> Either btrfs-restore or above rescue=all mount option.
> 
> Since btrfs-restore doesn't give you the recent files, I guess the
> rescue=all method is the only alternative.
> 
> Thanks,
> Qu
> 
> > 
> > 
> > On Mon, Aug 9, 2021, 22:24 Qu Wenruo <quwenruo.btrfs@gmx.com
> > <mailto:quwenruo.btrfs@gmx.com>> wrote:
> > 
> > 
> > 
> >     On 2021/8/10 下午12:41, Konstantin Svist wrote:
> >      > Not sure exactly when it stopped working, possibly had a power
> >     outage..
> >      > I was able to pull most of a snapshot with btrfs restore -s --
> >     but it's
> >      > months old and I want the more recent files from.
> >      >
> >      >
> >      > Testing the SSD for bad sectors, but nothing so far
> >      >
> >      >
> >      > While trying to mount:
> >      > [442587.465598] BTRFS info (device sdb3): allowing degraded mounts
> >      > [442587.465602] BTRFS info (device sdb3): disk space caching is
> >     enabled
> >      > [442587.465603] BTRFS info (device sdb3): has skinny extents
> >      > [442587.522301] BTRFS error (device sdb3): bad tree block start, want
> >      > 952483840 have 0
> >      > [442587.522867] BTRFS error (device sdb3): bad tree block start, want
> >      > 952483840 have 0
> > 
> >     Some metadata is completely lost.
> > 
> >     Mind to share the hardware model? Maybe it's some known bad hardware.
> > 
> >     Just a small note, all filesystems (including btrfs) should survive a
> >     power loss, as long as the disk is following the FLUSH/FUA requirement
> >     properly.
> > 
> >      > [442587.522876] BTRFS error (device sdb3): failed to read block
> >     groups: -5
> >      > [442587.523520] BTRFS error (device sdb3): open_ctree failed
> >      > [442782.661849] BTRFS error (device sdb3): unrecognized mount option
> >      > 'rootflags=recovery'
> >      > [442782.661926] BTRFS error (device sdb3): open_ctree failed
> > 
> >     Since the fs is already corrupted, you can try to corrupt extent tree
> >     root completely, then "rescue=all" mount option should allow you to
> >     mount the fs RO, and grab as much data as you can.
> > 
> >     But I doubt if it's any better than btrfs-restore.
> > 
> >     Thanks,
> >     Qu
> >      >
> >      > # btrfs-find-root /dev/sdb3
> >      > ERROR: failed to read block groups: Input/output error
> >      > Superblock thinks the generation is 166932
> >      > Superblock thinks the level is 1
> >      > Found tree root at 787070976 gen 166932 level 1
> >      > Well block 786399232(gen: 166931 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 781172736(gen: 166930 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 778108928(gen: 166929 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 100696064(gen: 166928 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 99565568(gen: 166927 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 97599488(gen: 166926 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 91701248(gen: 166925 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 89620480(gen: 166924 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 86818816(gen: 166923 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 84197376(gen: 166922 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 76398592(gen: 166921 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 72400896(gen: 166920 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 63275008(gen: 166919 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 60080128(gen: 166918 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 58032128(gen: 166917 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 55689216(gen: 166916 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 52264960(gen: 166915 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 49758208(gen: 166914 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 48300032(gen: 166913 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 45350912(gen: 166912 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 40337408(gen: 166911 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 71172096(gen: 166846 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 61210624(gen: 166843 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 55492608(gen: 166840 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 36044800(gen: 166829 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 34095104(gen: 166828 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 33046528(gen: 166827 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 31014912(gen: 166826 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 30556160(gen: 166825 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 777011200(gen: 166822 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 766672896(gen: 166821 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 690274304(gen: 166820 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 175046656(gen: 166819 level: 1) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 766017536(gen: 166813 level: 0) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 765739008(gen: 166813 level: 0) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > Well block 32604160(gen: 152478 level: 0) seems good, but
> >      > generation/level doesn't match, want gen: 166932 level: 1
> >      > # btrfs check /dev/sdb3
> >      > Opening filesystem to check...
> >      > checksum verify failed on 952483840 wanted 0x00000000 found
> >     0xb6bde3e4
> >      > checksum verify failed on 952483840 wanted 0x00000000 found
> >     0xb6bde3e4
> >      > checksum verify failed on 952483840 wanted 0x00000000 found
> >     0xb6bde3e4
> >      > bad tree block 952483840, bytenr mismatch, want=952483840, have=0
> >      > ERROR: failed to read block groups: Input/output error
> >      > ERROR: cannot open file system
> >      >
> >      >
> >      > # uname -a
> >      > Linux fry 5.13.6-200.fc34.x86_64 #1 SMP Wed Jul 28 15:31:21 UTC 2021
> >      > x86_64 x86_64 x86_64 GNU/Linux
> >      > # btrfs --version
> >      > btrfs-progs v5.13.1
> >      > # btrfs fi show /dev/sdb3
> >      > Label: none  uuid: 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
> >      >      Total devices 1 FS bytes used 171.92GiB
> >      >      devid    1 size 472.10GiB used 214.02GiB path /dev/sdb3
> >      >
> >      >
> > 


* Re: Trying to recover data from SSD
  2021-08-10 23:54             ` Qu Wenruo
@ 2021-08-11  5:22               ` Konstantin Svist
  2021-08-11  5:24                 ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-11  5:22 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS, Zygo Blaxell

On 8/10/21 16:54, Qu Wenruo wrote:
>
>
> On 2021/8/11 上午7:21, Konstantin Svist wrote:
>> On 8/10/21 15:24, Qu Wenruo wrote:
>>>
>>> On 2021/8/11 上午12:12, Konstantin Svist wrote:
>>>>
>>>>>> I don't know how to do that (corrupt the extent tree)
>>>>>
>>>>> There is the more detailed version:
>>>>> https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/
>>>>>
>>>>>
>>>>
>>>>
>>>> So, here's what I get:
>>>>
>>>>
>>>> # btrfs ins dump-tree -t root /dev/sdb3 |grep -A5 'item 0 key
>>>> (EXTENT_TREE'
>>>>
>>>>       item 0 key (EXTENT_TREE ROOT_ITEM 0) itemoff 15844 itemsize 439
>>>>           generation 166932 root_dirid 0 bytenr 786939904 level 2
>>>> refs 1
>>>>           lastsnap 0 byte_limit 0 bytes_used 50708480 flags 0x0(none)
>>>>           uuid 00000000-0000-0000-0000-000000000000
>>>>           drop key (0 UNKNOWN.0 0) level 0
>>>>       item 1 key (DEV_TREE ROOT_ITEM 0) itemoff 15405 itemsize 439
>>>>
>>>>
>>>> # btrfs-map-logical -l 786939904 /dev/sdb3
>>>>
>>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>>> bad tree block 952483840, bytenr mismatch, want=952483840, have=0
>>>> ERROR: failed to read block groups: Input/output error
>>>> Open ctree failed
>>>>
>>>>
>>>>
>>>> Sooooo.. now what..?
>>>>
>>> With v5.11 or newer kernel, mount it with "-o rescue=all,ro".
>>
>>
>> Sorry, I guess that wasn't clear: that error above is what I get while
>> trying to corrupt the extent tree as per your guide.
>
> Oh, that btrfs-map-logical is requiring unnecessary trees to continue.
>
> Can you re-compile btrfs-progs with the attached patch?
> Then the re-compiled btrfs-map-logical should work without problem.



Awesome, that worked to map the sector & mount the partition, but I
still can't access subvol_root, where the recent data is:

[root@fry ~]# mount -oro,rescue=all /dev/sdb3 /mnt/
[root@fry ~]# ll /mnt/
ls: cannot access '/mnt/subvol_root': Input/output error
total 0
d?????????? ? ?    ?     ?            ? subvol_root
drwxr-xr-x. 1 root root 12 Mar 18 20:55 subvol_snapshots


dmesg:

[532051.071515] BTRFS info (device sdb3): enabling all of the rescue options
[532051.071521] BTRFS info (device sdb3): ignoring data csums
[532051.071523] BTRFS info (device sdb3): ignoring bad roots
[532051.071524] BTRFS info (device sdb3): disabling log replay at mount time
[532051.071526] BTRFS info (device sdb3): disk space caching is enabled
[532051.071528] BTRFS info (device sdb3): has skinny extents
[532051.077018] BTRFS warning (device sdb3): sdb3 checksum verify failed
on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6 level 2
[532051.077710] BTRFS warning (device sdb3): sdb3 checksum verify failed
on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6 level 2
[532052.705324] BTRFS error (device sdb3): bad tree block start, want
920748032 have 0
[532052.705934] BTRFS error (device sdb3): bad tree block start, want
920748032 have 0



* Re: Trying to recover data from SSD
  2021-08-11  5:22               ` Konstantin Svist
@ 2021-08-11  5:24                 ` Qu Wenruo
  2021-08-11  5:34                   ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-11  5:24 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS, Zygo Blaxell



On 2021/8/11 1:22 PM, Konstantin Svist wrote:
> On 8/10/21 16:54, Qu Wenruo wrote:
>>
>>
>> On 2021/8/11 上午7:21, Konstantin Svist wrote:
>>> On 8/10/21 15:24, Qu Wenruo wrote:
>>>>
>>>> On 2021/8/11 上午12:12, Konstantin Svist wrote:
>>>>>
>>>>>>> I don't know how to do that (corrupt the extent tree)
>>>>>>
>>>>>> There is the more detailed version:
>>>>>> https://lore.kernel.org/linux-btrfs/744795fa-e45a-110a-103e-13caf597299a@gmx.com/
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> So, here's what I get:
>>>>>
>>>>>
>>>>> # btrfs ins dump-tree -t root /dev/sdb3 |grep -A5 'item 0 key
>>>>> (EXTENT_TREE'
>>>>>
>>>>>        item 0 key (EXTENT_TREE ROOT_ITEM 0) itemoff 15844 itemsize 439
>>>>>            generation 166932 root_dirid 0 bytenr 786939904 level 2
>>>>> refs 1
>>>>>            lastsnap 0 byte_limit 0 bytes_used 50708480 flags 0x0(none)
>>>>>            uuid 00000000-0000-0000-0000-000000000000
>>>>>            drop key (0 UNKNOWN.0 0) level 0
>>>>>        item 1 key (DEV_TREE ROOT_ITEM 0) itemoff 15405 itemsize 439
>>>>>
>>>>>
>>>>> # btrfs-map-logical -l 786939904 /dev/sdb3
>>>>>
>>>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>>>> checksum verify failed on 952483840 wanted 0x00000000 found 0xb6bde3e4
>>>>> bad tree block 952483840, bytenr mismatch, want=952483840, have=0
>>>>> ERROR: failed to read block groups: Input/output error
>>>>> Open ctree failed
>>>>>
>>>>>
>>>>>
>>>>> Sooooo.. now what..?
>>>>>
>>>> With v5.11 or newer kernel, mount it with "-o rescue=all,ro".
>>>
>>>
>>> Sorry, I guess that wasn't clear: that error above is what I get while
>>> trying to corrupt the extent tree as per your guide.
>>
>> Oh, that btrfs-map-logical is requiring unnecessary trees to continue.
>>
>> Can you re-compile btrfs-progs with the attached patch?
>> Then the re-compiled btrfs-map-logical should work without problem.
>
>
>
> Awesome, that worked to map the sector & mount the partition.. but I
> still can't access subvol_root, where the recent data is:

Is subvol_root a subvolume?

If so, you can try to mount the subvolume using its subvolume id.

But in that case, it would not be much different from using
btrfs-restore with the "-r" option.
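
(With <id> standing in for the numeric subvolume id, the two routes look
roughly like:

# mount -o ro,rescue=all,subvolid=<id> /dev/sdb3 /mnt
# btrfs restore -r <id> -v /dev/sdb3 /mnt/recovery
)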

Thanks,
Qu
>
> [root@fry ~]# mount -oro,rescue=all /dev/sdb3 /mnt/
> [root@fry ~]# ll /mnt/
> ls: cannot access '/mnt/subvol_root': Input/output error
> total 0
> d?????????? ? ?    ?     ?            ? subvol_root
> drwxr-xr-x. 1 root root 12 Mar 18 20:55 subvol_snapshots
>
>
> dmesg:
>
> [532051.071515] BTRFS info (device sdb3): enabling all of the rescue options
> [532051.071521] BTRFS info (device sdb3): ignoring data csums
> [532051.071523] BTRFS info (device sdb3): ignoring bad roots
> [532051.071524] BTRFS info (device sdb3): disabling log replay at mount time
> [532051.071526] BTRFS info (device sdb3): disk space caching is enabled
> [532051.071528] BTRFS info (device sdb3): has skinny extents
> [532051.077018] BTRFS warning (device sdb3): sdb3 checksum verify failed
> on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6 level 2
> [532051.077710] BTRFS warning (device sdb3): sdb3 checksum verify failed
> on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6 level 2
> [532052.705324] BTRFS error (device sdb3): bad tree block start, want
> 920748032 have 0
> [532052.705934] BTRFS error (device sdb3): bad tree block start, want
> 920748032 have 0
>


* Re: Trying to recover data from SSD
  2021-08-11  5:24                 ` Qu Wenruo
@ 2021-08-11  5:34                   ` Konstantin Svist
  2021-08-11  5:49                     ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-11  5:34 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS, Zygo Blaxell

On 8/10/21 22:24, Qu Wenruo wrote:
>
>
> On 2021/8/11 下午1:22, Konstantin Svist wrote:
>> On 8/10/21 16:54, Qu Wenruo wrote:
>>>
>>> Oh, that btrfs-map-logical is requiring unnecessary trees to continue.
>>>
>>> Can you re-compile btrfs-progs with the attached patch?
>>> Then the re-compiled btrfs-map-logical should work without problem.
>>
>>
>>
>> Awesome, that worked to map the sector & mount the partition.. but I
>> still can't access subvol_root, where the recent data is:
>
> Is subvol_root a subvolume?
>
> If so, you can try to mount the subvolume using subvolume id.
>
> But in that case, it would be not much different than using
> btrfs-restore with "-r" option. 


Yes it is.

# mount -oro,rescue=all,subvol=subvol_root /dev/sdb3 /mnt/
mount: /mnt: can't read superblock on /dev/sdb3.

dmesg has the same errors, though.

Anything else I can do?



* Re: Trying to recover data from SSD
  2021-08-11  5:34                   ` Konstantin Svist
@ 2021-08-11  5:49                     ` Qu Wenruo
  2021-08-11 19:33                       ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-11  5:49 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS, Zygo Blaxell



On 2021/8/11 1:34 PM, Konstantin Svist wrote:
> On 8/10/21 22:24, Qu Wenruo wrote:
>>
>>
>> On 2021/8/11 下午1:22, Konstantin Svist wrote:
>>> On 8/10/21 16:54, Qu Wenruo wrote:
>>>>
>>>> Oh, that btrfs-map-logical is requiring unnecessary trees to continue.
>>>>
>>>> Can you re-compile btrfs-progs with the attached patch?
>>>> Then the re-compiled btrfs-map-logical should work without problem.
>>>
>>>
>>>
>>> Awesome, that worked to map the sector & mount the partition.. but I
>>> still can't access subvol_root, where the recent data is:
>>
>> Is subvol_root a subvolume?
>>
>> If so, you can try to mount the subvolume using subvolume id.
>>
>> But in that case, it would be not much different than using
>> btrfs-restore with "-r" option.
>
>
> Yes it is.
>
> # mount -oro,rescue=all,subvol=subvol_root /dev/sdb3 /mnt/
> mount: /mnt: can't read superblock on /dev/sdb3.

I mean using subvolid=<number>.

Using subvol= will still trigger the same path lookup code and get
aborted by the IO error.

To get the number, I guess the regular tools are not helpful.

You may want to manually examine the root tree:

# btrfs ins dump-tree -t root <device>

Then look for keys like (<number> ROOT_ITEM <0 or number>), and try
passing the first number to the "subvolid=" option.
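
(For example, piping the dump through grep narrows it down:

# btrfs ins dump-tree -t root /dev/sdb3 | grep 'ROOT_ITEM'

and if the wanted key turns out to be e.g. (257 ROOT_ITEM 0), the mount
becomes "-o ro,rescue=all,subvolid=257".)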

Thanks,
Qu

>
> dmesg has the same errors, though..
>
> Anything else I can do?
>


* Re: Trying to recover data from SSD
  2021-08-11  5:49                     ` Qu Wenruo
@ 2021-08-11 19:33                       ` Konstantin Svist
  2021-08-11 21:51                         ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-11 19:33 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS, Zygo Blaxell

On 8/10/21 22:49, Qu Wenruo wrote:
>
>
> On 2021/8/11 下午1:34, Konstantin Svist wrote:
>> On 8/10/21 22:24, Qu Wenruo wrote:
>>>
>>>
>>> On 2021/8/11 下午1:22, Konstantin Svist wrote:
>>>> On 8/10/21 16:54, Qu Wenruo wrote:
>>>>>
>>>>> Oh, that btrfs-map-logical is requiring unnecessary trees to
>>>>> continue.
>>>>>
>>>>> Can you re-compile btrfs-progs with the attached patch?
>>>>> Then the re-compiled btrfs-map-logical should work without problem.
>>>>
>>>>
>>>>
>>>> Awesome, that worked to map the sector & mount the partition.. but I
>>>> still can't access subvol_root, where the recent data is:
>>>
>>> Is subvol_root a subvolume?
>>>
>>> If so, you can try to mount the subvolume using subvolume id.
>>>
>>> But in that case, it would be not much different than using
>>> btrfs-restore with "-r" option.
>>
>>
>> Yes it is.
>>
>> # mount -oro,rescue=all,subvol=subvol_root /dev/sdb3 /mnt/
>> mount: /mnt: can't read superblock on /dev/sdb3.
>
> I mean using subvolid=<number>
>
> Using subvol= will still trigger the same path lookup code and get
> aborted by the IO error.
>
> To get the number, I guess the regular tools are not helpful.
>
> You may want to manually exam the root tree:
>
> # btrfs ins dump-tree -t root <device>
>
> Then look for the keys like (<number> ROOT_ITEM <0 or number>), and try
> passing the first number to "subvolid=" option. 

This works (and numbers seem to be the same as from dump-tree):
# mount -oro,rescue=all /dev/sdb3 /mnt/
# btrfs su li /mnt/
ID 257 gen 166932 top level 5 path subvol_root
ID 258 gen 56693 top level 5 path subvol_snapshots
ID 498 gen 56479 top level 258 path subvol_snapshots/29/snapshot
ID 499 gen 56642 top level 258 path subvol_snapshots/30/snapshot
ID 500 gen 56691 top level 258 path subvol_snapshots/31/snapshot

This also works (not what I want):
# mount -oro,rescue=all,subvol=subvol_snapshots /dev/sdb3 /mnt/


But this doesn't:

# mount -oro,rescue=all,subvolid=257 /dev/sdb3 /mnt/
mount: /mnt: can't read superblock on /dev/sdb3.

dmesg:
BTRFS error (device sdb3): bad tree block start, want 920748032 have 0



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-11 19:33                       ` Konstantin Svist
@ 2021-08-11 21:51                         ` Qu Wenruo
  2021-08-11 22:34                           ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-11 21:51 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS, Zygo Blaxell



On 2021/8/12 上午3:33, Konstantin Svist wrote:
> On 8/10/21 22:49, Qu Wenruo wrote:
>>
>>
>> On 2021/8/11 下午1:34, Konstantin Svist wrote:
>>> On 8/10/21 22:24, Qu Wenruo wrote:
>>>>
>>>>
>>>> On 2021/8/11 下午1:22, Konstantin Svist wrote:
>>>>> On 8/10/21 16:54, Qu Wenruo wrote:
>>>>>>
>>>>>> Oh, that btrfs-map-logical is requiring unnecessary trees to
>>>>>> continue.
>>>>>>
>>>>>> Can you re-compile btrfs-progs with the attached patch?
>>>>>> Then the re-compiled btrfs-map-logical should work without problem.
>>>>>
>>>>>
>>>>>
>>>>> Awesome, that worked to map the sector & mount the partition.. but I
>>>>> still can't access subvol_root, where the recent data is:
>>>>
>>>> Is subvol_root a subvolume?
>>>>
>>>> If so, you can try to mount the subvolume using subvolume id.
>>>>
>>>> But in that case, it would be not much different than using
>>>> btrfs-restore with "-r" option.
>>>
>>>
>>> Yes it is.
>>>
>>> # mount -oro,rescue=all,subvol=subvol_root /dev/sdb3 /mnt/
>>> mount: /mnt: can't read superblock on /dev/sdb3.
>>
>> I mean using subvolid=<number>
>>
>> Using subvol= will still trigger the same path lookup code and get
>> aborted by the IO error.
>>
>> To get the number, I guess the regular tools are not helpful.
>>
>> You may want to manually exam the root tree:
>>
>> # btrfs ins dump-tree -t root <device>
>>
>> Then look for the keys like (<number> ROOT_ITEM <0 or number>), and try
>> passing the first number to "subvolid=" option.
>
> This works (and numbers seem to be the same as from dump-tree):
> # mount -oro,rescue=all /dev/sdb3 /mnt/
> # btrfs su li /mnt/
> ID 257 gen 166932 top level 5 path subvol_root
> ID 258 gen 56693 top level 5 path subvol_snapshots
> ID 498 gen 56479 top level 258 path subvol_snapshots/29/snapshot
> ID 499 gen 56642 top level 258 path subvol_snapshots/30/snapshot
> ID 500 gen 56691 top level 258 path subvol_snapshots/31/snapshot
>
> This also works (not what I want):
> # mount -oro,rescue=all,subvol=subvol_snapshots /dev/sdb3 /mnt/
>
>
> But this doesn't:
>
> # mount -oro,rescue=all,subvolid=257 /dev/sdb3 /mnt/
> mount: /mnt: can't read superblock on /dev/sdb3.
>
> dmesg:
> BTRFS error (device sdb3): bad tree block start, want 920748032 have 0
>
>
Then it means the tree blocks of that subvolume are corrupted, so
unfortunately there is no way to read that subvolume.

Thanks,
Qu

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-11 21:51                         ` Qu Wenruo
@ 2021-08-11 22:34                           ` Konstantin Svist
  2021-08-12  1:18                             ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-11 22:34 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS, Zygo Blaxell

On 8/11/21 14:51, Qu Wenruo wrote:
>
>
> On 2021/8/12 上午3:33, Konstantin Svist wrote:
>> On 8/10/21 22:49, Qu Wenruo wrote:
>>>
>>>
>>> On 2021/8/11 下午1:34, Konstantin Svist wrote:
>>>> On 8/10/21 22:24, Qu Wenruo wrote:
>>>>>
>>>>>
>>>>> On 2021/8/11 下午1:22, Konstantin Svist wrote:
>>>>>> On 8/10/21 16:54, Qu Wenruo wrote:
>>>>>>>
>>>>>>> Oh, that btrfs-map-logical is requiring unnecessary trees to
>>>>>>> continue.
>>>>>>>
>>>>>>> Can you re-compile btrfs-progs with the attached patch?
>>>>>>> Then the re-compiled btrfs-map-logical should work without problem.
>>>>>>
>>>>>>
>>>>>>
>>>>>> Awesome, that worked to map the sector & mount the partition.. but I
>>>>>> still can't access subvol_root, where the recent data is:
>>>>>
>>>>> Is subvol_root a subvolume?
>>>>>
>>>>> If so, you can try to mount the subvolume using subvolume id.
>>>>>
>>>>> But in that case, it would be not much different than using
>>>>> btrfs-restore with "-r" option.
>>>>
>>>>
>>>> Yes it is.
>>>>
>>>> # mount -oro,rescue=all,subvol=subvol_root /dev/sdb3 /mnt/
>>>> mount: /mnt: can't read superblock on /dev/sdb3.
>>>
>>> I mean using subvolid=<number>
>>>
>>> Using subvol= will still trigger the same path lookup code and get
>>> aborted by the IO error.
>>>
>>> To get the number, I guess the regular tools are not helpful.
>>>
>>> You may want to manually exam the root tree:
>>>
>>> # btrfs ins dump-tree -t root <device>
>>>
>>> Then look for the keys like (<number> ROOT_ITEM <0 or number>), and try
>>> passing the first number to "subvolid=" option.
>>
>> This works (and numbers seem to be the same as from dump-tree):
>> # mount -oro,rescue=all /dev/sdb3 /mnt/
>> # btrfs su li /mnt/
>> ID 257 gen 166932 top level 5 path subvol_root
>> ID 258 gen 56693 top level 5 path subvol_snapshots
>> ID 498 gen 56479 top level 258 path subvol_snapshots/29/snapshot
>> ID 499 gen 56642 top level 258 path subvol_snapshots/30/snapshot
>> ID 500 gen 56691 top level 258 path subvol_snapshots/31/snapshot
>>
>> This also works (not what I want):
>> # mount -oro,rescue=all,subvol=subvol_snapshots /dev/sdb3 /mnt/
>>
>>
>> But this doesn't:
>>
>> # mount -oro,rescue=all,subvolid=257 /dev/sdb3 /mnt/
>> mount: /mnt: can't read superblock on /dev/sdb3.
>>
>> dmesg:
>> BTRFS error (device sdb3): bad tree block start, want 920748032 have 0
>>
>>
> Then it means, the tree blocks of that subvolume is corrupted, thus no
> way to read that subvolume, unfortunately.
>
> Thanks,
> Qu


Shouldn't there be an earlier generation of this subvolume's tree block
somewhere on the disk? Would all of them have gotten overwritten already?

Any hope for individual files, if not for the whole subvolume?





^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-11 22:34                           ` Konstantin Svist
@ 2021-08-12  1:18                             ` Qu Wenruo
  2021-08-21  2:56                               ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-12  1:18 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS, Zygo Blaxell



On 2021/8/12 上午6:34, Konstantin Svist wrote:
> On 8/11/21 14:51, Qu Wenruo wrote:
>>
>>
>> On 2021/8/12 上午3:33, Konstantin Svist wrote:
>>> On 8/10/21 22:49, Qu Wenruo wrote:
>>>>
>>>>
>>>> On 2021/8/11 下午1:34, Konstantin Svist wrote:
>>>>> On 8/10/21 22:24, Qu Wenruo wrote:
>>>>>>
>>>>>>
>>>>>> On 2021/8/11 下午1:22, Konstantin Svist wrote:
>>>>>>> On 8/10/21 16:54, Qu Wenruo wrote:
>>>>>>>>
>>>>>>>> Oh, that btrfs-map-logical is requiring unnecessary trees to
>>>>>>>> continue.
>>>>>>>>
>>>>>>>> Can you re-compile btrfs-progs with the attached patch?
>>>>>>>> Then the re-compiled btrfs-map-logical should work without problem.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Awesome, that worked to map the sector & mount the partition.. but I
>>>>>>> still can't access subvol_root, where the recent data is:
>>>>>>
>>>>>> Is subvol_root a subvolume?
>>>>>>
>>>>>> If so, you can try to mount the subvolume using subvolume id.
>>>>>>
>>>>>> But in that case, it would be not much different than using
>>>>>> btrfs-restore with "-r" option.
>>>>>
>>>>>
>>>>> Yes it is.
>>>>>
>>>>> # mount -oro,rescue=all,subvol=subvol_root /dev/sdb3 /mnt/
>>>>> mount: /mnt: can't read superblock on /dev/sdb3.
>>>>
>>>> I mean using subvolid=<number>
>>>>
>>>> Using subvol= will still trigger the same path lookup code and get
>>>> aborted by the IO error.
>>>>
>>>> To get the number, I guess the regular tools are not helpful.
>>>>
>>>> You may want to manually exam the root tree:
>>>>
>>>> # btrfs ins dump-tree -t root <device>
>>>>
>>>> Then look for the keys like (<number> ROOT_ITEM <0 or number>), and try
>>>> passing the first number to "subvolid=" option.
>>>
>>> This works (and numbers seem to be the same as from dump-tree):
>>> # mount -oro,rescue=all /dev/sdb3 /mnt/
>>> # btrfs su li /mnt/
>>> ID 257 gen 166932 top level 5 path subvol_root
>>> ID 258 gen 56693 top level 5 path subvol_snapshots
>>> ID 498 gen 56479 top level 258 path subvol_snapshots/29/snapshot
>>> ID 499 gen 56642 top level 258 path subvol_snapshots/30/snapshot
>>> ID 500 gen 56691 top level 258 path subvol_snapshots/31/snapshot
>>>
>>> This also works (not what I want):
>>> # mount -oro,rescue=all,subvol=subvol_snapshots /dev/sdb3 /mnt/
>>>
>>>
>>> But this doesn't:
>>>
>>> # mount -oro,rescue=all,subvolid=257 /dev/sdb3 /mnt/
>>> mount: /mnt: can't read superblock on /dev/sdb3.
>>>
>>> dmesg:
>>> BTRFS error (device sdb3): bad tree block start, want 920748032 have 0
>>>
>>>
>> Then it means, the tree blocks of that subvolume is corrupted, thus no
>> way to read that subvolume, unfortunately.
>>
>> Thanks,
>> Qu
>
>
> Shouldn't there be an earlier generation of this subvolume's tree block
> somewhere on the disk? Would all of them have gotten overwritten already?

Then it gets more complex, and I can't guarantee a good result.

Firstly you need to find an older root tree:

# btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
                 backup_tree_root:       30687232        gen: 2317    level: 0
                 backup_tree_root:       30834688        gen: 2318    level: 0
                 backup_tree_root:       30408704        gen: 2319    level: 0
                 backup_tree_root:       31031296        gen: 2316    level: 0

Then try the bytenrs in reverse generation order with btrfs ins
dump-tree:
(The latest one should be the current root, so you can skip it.)

# btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
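
If it helps, a rough shell sketch that runs the same grep over all the
non-latest backup roots (the bytenrs below are just the ones from my
example output above, in reverse generation order -- substitute your own):

for b in 30834688 30687232 31031296; do
        echo "=== backup root $b ==="
        btrfs ins dump-tree -b "$b" /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
done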

Then grab the bytenr of subvolume 257 and pass it to btrfs-restore:

# btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>

The chance is already pretty low, good luck.

Thanks,
Qu

>
> Any hope for any individual files, if not for subvolume?
>
>
>
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-12  1:18                             ` Qu Wenruo
@ 2021-08-21  2:56                               ` Konstantin Svist
  2021-08-28  5:57                                 ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-21  2:56 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

On 8/11/21 18:18, Qu Wenruo wrote:
>
>
> On 2021/8/12 上午6:34, Konstantin Svist wrote:
>>
>> Shouldn't there be an earlier generation of this subvolume's tree block
>> somewhere on the disk? Would all of them have gotten overwritten
>> already?
>
> Then it will be more complex and I can't ensure any good result.


It was already pretty complex and results were never guaranteed :)


>
> Firstly you need to find an older root tree:
>
> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>                 backup_tree_root:       30687232        gen: 2317
>  level: 0
>                 backup_tree_root:       30834688        gen: 2318
>  level: 0
>                 backup_tree_root:       30408704        gen: 2319
>  level: 0
>                 backup_tree_root:       31031296        gen: 2316
>  level: 0
>
> Then try the bytenr in their reverse generation order in btrfs ins
> dump-tree:
> (The latest one should be the current root, thus you can skip it)
>
> # btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
>
> Then grab the bytenr of the subvolume 257, then pass the bytenr to
> btrfs-restore:
>
> # btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>
>
> The chance is already pretty low, good luck.
>
> Thanks,
> Qu 



When I run dump-tree, I get this:

# btrfs ins dump-tree -b 787070976 /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it

The exact same offset fails the checksum for all 4 backup roots; is
there any way around this?



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-21  2:56                               ` Konstantin Svist
@ 2021-08-28  5:57                                 ` Konstantin Svist
  2021-08-28  6:16                                   ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-28  5:57 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

On 8/20/21 19:56, Konstantin Svist wrote:
> On 8/11/21 18:18, Qu Wenruo wrote:
>>
>> On 2021/8/12 上午6:34, Konstantin Svist wrote:
>>> Shouldn't there be an earlier generation of this subvolume's tree block
>>> somewhere on the disk? Would all of them have gotten overwritten
>>> already?
>> Then it will be more complex and I can't ensure any good result.
>
> It was already pretty complex and results were never guaranteed :)
>
>
>> Firstly you need to find an older root tree:
>>
>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>                 backup_tree_root:       30687232        gen: 2317
>>  level: 0
>>                 backup_tree_root:       30834688        gen: 2318
>>  level: 0
>>                 backup_tree_root:       30408704        gen: 2319
>>  level: 0
>>                 backup_tree_root:       31031296        gen: 2316
>>  level: 0
>>
>> Then try the bytenr in their reverse generation order in btrfs ins
>> dump-tree:
>> (The latest one should be the current root, thus you can skip it)
>>
>> # btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
>>
>> Then grab the bytenr of the subvolume 257, then pass the bytenr to
>> btrfs-restore:
>>
>> # btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>
>>
>> The chance is already pretty low, good luck.
>>
>> Thanks,
>> Qu 
>
>
> When I run dump-tree, I get this:
>
> # btrfs ins dump-tree -b 787070976 /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
>
> The same exact offset fails checksum for all 4 backup roots, any way
> around this?


*ping*

Any hope left here?


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-28  5:57                                 ` Konstantin Svist
@ 2021-08-28  6:16                                   ` Qu Wenruo
  2021-08-28 23:16                                     ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-28  6:16 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS



On 2021/8/28 下午1:57, Konstantin Svist wrote:
> On 8/20/21 19:56, Konstantin Svist wrote:
>> On 8/11/21 18:18, Qu Wenruo wrote:
>>>
>>> On 2021/8/12 上午6:34, Konstantin Svist wrote:
>>>> Shouldn't there be an earlier generation of this subvolume's tree block
>>>> somewhere on the disk? Would all of them have gotten overwritten
>>>> already?
>>> Then it will be more complex and I can't ensure any good result.
>>
>> It was already pretty complex and results were never guaranteed :)
>>
>>
>>> Firstly you need to find an older root tree:
>>>
>>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>>                  backup_tree_root:       30687232        gen: 2317
>>>   level: 0
>>>                  backup_tree_root:       30834688        gen: 2318
>>>   level: 0
>>>                  backup_tree_root:       30408704        gen: 2319
>>>   level: 0
>>>                  backup_tree_root:       31031296        gen: 2316
>>>   level: 0
>>>
>>> Then try the bytenr in their reverse generation order in btrfs ins
>>> dump-tree:
>>> (The latest one should be the current root, thus you can skip it)
>>>
>>> # btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
>>>
>>> Then grab the bytenr of the subvolume 257, then pass the bytenr to
>>> btrfs-restore:
>>>
>>> # btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>
>>>
>>> The chance is already pretty low, good luck.
>>>
>>> Thanks,
>>> Qu
>>
>>
>> When I run dump-tree, I get this:
>>
>> # btrfs ins dump-tree -b 787070976 /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> Csum didn't match
>> WARNING: could not setup extent tree, skipping it
>>
>> The same exact offset fails checksum for all 4 backup roots, any way
>> around this?

Without the grep, is there any output?
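
E.g. something like this should capture both stdout and stderr, if you
want to paste or attach it (dump.txt is just an example name):

# btrfs ins dump-tree -b 787070976 /dev/sdb3 > dump.txt 2>&1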

Thanks,
Qu

>
>
> *ping*
>
> Any hope left here?
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-28  6:16                                   ` Qu Wenruo
@ 2021-08-28 23:16                                     ` Konstantin Svist
  2021-08-28 23:30                                       ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-28 23:16 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

On 8/27/21 23:16, Qu Wenruo wrote:
>
>
> On 2021/8/28 下午1:57, Konstantin Svist wrote:
>> On 8/20/21 19:56, Konstantin Svist wrote:
>>> On 8/11/21 18:18, Qu Wenruo wrote:
>>>>
>>>> On 2021/8/12 上午6:34, Konstantin Svist wrote:
>>>>> Shouldn't there be an earlier generation of this subvolume's tree
>>>>> block
>>>>> somewhere on the disk? Would all of them have gotten overwritten
>>>>> already?
>>>> Then it will be more complex and I can't ensure any good result.
>>>
>>> It was already pretty complex and results were never guaranteed :)
>>>
>>>
>>>> Firstly you need to find an older root tree:
>>>>
>>>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>>>                  backup_tree_root:       30687232        gen: 2317
>>>>   level: 0
>>>>                  backup_tree_root:       30834688        gen: 2318
>>>>   level: 0
>>>>                  backup_tree_root:       30408704        gen: 2319
>>>>   level: 0
>>>>                  backup_tree_root:       31031296        gen: 2316
>>>>   level: 0
>>>>
>>>> Then try the bytenr in their reverse generation order in btrfs ins
>>>> dump-tree:
>>>> (The latest one should be the current root, thus you can skip it)
>>>>
>>>> # btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>>> -A 5
>>>>
>>>> Then grab the bytenr of the subvolume 257, then pass the bytenr to
>>>> btrfs-restore:
>>>>
>>>> # btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>
>>>>
>>>> The chance is already pretty low, good luck.
>>>>
>>>> Thanks,
>>>> Qu
>>>
>>>
>>> When I run dump-tree, I get this:
>>>
>>> # btrfs ins dump-tree -b 787070976 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>> -A 5
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> Csum didn't match
>>> WARNING: could not setup extent tree, skipping it
>>>
>>> The same exact offset fails checksum for all 4 backup roots, any way
>>> around this?
>
> When without the grep, is there any output?


# btrfs ins dump-tree -b 787070976 /dev/sdb3
btrfs-progs v5.13.1
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
node 787070976 level 1 items 7 free space 486 generation 166932 owner ROOT_TREE
node 787070976 flags 0x1(WRITTEN) backref revision 1
fs uuid 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
chunk uuid a8a06213-eebf-40d8-ab1a-914f621fbe1c
    key (EXTENT_TREE ROOT_ITEM 0) block 787087360 gen 166932
    key (277 INODE_ITEM 0) block 197491195904 gen 56511
    key (305 INODE_ITEM 0) block 778174464 gen 166929
    key (366 EXTENT_DATA 0) block 197491949568 gen 56511
    key (428 INODE_ITEM 0) block 36175872 gen 166829
    key (476 INODE_ITEM 0) block 787234816 gen 166932
    key (FREE_SPACE UNTYPED 99888398336) block 780812288 gen 166929

# btrfs ins dump-tree -b 778108928 /dev/sdb3
btrfs-progs v5.13.1
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
node 778108928 level 1 items 7 free space 486 generation 166929 owner ROOT_TREE
node 778108928 flags 0x1(WRITTEN) backref revision 1
fs uuid 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
chunk uuid a8a06213-eebf-40d8-ab1a-914f621fbe1c
    key (EXTENT_TREE ROOT_ITEM 0) block 778125312 gen 166929
    key (277 INODE_ITEM 0) block 197491195904 gen 56511
    key (305 INODE_ITEM 0) block 778174464 gen 166929
    key (366 EXTENT_DATA 0) block 197491949568 gen 56511
    key (428 INODE_ITEM 0) block 36175872 gen 166829
    key (476 INODE_ITEM 0) block 780730368 gen 166929
    key (FREE_SPACE UNTYPED 99888398336) block 780812288 gen 166929

..and 2 more from other backup_tree_roots



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-28 23:16                                     ` Konstantin Svist
@ 2021-08-28 23:30                                       ` Qu Wenruo
  2021-08-29  6:34                                         ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-28 23:30 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS



On 2021/8/29 上午7:16, Konstantin Svist wrote:
> On 8/27/21 23:16, Qu Wenruo wrote:
>>
>>
>> On 2021/8/28 下午1:57, Konstantin Svist wrote:
>>> On 8/20/21 19:56, Konstantin Svist wrote:
>>>> On 8/11/21 18:18, Qu Wenruo wrote:
>>>>>
>>>>> On 2021/8/12 上午6:34, Konstantin Svist wrote:
>>>>>> Shouldn't there be an earlier generation of this subvolume's tree
>>>>>> block
>>>>>> somewhere on the disk? Would all of them have gotten overwritten
>>>>>> already?
>>>>> Then it will be more complex and I can't ensure any good result.
>>>>
>>>> It was already pretty complex and results were never guaranteed :)
>>>>
>>>>
>>>>> Firstly you need to find an older root tree:
>>>>>
>>>>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>>>>                   backup_tree_root:       30687232        gen: 2317
>>>>>    level: 0
>>>>>                   backup_tree_root:       30834688        gen: 2318
>>>>>    level: 0
>>>>>                   backup_tree_root:       30408704        gen: 2319
>>>>>    level: 0
>>>>>                   backup_tree_root:       31031296        gen: 2316
>>>>>    level: 0
>>>>>
>>>>> Then try the bytenr in their reverse generation order in btrfs ins
>>>>> dump-tree:
>>>>> (The latest one should be the current root, thus you can skip it)
>>>>>
>>>>> # btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>>>> -A 5
>>>>>
>>>>> Then grab the bytenr of the subvolume 257, then pass the bytenr to
>>>>> btrfs-restore:
>>>>>
>>>>> # btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>
>>>>>
>>>>> The chance is already pretty low, good luck.
>>>>>
>>>>> Thanks,
>>>>> Qu
>>>>
>>>>
>>>> When I run dump-tree, I get this:
>>>>
>>>> # btrfs ins dump-tree -b 787070976 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>>> -A 5
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> Csum didn't match
>>>> WARNING: could not setup extent tree, skipping it
>>>>
>>>> The same exact offset fails checksum for all 4 backup roots, any way
>>>> around this?
>>
>> When without the grep, is there any output?
>
>
> # btrfs ins dump-tree -b 787070976 /dev/sdb3
> btrfs-progs v5.13.1
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
> node 787070976 level 1 items 7 free space 486 generation 166932 owner
> ROOT_TREE
> node 787070976 flags 0x1(WRITTEN) backref revision 1
> fs uuid 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
> chunk uuid a8a06213-eebf-40d8-ab1a-914f621fbe1c
>      key (EXTENT_TREE ROOT_ITEM 0) block 787087360 gen 166932
>      key (277 INODE_ITEM 0) block 197491195904 gen 56511
>      key (305 INODE_ITEM 0) block 778174464 gen 166929
>      key (366 EXTENT_DATA 0) block 197491949568 gen 56511
>      key (428 INODE_ITEM 0) block 36175872 gen 166829
>      key (476 INODE_ITEM 0) block 787234816 gen 166932
>      key (FREE_SPACE UNTYPED 99888398336) block 780812288 gen 166929

My bad, I forgot to add the "--follow" option for "btrfs ins dump-tree".

So the full command is:

$ btrfs ins dump-tree -b 787070976 --follow /dev/sdb3

Then you should be able to find "(257 ROOT_ITEM".

Thanks,
Qu

>
> # btrfs ins dump-tree -b 778108928 /dev/sdb3
> btrfs-progs v5.13.1
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
> node 778108928 level 1 items 7 free space 486 generation 166929 owner
> ROOT_TREE
> node 778108928 flags 0x1(WRITTEN) backref revision 1
> fs uuid 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
> chunk uuid a8a06213-eebf-40d8-ab1a-914f621fbe1c
>      key (EXTENT_TREE ROOT_ITEM 0) block 778125312 gen 166929
>      key (277 INODE_ITEM 0) block 197491195904 gen 56511
>      key (305 INODE_ITEM 0) block 778174464 gen 166929
>      key (366 EXTENT_DATA 0) block 197491949568 gen 56511
>      key (428 INODE_ITEM 0) block 36175872 gen 166829
>      key (476 INODE_ITEM 0) block 780730368 gen 166929
>      key (FREE_SPACE UNTYPED 99888398336) block 780812288 gen 166929
>
> ..and 2 more from other backup_tree_roots
>
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-28 23:30                                       ` Qu Wenruo
@ 2021-08-29  6:34                                         ` Konstantin Svist
  2021-08-29  7:19                                           ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-29  6:34 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

On 8/28/21 16:30, Qu Wenruo wrote:
>
>
> On 2021/8/29 上午7:16, Konstantin Svist wrote:
>> On 8/27/21 23:16, Qu Wenruo wrote:
>>>
>>>
>>> On 2021/8/28 下午1:57, Konstantin Svist wrote:
>>>> On 8/20/21 19:56, Konstantin Svist wrote:
>>>>> On 8/11/21 18:18, Qu Wenruo wrote:
>>>>>>
>>>>>> On 2021/8/12 上午6:34, Konstantin Svist wrote:
>>>>>>> Shouldn't there be an earlier generation of this subvolume's tree
>>>>>>> block
>>>>>>> somewhere on the disk? Would all of them have gotten overwritten
>>>>>>> already?
>>>>>> Then it will be more complex and I can't ensure any good result.
>>>>>
>>>>> It was already pretty complex and results were never guaranteed :)
>>>>>
>>>>>
>>>>>> Firstly you need to find an older root tree:
>>>>>>
>>>>>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>>>>>                   backup_tree_root:       30687232        gen: 2317
>>>>>>    level: 0
>>>>>>                   backup_tree_root:       30834688        gen: 2318
>>>>>>    level: 0
>>>>>>                   backup_tree_root:       30408704        gen: 2319
>>>>>>    level: 0
>>>>>>                   backup_tree_root:       31031296        gen: 2316
>>>>>>    level: 0
>>>>>>
>>>>>> Then try the bytenr in their reverse generation order in btrfs ins
>>>>>> dump-tree:
>>>>>> (The latest one should be the current root, thus you can skip it)
>>>>>>
>>>>>> # btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>>>>> -A 5
>>>>>>
>>>>>> Then grab the bytenr of the subvolume 257, then pass the bytenr to
>>>>>> btrfs-restore:
>>>>>>
>>>>>> # btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>
>>>>>>
>>>>>> The chance is already pretty low, good luck.
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>
>>>>>
>>>>> When I run dump-tree, I get this:
>>>>>
>>>>> # btrfs ins dump-tree -b 787070976 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>>>> -A 5
>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found
>>>>> 0xc375d6b6
>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found
>>>>> 0xc375d6b6
>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found
>>>>> 0xc375d6b6
>>>>> Csum didn't match
>>>>> WARNING: could not setup extent tree, skipping it
>>>>>
>>>>> The same exact offset fails checksum for all 4 backup roots, any way
>>>>> around this?
>>>
>>> When without the grep, is there any output?
>>
>>
>> # btrfs ins dump-tree -b 787070976 /dev/sdb3
>> btrfs-progs v5.13.1
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> Csum didn't match
>> WARNING: could not setup extent tree, skipping it
>> node 787070976 level 1 items 7 free space 486 generation 166932 owner
>> ROOT_TREE
>> node 787070976 flags 0x1(WRITTEN) backref revision 1
>> fs uuid 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
>> chunk uuid a8a06213-eebf-40d8-ab1a-914f621fbe1c
>>      key (EXTENT_TREE ROOT_ITEM 0) block 787087360 gen 166932
>>      key (277 INODE_ITEM 0) block 197491195904 gen 56511
>>      key (305 INODE_ITEM 0) block 778174464 gen 166929
>>      key (366 EXTENT_DATA 0) block 197491949568 gen 56511
>>      key (428 INODE_ITEM 0) block 36175872 gen 166829
>>      key (476 INODE_ITEM 0) block 787234816 gen 166932
>>      key (FREE_SPACE UNTYPED 99888398336) block 780812288 gen 166929
>
> My bad, I forgot to add "--follow" option for "btrfs ins dump-tree"
>
> So the full command is:
>
> $ btrfs ins dump-tree -b 787070976 --follow /dev/sdb3
>
> Then you should be able to find "(257 ROOT_ITEM".
>

# btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
    item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
        generation 166932 root_dirid 256 bytenr 786726912 level 2 refs 1
        lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
        uuid 1ac60d28-6f11-2842-aca2-b1574b108336
        ctransid 166932 otransid 8 stransid 0 rtransid 0
        ctime 1627959592.718936423 (2021-08-02 19:59:52)


# btrfs restore -Divf 786726912 /dev/sdb3 .
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
This is a dry-run, no files are going to be restored
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
bad tree block 920748032, bytenr mismatch, want=920748032, have=0
ERROR: search for next directory entry failed: -5


The 1st set of "checksum verify failed" lines has different addresses,
but the last set always has 920748032



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-29  6:34                                         ` Konstantin Svist
@ 2021-08-29  7:19                                           ` Qu Wenruo
  2021-08-29 20:02                                             ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-29  7:19 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS



On 2021/8/29 下午2:34, Konstantin Svist wrote:
> On 8/28/21 16:30, Qu Wenruo wrote:
>>
>>
>> On 2021/8/29 上午7:16, Konstantin Svist wrote:
>>> On 8/27/21 23:16, Qu Wenruo wrote:
>>>>
>>>>
>>>> On 2021/8/28 下午1:57, Konstantin Svist wrote:
>>>>> On 8/20/21 19:56, Konstantin Svist wrote:
>>>>>> On 8/11/21 18:18, Qu Wenruo wrote:
>>>>>>>
>>>>>>> On 2021/8/12 上午6:34, Konstantin Svist wrote:
>>>>>>>> Shouldn't there be an earlier generation of this subvolume's tree
>>>>>>>> block
>>>>>>>> somewhere on the disk? Would all of them have gotten overwritten
>>>>>>>> already?
>>>>>>> Then it will be more complex and I can't ensure any good result.
>>>>>>
>>>>>> It was already pretty complex and results were never guaranteed :)
>>>>>>
>>>>>>
>>>>>>> Firstly you need to find an older root tree:
>>>>>>>
>>>>>>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>>>>>>                    backup_tree_root:       30687232        gen: 2317
>>>>>>>     level: 0
>>>>>>>                    backup_tree_root:       30834688        gen: 2318
>>>>>>>     level: 0
>>>>>>>                    backup_tree_root:       30408704        gen: 2319
>>>>>>>     level: 0
>>>>>>>                    backup_tree_root:       31031296        gen: 2316
>>>>>>>     level: 0
>>>>>>>
>>>>>>> Then try the bytenr in their reverse generation order in btrfs ins
>>>>>>> dump-tree:
>>>>>>> (The latest one should be the current root, thus you can skip it)
>>>>>>>
>>>>>>> # btrfs ins dump-tree -b 30834688 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>>>>>> -A 5
>>>>>>>
>>>>>>> Then grab the bytenr of the subvolume 257, then pass the bytenr to
>>>>>>> btrfs-restore:
>>>>>>>
>>>>>>> # btrfs-restore -f <bytenr> /dev/sdb3 <restore_path>
>>>>>>>
>>>>>>> The chance is already pretty low, good luck.
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Qu
>>>>>>
>>>>>>
>>>>>> When I run dump-tree, I get this:
>>>>>>
>>>>>> # btrfs ins dump-tree -b 787070976 /dev/sdb3 | grep "(257 ROOT_ITEM"
>>>>>> -A 5
>>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found
>>>>>> 0xc375d6b6
>>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found
>>>>>> 0xc375d6b6
>>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found
>>>>>> 0xc375d6b6
>>>>>> Csum didn't match
>>>>>> WARNING: could not setup extent tree, skipping it
>>>>>>
>>>>>> The same exact offset fails checksum for all 4 backup roots, any way
>>>>>> around this?
>>>>
>>>> When without the grep, is there any output?
>>>
>>>
>>> # btrfs ins dump-tree -b 787070976 /dev/sdb3
>>> btrfs-progs v5.13.1
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> Csum didn't match
>>> WARNING: could not setup extent tree, skipping it
>>> node 787070976 level 1 items 7 free space 486 generation 166932 owner
>>> ROOT_TREE
>>> node 787070976 flags 0x1(WRITTEN) backref revision 1
>>> fs uuid 44a768e0-28ba-4c6a-8eef-18ffa8c27d1b
>>> chunk uuid a8a06213-eebf-40d8-ab1a-914f621fbe1c
>>>       key (EXTENT_TREE ROOT_ITEM 0) block 787087360 gen 166932
>>>       key (277 INODE_ITEM 0) block 197491195904 gen 56511
>>>       key (305 INODE_ITEM 0) block 778174464 gen 166929
>>>       key (366 EXTENT_DATA 0) block 197491949568 gen 56511
>>>       key (428 INODE_ITEM 0) block 36175872 gen 166829
>>>       key (476 INODE_ITEM 0) block 787234816 gen 166932
>>>       key (FREE_SPACE UNTYPED 99888398336) block 780812288 gen 166929
>>
>> My bad, I forgot to add "--follow" option for "btrfs ins dump-tree"
>>
>> So the full command is:
>>
>> $ btrfs ins dump-tree -b 787070976 --follow /dev/sdb3
>>
>> Then you should be able to find "(257 ROOT_ITEM".
>>
>
> # btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257
> ROOT_ITEM" -A 5
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
>      item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>          generation 166932 root_dirid 256 bytenr 786726912 level 2 refs 1
>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
>          uuid 1ac60d28-6f11-2842-aca2-b1574b108336
>          ctransid 166932 otransid 8 stransid 0 rtransid 0
>          ctime 1627959592.718936423 (2021-08-02 19:59:52)
>
>
> # btrfs restore -Divf 786726912 /dev/sdb3 .
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
> This is a dry-run, no files are going to be restored
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> bad tree block 920748032, bytenr mismatch, want=920748032, have=0
> ERROR: search for next directory entry failed: -5

This all-zero read means the on-disk data was wiped.

Either the write never reached the disk, or it got discarded.

Neither is a good thing.

>
>
> 1st set of "checksum verify failed" has different addresses, but the
> last set always has 920748032

Have you tried other bytenrs from find-root?
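
Just a sketch of what I mean, using one of the candidates
btrfs-find-root printed earlier ("-t" makes restore read the root tree
from that bytenr):

# btrfs restore -t 786399232 -Div /dev/sdb3 .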

Thanks,
Qu
>
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-29  7:19                                           ` Qu Wenruo
@ 2021-08-29 20:02                                             ` Konstantin Svist
  2021-08-30  0:22                                               ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-29 20:02 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

On 8/29/21 00:19, Qu Wenruo wrote:
>
>
> On 2021/8/29 下午2:34, Konstantin Svist wrote:
>>
>> # btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257
>> ROOT_ITEM" -A 5
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> Csum didn't match
>> WARNING: could not setup extent tree, skipping it
>>      item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>>          generation 166932 root_dirid 256 bytenr 786726912 level 2
>> refs 1
>>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags
>> 0x0(none)
>>          uuid 1ac60d28-6f11-2842-aca2-b1574b108336
>>          ctransid 166932 otransid 8 stransid 0 rtransid 0
>>          ctime 1627959592.718936423 (2021-08-02 19:59:52)
>>
>>
>> # btrfs restore -Divf 786726912 /dev/sdb3 .
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>> Csum didn't match
>> WARNING: could not setup extent tree, skipping it
>> This is a dry-run, no files are going to be restored
>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>> bad tree block 920748032, bytenr mismatch, want=920748032, have=0
>> ERROR: search for next directory entry failed: -5
>
> This all zero means the data on-disk are wiped.
>
> Either not reaching disk or discarded.
>
> Neither is a good thing.
>
>>
>>
>> 1st set of "checksum verify failed" has different addresses, but the
>> last set always has 920748032
>
> Have you tried other bytenrs from find-root?


Is it normal that they all fail on the same exact block? Sounds
suspicious to me.


The other 3 attempts:


# btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
        backup_tree_root:    787070976    gen: 166932    level: 1
        backup_tree_root:    778108928    gen: 166929    level: 1
        backup_tree_root:    781172736    gen: 166930    level: 1
        backup_tree_root:    786399232    gen: 166931    level: 1

# btrfs ins dump-tree -b 786399232 --follow /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
[...]
    item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
        generation 166931 root_dirid 256 bytenr 781467648 level 2 refs 1
        lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)

[...]

# btrfs restore -Divf 781467648 /dev/sdb3 .
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
This is a dry-run, no files are going to be restored
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
bad tree block 920748032, bytenr mismatch, want=920748032, have=0
ERROR: search for next directory entry failed: -5

# btrfs ins dump-tree -b 781172736 --follow /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
[...]
    item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
        generation 166930 root_dirid 256 bytenr 780828672 level 2 refs 1
        lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
[...]


# btrfs restore -Divf 780828672 /dev/sdb3 .
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
This is a dry-run, no files are going to be restored
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
bad tree block 920748032, bytenr mismatch, want=920748032, have=0
ERROR: search for next directory entry failed: -5

# btrfs ins dump-tree -b 778108928 --follow /dev/sdb3 | grep "(257 ROOT_ITEM" -A 5
[...]

   item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
        generation 166929 root_dirid 256 bytenr 102760448 level 2 refs 1
        lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
[...]


# btrfs restore -Divf 102760448 /dev/sdb3 .
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
This is a dry-run, no files are going to be restored
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
bad tree block 920748032, bytenr mismatch, want=920748032, have=0
ERROR: search for next directory entry failed: -5


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-29 20:02                                             ` Konstantin Svist
@ 2021-08-30  0:22                                               ` Qu Wenruo
  2021-08-30  3:48                                                 ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-30  0:22 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS



On 2021/8/30 上午4:02, Konstantin Svist wrote:
> On 8/29/21 00:19, Qu Wenruo wrote:
>>
>>
>> On 2021/8/29 下午2:34, Konstantin Svist wrote:
>>>
>>> # btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257
>>> ROOT_ITEM" -A 5
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> Csum didn't match
>>> WARNING: could not setup extent tree, skipping it
>>>       item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>>>           generation 166932 root_dirid 256 bytenr 786726912 level 2
>>> refs 1
>>>           lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags
>>> 0x0(none)
>>>           uuid 1ac60d28-6f11-2842-aca2-b1574b108336
>>>           ctransid 166932 otransid 8 stransid 0 rtransid 0
>>>           ctime 1627959592.718936423 (2021-08-02 19:59:52)
>>>
>>>
>>> # btrfs restore -Divf 786726912 /dev/sdb3 .
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> Csum didn't match
>>> WARNING: could not setup extent tree, skipping it
>>> This is a dry-run, no files are going to be restored
>>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>>> bad tree block 920748032, bytenr mismatch, want=920748032, have=0
>>> ERROR: search for next directory entry failed: -5
>>
>> This all zero means the data on-disk are wiped.
>>
>> Either not reaching disk or discarded.
>>
>> Neither is a good thing.
>>
>>>
>>>
>>> 1st set of "checksum verify failed" has different addresses, but the
>>> last set always has 920748032
>>
>> Have you tried other bytenrs from find-root?
>
>
> Is it normal that they all fail on the same exact block? Sounds
> suspicious to me.

This means some higher-level tree block is corrupted.

Only manual inspection can tell for sure.

But this is definitely not a good thing for your data salvage...
>
>
> The other 3 attempts:
>
>
> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>          backup_tree_root:    787070976    gen: 166932    level: 1
>          backup_tree_root:    778108928    gen: 166929    level: 1
>          backup_tree_root:    781172736    gen: 166930    level: 1
>          backup_tree_root:    786399232    gen: 166931    level: 1
>
> # btrfs ins dump-tree -b 786399232 --follow /dev/sdb3 | grep "(257
> ROOT_ITEM" -A 5
> [...]
>      item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>          generation 166931 root_dirid 256 bytenr 781467648 level 2 refs 1
>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
>
> [...]

To manually inspect the tree, you can use btrfs-inspect to see what's
wrong with the tree blocks.

# btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3
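
E.g., to spot the first unreadable block quickly (the errors go to
stderr, so merge the streams first; just a suggestion):

# btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3 2>&1 | \
        grep -m1 -E "verify failed|bad tree block"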

This also means that, even if the remaining part is fine, a big chunk
of data can no longer be recovered.

Thanks,
Qu

>
> # btrfs restore -Divf 781467648 /dev/sdb3 .
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
> This is a dry-run, no files are going to be restored
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> bad tree block 920748032, bytenr mismatch, want=920748032, have=0
> ERROR: search for next directory entry failed: -5
>
> # btrfs ins dump-tree -b 781172736 --follow /dev/sdb3 | grep "(257
> ROOT_ITEM" -A 5
> [...]
>      item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>          generation 166930 root_dirid 256 bytenr 780828672 level 2 refs 1
>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
> [...]
>
>
> # btrfs restore -Divf 780828672 /dev/sdb3 .
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
> This is a dry-run, no files are going to be restored
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> bad tree block 920748032, bytenr mismatch, want=920748032, have=0
> ERROR: search for next directory entry failed: -5
>
> # btrfs ins dump-tree -b 778108928 --follow /dev/sdb3 | grep "(257
> ROOT_ITEM" -A 5
> [...]
>
>     item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>          generation 166929 root_dirid 256 bytenr 102760448 level 2 refs 1
>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
> [...]
>
>
> # btrfs restore -Divf 102760448 /dev/sdb3 .
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it
> This is a dry-run, no files are going to be restored
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
> bad tree block 920748032, bytenr mismatch, want=920748032, have=0
> ERROR: search for next directory entry failed: -5
>

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-30  0:22                                               ` Qu Wenruo
@ 2021-08-30  3:48                                                 ` Konstantin Svist
  2021-08-30  7:20                                                   ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-08-30  3:48 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

On 8/29/21 17:22, Qu Wenruo wrote:
>
>
> On 2021/8/30 上午4:02, Konstantin Svist wrote:
>> On 8/29/21 00:19, Qu Wenruo wrote:
>>>
>>>
>>> On 2021/8/29 下午2:34, Konstantin Svist wrote:
>>>>
>>>> # btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257
>>>> ROOT_ITEM" -A 5
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> Csum didn't match
>>>> WARNING: could not setup extent tree, skipping it
>>>>       item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>>>>           generation 166932 root_dirid 256 bytenr 786726912 level 2
>>>> refs 1
>>>>           lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags
>>>> 0x0(none)
>>>>           uuid 1ac60d28-6f11-2842-aca2-b1574b108336
>>>>           ctransid 166932 otransid 8 stransid 0 rtransid 0
>>>>           ctime 1627959592.718936423 (2021-08-02 19:59:52)
>>>>
>>>>
>>>> # btrfs restore -Divf 786726912 /dev/sdb3 .
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> Csum didn't match
>>>> WARNING: could not setup extent tree, skipping it
>>>> This is a dry-run, no files are going to be restored
>>>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>>>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>>>> checksum verify failed on 920748032 wanted 0x00000000 found 0xb6bde3e4
>>>> bad tree block 920748032, bytenr mismatch, want=920748032, have=0
>>>> ERROR: search for next directory entry failed: -5
>>>
>>> This all zero means the data on-disk are wiped.
>>>
>>> Either not reaching disk or discarded.
>>>
>>> Neither is a good thing.
>>>
>>>>
>>>>
>>>> 1st set of "checksum verify failed" has different addresses, but the
>>>> last set always has 920748032
>>>
>>> Have you tried other bytenrs from find-root?
>>
>>
>> Is it normal that they all fail on the same exact block? Sounds
>> suspicious to me.
>
> This means some higher tree block is corrupted.
>
> Only manual inspection can determine.
>
> But this is definite not a good thing for your data salvage...
>>
>>
>> The other 3 attempts:
>>
>>
>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>          backup_tree_root:    787070976    gen: 166932    level: 1
>>          backup_tree_root:    778108928    gen: 166929    level: 1
>>          backup_tree_root:    781172736    gen: 166930    level: 1
>>          backup_tree_root:    786399232    gen: 166931    level: 1
>>
>> # btrfs ins dump-tree -b 786399232 --follow /dev/sdb3 | grep "(257
>> ROOT_ITEM" -A 5
>> [...]
>>      item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>>          generation 166931 root_dirid 256 bytenr 781467648 level 2
>> refs 1
>>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags
>> 0x0(none)
>>
>> [...]
>
> To manually inspect the tree, you can use btrfs-inspect to see what's
> wrong with the tree blocks.
>
> # btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3
>
> This also means, even the remaining part is fine, a big chunk of data
> can no longer be recovered. 


I'm hoping to find several important files at this point, definitely
don't need the whole FS..

So when I run this, I get about 190 lines like

    key (256 INODE_ITEM 0) block 920748032 gen 166878
    key (52607 DIR_ITEM 988524606) block 1078902784 gen 163454
    key (52607 DIR_INDEX 18179) block 189497344 gen 30
    key (174523 INODE_REF 52607) block 185942016 gen 30
    key (361729 EXTENT_DATA 0) block 785907712 gen 166931
    key (381042 XATTR_ITEM 3817753667) block 1027391488 gen 120910


I tried to pass these into restore, but it's not liking it:

# btrfs restore -Divf 196816535552 /dev/sdb3 .
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
Csum didn't match
WARNING: could not setup extent tree, skipping it
This is a dry-run, no files are going to be restored
Done searching




^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: Trying to recover data from SSD
  2021-08-30  3:48                                                 ` Konstantin Svist
@ 2021-08-30  7:20                                                   ` Qu Wenruo
       [not found]                                                     ` <aa33b83f-b822-b1d8-9fe4-5cf4ab45c3e1@gmail.com>
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-30  7:20 UTC (permalink / raw)
  To: Konstantin Svist; +Cc: Btrfs BTRFS



On 2021/8/30 上午11:48, Konstantin Svist wrote:
> On 8/29/21 17:22, Qu Wenruo wrote:
>>
>>
...
>>
>> But this is definite not a good thing for your data salvage...
>>>
>>>
>>> The other 3 attempts:
>>>
>>>
>>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root
>>>           backup_tree_root:    787070976    gen: 166932    level: 1
>>>           backup_tree_root:    778108928    gen: 166929    level: 1
>>>           backup_tree_root:    781172736    gen: 166930    level: 1
>>>           backup_tree_root:    786399232    gen: 166931    level: 1
>>>
>>> # btrfs ins dump-tree -b 786399232 --follow /dev/sdb3 | grep "(257
>>> ROOT_ITEM" -A 5
>>> [...]
>>>       item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>>>           generation 166931 root_dirid 256 bytenr 781467648 level 2
>>> refs 1
>>>           lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags
>>> 0x0(none)
>>>
>>> [...]
>>
>> To manually inspect the tree, you can use btrfs-inspect to see what's
>> wrong with the tree blocks.
>>
>> # btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3
>>
>> This also means, even the remaining part is fine, a big chunk of data
>> can no longer be recovered.
>
>
> I'm hoping to find several important files at this point, definitely
> don't need the whole FS..
>
> So when I run this, I get about 190 lines like
>
>      key (256 INODE_ITEM 0) block 920748032 gen 166878
>      key (52607 DIR_ITEM 988524606) block 1078902784 gen 163454
>      key (52607 DIR_INDEX 18179) block 189497344 gen 30
>      key (174523 INODE_REF 52607) block 185942016 gen 30
>      key (361729 EXTENT_DATA 0) block 785907712 gen 166931
>      key (381042 XATTR_ITEM 3817753667) block 1027391488 gen 120910

Can you provide the full output? (both stdout and stderr)

If you're concerned about the filenames, "btrfs ins dump-tree" has
--hide-names to mask all the file/dir names.
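e.g. something like this (just a sketch, combining --hide-names with the
options used earlier):

# btrfs ins dump-tree -b 781467648 --follow --bfs --hide-names /dev/sdb3 > dump.masked.txt 2>&1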

190 lines looks far fewer than expected, which means some tree blocks
were not read out properly.

You may want to try the other bytenrs to see which one gives the
largest amount of output (and thus the best chance of restoring some data).
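For example, a rough sketch like this (untested; OTHER_BYTENRS stands for
the (257 ROOT_ITEM 0) bytenrs you find under the other backup tree roots)
just counts how many lines each dump produces:

# for b in 781467648 OTHER_BYTENRS; do echo "=== $b ==="; btrfs ins dump-tree -b "$b" --follow --bfs /dev/sdb3 2>/dev/null | wc -l; done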
>
>
> I tried to pass these into restore, but it's not liking it:
>
> # btrfs restore -Divf 196816535552 /dev/sdb3 .

Where is the bytenr 196816535552 from?

> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
> Csum didn't match
> WARNING: could not setup extent tree, skipping it

This part is expected; it's just trying to read the extent tree, which
was manually corrupted.

> This is a dry-run, no files are going to be restored
> Done searching

This, however, is not expected, as it doesn't even show any search
attempts. Is the bytenr from the subtree of subvolume 257?

Thanks,
Qu

>
>
>


* Re: Trying to recover data from SSD
       [not found]                                                     ` <aa33b83f-b822-b1d8-9fe4-5cf4ab45c3e1@gmail.com>
@ 2021-08-31 11:05                                                       ` Qu Wenruo
  2021-09-01  1:38                                                         ` Konstantin Svist
  0 siblings, 1 reply; 30+ messages in thread
From: Qu Wenruo @ 2021-08-31 11:05 UTC (permalink / raw)
  To: Konstantin Svist, linux-btrfs



On 2021/8/31 下午2:25, Konstantin Svist wrote:
> On 8/30/21 00:20, Qu Wenruo wrote:
>>
>> On 2021/8/30 上午11:48, Konstantin Svist wrote:
>>>
>>> I'm hoping to find several important files at this point, definitely
>>> don't need the whole FS..
>>>
>>> So when I run this, I get about 190 lines like
>>>
>>>       key (256 INODE_ITEM 0) block 920748032 gen 166878
>>>       key (52607 DIR_ITEM 988524606) block 1078902784 gen 163454
>>>       key (52607 DIR_INDEX 18179) block 189497344 gen 30
>>>       key (174523 INODE_REF 52607) block 185942016 gen 30
>>>       key (361729 EXTENT_DATA 0) block 785907712 gen 166931
>>>       key (381042 XATTR_ITEM 3817753667) block 1027391488 gen 120910
>>
>> Can you provide the full output? (both stdout and stderr)
>>
>> If you're concerning about the filenames, "btrfs ins dump-tree" has
>> --hide-names to mask all the file/dir names.
>>
>> 190 lines look too few than expected, thus means some tree blocks are
>> not read out properly.
>>
>> You may want to try other bytenr to see which gives the most amount of
>> output (thus most possible to restore some data).
>
> ## Naming these BTR1..4
> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root | sort -rk 4
>          backup_tree_root:    787070976    gen: 166932    level: 1   ### BTR1
>          backup_tree_root:    786399232    gen: 166931    level: 1   ### BTR2
>          backup_tree_root:    781172736    gen: 166930    level: 1   ### BTR3
>          backup_tree_root:    778108928    gen: 166929    level: 1   ### BTR4
>
> ### BTR1:
> # btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257
> ROOT_ITEM" -A 5
> ...
>     item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>          generation 166932 root_dirid 256 bytenr 786726912 level 2 refs
> 1      ### naming this RI1
>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags 0x0(none)
> ...
>
> BTR1 -> RI1 786726912
> BTR2 -> RI2 781467648
> BTR3 -> RI3 780828672
> BTR4 -> RI3 102760448
>
>> ### inspecting RI2
> # btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3
>> RI2.inspect.stdout 2>RI2.inspect.stderr
> <outputs attached>
>
> One of the lines of this output is
>          key (2334458 DIR_ITEM 3564787518) block 196816535552 gen 56498
>
>>> I tried to pass these into restore, but it's not liking it:
>>>
>>> # btrfs restore -Divf 196816535552 /dev/sdb3 .
>>
>> Where the bytenr 196816535552 is from?
>
> ^^^ output from inspect RI2 -> DIR_ITEM. Probably wrong usage? :)

OK, that bytenr seems to be outside of what btrfs restore can handle.
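If I'm reading the restore options right, -f expects the bytenr of a
subvolume tree root (and -t a tree root bytenr), not a leaf block, so a
guess at the intended invocation would be something like (untested; the
output directory is just a placeholder):

# btrfs restore -D -i -v -t 786399232 -f 781467648 /dev/sdb3 ./restored

i.e. read the root tree from the gen 166931 backup and only restore files
under the subvolume tree at 781467648. Whether it gets past the corrupted
tree blocks is another question.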

>
>
>>
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>> Csum didn't match
>>> WARNING: could not setup extent tree, skipping it
>>
>> This part is expected, it just tries to read extent tree which is
>> manually corrupted.
>>
>>> This is a dry-run, no files are going to be restored
>>> Done searching
>>
>> While this is not expected, as it doesn't even show any research
>> attempts, is the bytenr from the subtree of the subvolume 257?
>
>
> Interestingly, I tried --dfs instead of --bfs and there are a lot more
> entries, including filenames
>

BTW, thanks to the stdout and stderr you attached, it's now clear exactly what's going wrong.

The offending tree block, 920748032, is the first one.

With --dfs, it goes through each child until it reaches the leaves
before moving on to the next tree block.

And if the first child is corrupted, then it gives up immediately.

That's why I explicitly specified --bfs, which skips the corrupted
child (and its children) and goes directly to the next tree blocks,
giving the best chance of recovering the contents.

In the worst case, I guess you will have to use "btrfs ins dump-tree" to
locate your files, and then "btrfs-map-logical" to grab the data from
disk directly.

Meanwhile I guess I should put some time into enhancing btrfs-restore to
handle the corruption you're hitting, so that it can continue to the next
good tree block without being stopped by early corrupted tree blocks.

Thanks,
Qu



* Re: Trying to recover data from SSD
  2021-08-31 11:05                                                       ` Qu Wenruo
@ 2021-09-01  1:38                                                         ` Konstantin Svist
  2021-09-01  1:47                                                           ` Qu Wenruo
  0 siblings, 1 reply; 30+ messages in thread
From: Konstantin Svist @ 2021-09-01  1:38 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

On 8/31/21 04:05, Qu Wenruo wrote:
>
>
> On 2021/8/31 下午2:25, Konstantin Svist wrote:
>> On 8/30/21 00:20, Qu Wenruo wrote:
>>>
>>> On 2021/8/30 上午11:48, Konstantin Svist wrote:
>>>>
>>>> I'm hoping to find several important files at this point, definitely
>>>> don't need the whole FS..
>>>>
>>>> So when I run this, I get about 190 lines like
>>>>
>>>>       key (256 INODE_ITEM 0) block 920748032 gen 166878
>>>>       key (52607 DIR_ITEM 988524606) block 1078902784 gen 163454
>>>>       key (52607 DIR_INDEX 18179) block 189497344 gen 30
>>>>       key (174523 INODE_REF 52607) block 185942016 gen 30
>>>>       key (361729 EXTENT_DATA 0) block 785907712 gen 166931
>>>>       key (381042 XATTR_ITEM 3817753667) block 1027391488 gen 120910
>>>
>>> Can you provide the full output? (both stdout and stderr)
>>>
>>> If you're concerning about the filenames, "btrfs ins dump-tree" has
>>> --hide-names to mask all the file/dir names.
>>>
>>> 190 lines look too few than expected, thus means some tree blocks are
>>> not read out properly.
>>>
>>> You may want to try other bytenr to see which gives the most amount of
>>> output (thus most possible to restore some data).
>>
>> ## Naming these BTR1..4
>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root | sort -rk 4
>>          backup_tree_root:    787070976    gen: 166932    level: 1  
>> ### BTR1
>>          backup_tree_root:    786399232    gen: 166931    level: 1  
>> ### BTR2
>>          backup_tree_root:    781172736    gen: 166930    level: 1  
>> ### BTR3
>>          backup_tree_root:    778108928    gen: 166929    level: 1  
>> ### BTR4
>>
>> ### BTR1:
>> # btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257
>> ROOT_ITEM" -A 5
>> ...
>>     item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>>          generation 166932 root_dirid 256 bytenr 786726912 level 2 refs
>> 1      ### naming this RI1
>>          lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags
>> 0x0(none)
>> ...
>>
>> BTR1 -> RI1 786726912
>> BTR2 -> RI2 781467648
>> BTR3 -> RI3 780828672
>> BTR4 -> RI3 102760448
>>
>> ### inspecting RI2
>> # btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3
>>> RI2.inspect.stdout 2>RI2.inspect.stderr
>> <outputs attached>
>>
>> One of the lines of this output is
>>          key (2334458 DIR_ITEM 3564787518) block 196816535552 gen 56498
>>
>>>> I tried to pass these into restore, but it's not liking it:
>>>>
>>>> # btrfs restore -Divf 196816535552 /dev/sdb3 .
>>>
>>> Where the bytenr 196816535552 is from?
>>
>> ^^^ output from inspect RI2 -> DIR_ITEM. Probably wrong usage? :)
>
> OK, that seems to be out of the way btrfs-restore can handle.
>
>>
>>
>>>
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>> Csum didn't match
>>>> WARNING: could not setup extent tree, skipping it
>>>
>>> This part is expected, it just tries to read extent tree which is
>>> manually corrupted.
>>>
>>>> This is a dry-run, no files are going to be restored
>>>> Done searching
>>>
>>> While this is not expected, as it doesn't even show any research
>>> attempts, is the bytenr from the subtree of the subvolume 257?
>>
>>
>> Interestingly, I tried --dfs instead of --bfs and there are a lot more
>> entries, including filenames
>>
>
> BTW, thanks to the output and stderr, it shows exactly what's going
> wrong.
>
> The offending tree block, 920748032, is the first one.
>
> If using --dfs, it will go through each child until reaches the leaves,
> before going to next tree block.
>
> And if the first child is corrupted, then it gives up immediately.
>
> That's why I'm explicitly specifying --bfs, which will skip the
> corrupted child (and its children) and go next tree blocks directly,
> thus have the best chance to recovery the contents.
>
> For the worst case, I guess you have to use "btrfs ins dump-tree" to
> recovery your files, and then "btrfs-map-logical" to grab the data from
> disk directly.
>
> Meanwhile I guess I should put some time to enhance btrfs-restore to
> handle the corruption you're hitting, so that we can continue to next
> good tree block, without being bothered by early corrupted tree blocks. 


Thanks again for looking into this!

Should I wait for a patch or is there something else I can do meanwhile?




* Re: Trying to recover data from SSD
  2021-09-01  1:38                                                         ` Konstantin Svist
@ 2021-09-01  1:47                                                           ` Qu Wenruo
  0 siblings, 0 replies; 30+ messages in thread
From: Qu Wenruo @ 2021-09-01  1:47 UTC (permalink / raw)
  To: Konstantin Svist, Qu Wenruo, linux-btrfs



On 2021/9/1 上午9:38, Konstantin Svist wrote:
> On 8/31/21 04:05, Qu Wenruo wrote:
>>
>>
>> On 2021/8/31 下午2:25, Konstantin Svist wrote:
>>> On 8/30/21 00:20, Qu Wenruo wrote:
>>>>
>>>> On 2021/8/30 上午11:48, Konstantin Svist wrote:
>>>>>
>>>>> I'm hoping to find several important files at this point, definitely
>>>>> don't need the whole FS..
>>>>>
>>>>> So when I run this, I get about 190 lines like
>>>>>
>>>>>        key (256 INODE_ITEM 0) block 920748032 gen 166878
>>>>>        key (52607 DIR_ITEM 988524606) block 1078902784 gen 163454
>>>>>        key (52607 DIR_INDEX 18179) block 189497344 gen 30
>>>>>        key (174523 INODE_REF 52607) block 185942016 gen 30
>>>>>        key (361729 EXTENT_DATA 0) block 785907712 gen 166931
>>>>>        key (381042 XATTR_ITEM 3817753667) block 1027391488 gen 120910
>>>>
>>>> Can you provide the full output? (both stdout and stderr)
>>>>
>>>> If you're concerning about the filenames, "btrfs ins dump-tree" has
>>>> --hide-names to mask all the file/dir names.
>>>>
>>>> 190 lines look too few than expected, thus means some tree blocks are
>>>> not read out properly.
>>>>
>>>> You may want to try other bytenr to see which gives the most amount of
>>>> output (thus most possible to restore some data).
>>>
>>> ## Naming these BTR1..4
>>> # btrfs ins dump-super -f /dev/sdb3 | grep backup_tree_root | sort -rk 4
>>>           backup_tree_root:    787070976    gen: 166932    level: 1
>>> ### BTR1
>>>           backup_tree_root:    786399232    gen: 166931    level: 1
>>> ### BTR2
>>>           backup_tree_root:    781172736    gen: 166930    level: 1
>>> ### BTR3
>>>           backup_tree_root:    778108928    gen: 166929    level: 1
>>> ### BTR4
>>>
>>> ### BTR1:
>>> # btrfs ins dump-tree -b 787070976 --follow /dev/sdb3 | grep "(257
>>> ROOT_ITEM" -A 5
>>> ...
>>>      item 13 key (257 ROOT_ITEM 0) itemoff 13147 itemsize 439
>>>           generation 166932 root_dirid 256 bytenr 786726912 level 2 refs
>>> 1      ### naming this RI1
>>>           lastsnap 56690 byte_limit 0 bytes_used 1013104640 flags
>>> 0x0(none)
>>> ...
>>>
>>> BTR1 -> RI1 786726912
>>> BTR2 -> RI2 781467648
>>> BTR3 -> RI3 780828672
>>> BTR4 -> RI3 102760448
>>>
>>> ### inspecting RI2
>>> # btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3
>>>> RI2.inspect.stdout 2>RI2.inspect.stderr
>>> <outputs attached>
>>>
>>> One of the lines of this output is
>>>           key (2334458 DIR_ITEM 3564787518) block 196816535552 gen 56498
>>>
>>>>> I tried to pass these into restore, but it's not liking it:
>>>>>
>>>>> # btrfs restore -Divf 196816535552 /dev/sdb3 .
>>>>
>>>> Where the bytenr 196816535552 is from?
>>>
>>> ^^^ output from inspect RI2 -> DIR_ITEM. Probably wrong usage? :)
>>
>> OK, that seems to be out of the way btrfs-restore can handle.
>>
>>>
>>>
>>>>
>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>>> checksum verify failed on 786939904 wanted 0xcdcdcdcd found 0xc375d6b6
>>>>> Csum didn't match
>>>>> WARNING: could not setup extent tree, skipping it
>>>>
>>>> This part is expected, it just tries to read extent tree which is
>>>> manually corrupted.
>>>>
>>>>> This is a dry-run, no files are going to be restored
>>>>> Done searching
>>>>
>>>> While this is not expected, as it doesn't even show any research
>>>> attempts, is the bytenr from the subtree of the subvolume 257?
>>>
>>>
>>> Interestingly, I tried --dfs instead of --bfs and there are a lot more
>>> entries, including filenames
>>>
>>
>> BTW, thanks to the output and stderr, it shows exactly what's going
>> wrong.
>>
>> The offending tree block, 920748032, is the first one.
>>
>> If using --dfs, it will go through each child until reaches the leaves,
>> before going to next tree block.
>>
>> And if the first child is corrupted, then it gives up immediately.
>>
>> That's why I'm explicitly specifying --bfs, which will skip the
>> corrupted child (and its children) and go next tree blocks directly,
>> thus have the best chance to recovery the contents.
>>
>> For the worst case, I guess you have to use "btrfs ins dump-tree" to
>> recovery your files, and then "btrfs-map-logical" to grab the data from
>> disk directly.
>>
>> Meanwhile I guess I should put some time to enhance btrfs-restore to
>> handle the corruption you're hitting, so that we can continue to next
>> good tree block, without being bothered by early corrupted tree blocks.
> 
> 
> Thanks again for looking into this!
> 
> Should I wait for a patch or is there something else I can do meanwhile?
> 
> 
Yes, you can use "btrfs ins dump-tree -b <bytenr> --follow --bfs" to
dump the tree and look for the desired files (i.e., manual salvage).
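A rough way to narrow the dump down to one file (just a sketch; the
filename and <ino> below are placeholders):

# btrfs ins dump-tree -b 781467648 --follow --bfs /dev/sdb3 > dump.txt 2> dump.err
# grep -B 3 "name: my_important_file" dump.txt    ### shows the item key above the name
# grep "(<ino> EXTENT_DATA" -A 4 dump.txt         ### <ino> = inode number from the key you found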

The filename is included in the INODE_REF key:

         item 55 key (258 INODE_REF 256) itemoff 11848 itemsize 15
                 index 3 namelen 5 name: reg_0

258 is the inode number; you can then look up its data using the (<ino>
EXTENT_DATA <offset>) key.

         item 56 key (258 EXTENT_DATA 0) itemoff 11795 itemsize 53
                 generation 19 type 1 (regular)
                 extent data disk byte 13697024 nr 4096
                 extent data offset 0 nr 4096 ram 4096
                 extent compression 0 (none)

The (258 EXTENT_DATA 0) key above means: inode number 258, file data at
file offset 0.

The logical bytenr on-disk is 13697024, size is 4096.

Then you can run "btrfs-map-logical -l 13697024 /dev/sdb3" to find where
that data lives on the real disk, and use dd to grab it and assemble your file.

It's time consuming and only feasible for a dozen files or so.
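For reference, a rough sketch of that manual step for a single 4K extent
(untested; PHYS stands for the physical offset btrfs-map-logical reports,
and the output filename is just a placeholder):

# btrfs-map-logical -l 13697024 -b 4096 /dev/sdb3
### note the physical offset and device it reports, then:
# dd if=/dev/sdb3 of=reg_0.part0 bs=1 skip=PHYS count=4096

I believe newer btrfs-map-logical can also write the extent out directly
with -o <file>, which skips the dd step.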

Thanks,
Qu




Thread overview: 30+ messages
2021-08-10  4:41 Trying to recover data from SSD Konstantin Svist
2021-08-10  5:24 ` Qu Wenruo
     [not found]   ` <CADQtc0=GDa-v_byewDmUHqr-TrX_S734ezwhLYL9OSkX-jcNOw@mail.gmail.com>
2021-08-10  6:56     ` Qu Wenruo
2021-08-10 16:12       ` Konstantin Svist
2021-08-10 22:24         ` Qu Wenruo
2021-08-10 23:21           ` Konstantin Svist
2021-08-10 23:54             ` Qu Wenruo
2021-08-11  5:22               ` Konstantin Svist
2021-08-11  5:24                 ` Qu Wenruo
2021-08-11  5:34                   ` Konstantin Svist
2021-08-11  5:49                     ` Qu Wenruo
2021-08-11 19:33                       ` Konstantin Svist
2021-08-11 21:51                         ` Qu Wenruo
2021-08-11 22:34                           ` Konstantin Svist
2021-08-12  1:18                             ` Qu Wenruo
2021-08-21  2:56                               ` Konstantin Svist
2021-08-28  5:57                                 ` Konstantin Svist
2021-08-28  6:16                                   ` Qu Wenruo
2021-08-28 23:16                                     ` Konstantin Svist
2021-08-28 23:30                                       ` Qu Wenruo
2021-08-29  6:34                                         ` Konstantin Svist
2021-08-29  7:19                                           ` Qu Wenruo
2021-08-29 20:02                                             ` Konstantin Svist
2021-08-30  0:22                                               ` Qu Wenruo
2021-08-30  3:48                                                 ` Konstantin Svist
2021-08-30  7:20                                                   ` Qu Wenruo
     [not found]                                                     ` <aa33b83f-b822-b1d8-9fe4-5cf4ab45c3e1@gmail.com>
2021-08-31 11:05                                                       ` Qu Wenruo
2021-09-01  1:38                                                         ` Konstantin Svist
2021-09-01  1:47                                                           ` Qu Wenruo
2021-08-11  0:30       ` Zygo Blaxell
