On 2019/10/23 6:56 AM, Christian Pernegger wrote:
> [Please CC me, I'm not on the list.]
>
> On Mon, Oct 21, 2019 at 15:34, Qu Wenruo wrote:
>> [...] just fstrim wiped some old tree blocks. But maybe it's some
>> unfortunate race, that fstrim trimmed some tree blocks still in use.
>
> Forgive me for asking, but assuming that's what happened, why are the
> backup blocks "not in use" from fstrim's perspective in the first
> place? I'd consider backup (meta)data to be valuable payload data,
> something to be stored extra carefully. No use making them if they're
> no good when you need them, after all. In other words, does fstrim by
> default trim btrfs metadata (in which case fstrim's broken) or does
> btrfs in effect store backup data in "unused" space (in which case
> btrfs is broken)?

Even if the backup roots themselves are not trimmed, they are of no
use: a backup root is just a pointer to older tree blocks, and those
older tree blocks are still trimmed, since they are no longer in use.

Btrfs does have protection against trimming tree blocks that are still
in use, but I don't know why it didn't work here.

BTW, to make it clear: here a "used" block group only means it has
some space in use, not that all of its space is in use. Trimming a
"used" block group only trims the unused space inside it (just like
any other fs).

And to your last question: yes, the old tree blocks that the backup
roots point to sit in "unused" space, thus they get trimmed. But the
timing is: only after the current transaction is fully committed, to
ensure a crash won't cause any problem. (All in theory, though.)

>
>> [...] One good compromise is, only trim unallocated space.
>
> It had never occurred to me that anything would purposely try to trim
> allocated space ...
>
>> As your corruption is only in extent tree. With my patchset, you
>> should be able to mount it, so it's not that screwed up.
>
> To be clear, we're talking data recovery, not (progress towards) fs
> repair, even if I manage to boot your rescue patchset?

Then you should have all of your fs accessible, although only
read-only. (Isn't that obvious, since the skipbg mount option is under
the rescue= group?)

Btrfs-progs won't really help here, as, just like the kernel, it needs
to read the extent tree to proceed. But in fact, the extent tree is
only needed for write operations. That's exactly what my patchset
does: skip the extent tree completely for the rescue=skipbg mount
option.
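With the patchset applied, something like this should work (a sketch
only; I'm assuming the option is spelled exactly "rescue=skipbg" as
above, and /dev/sdX / the paths are placeholders for your setup):

  $ mount -o ro,rescue=skipbg /dev/sdX /mnt
  $ cp -a /mnt/. /path/to/backup/   # copy everything out read-only

Since the extent tree is never read, only a read-only mount is
possible; any write path would need it.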
> A few more random observations from playing with the drive image:
>
> $ btrfs check --init-extent-tree patient
> Opening filesystem to check...
> Checking filesystem on patient
> UUID: c2bd83d6-2261-47bb-8d18-5aba949651d7
> repair mode will force to clear out log tree, are you sure? [y/N]: y
> ERROR: Corrupted fs, no valid METADATA block group found
> ERROR: failed to zero log tree: -117
> ERROR: attempt to start transaction over already running one
> # rollback
>
> $ btrfs rescue zero-log patient
> checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
> checksum verify failed on 284041084928 found E4E3BDB6 wanted 00000000
> bad tree block 284041084928, bytenr mismatch, want=284041084928, have=0
> ERROR: could not open ctree
> # rollback
>
> # hm, super 0 has log_root 284056535040, super 1 and 2 have log_root 0 ...
> $ btrfs check -s1 --init-extent-tree patient
> [...]
> ERROR: errors found in fs roots
> No device size related problem found
> cache and super generation don't match, space cache will be invalidated
> found 431478808576 bytes used, error(s) found
> total csum bytes: 417926772
> total tree bytes: 2203549696
> total fs tree bytes: 1754415104
> total extent tree bytes: 49152
> btree space waste bytes: 382829965
> file data blocks allocated: 1591388033024
> referenced 539237134336
>
> That ran a good while, generating a couple of hundred MB of output
> (available on request, of course). In any case, it didn't help.
>
> $ ~/local/bin/btrfs check -s1 --repair patient
> using SB copy 1, bytenr 67108864
> enabling repair mode
> Opening filesystem to check...
> checksum verify failed on 427311104 found 000000C8 wanted FFFFFF99
> checksum verify failed on 427311104 found 000000C8 wanted FFFFFF99
> Csum didn't match
> ERROR: cannot open file system
>
> I don't suppose the roots found by btrfs-find-root and/or subvolumes
> identified by btrfs restore -l would be any help?

No help at all, especially for a trimmed fs.

> It's not like the
> real fs root contained anything, just @ [/], @home [/home], and the
> Timeshift subvolumes. If btrfs restore -D is to be believed, the
> casualties under @home, for example, are inconsequential, caches and
> the like, stuff that was likely open for writing at the time.

btrfs restore is the skip_bg equivalent in btrfs-progs. It doesn't
read the extent tree at all; it uses only the fs trees to read the
data. The only disadvantage is that you can't access the fs like a
regular fs, only through btrfs restore. (See the usage sketch at the
bottom of this mail.)

Thanks,
Qu

> I don't know, it just seems strange that with all the (meta)data
> that's obviously still there, it shouldn't be possible to restore the
> fs to some sort of consistent state.
>
> Good night,
> Christian
>
>> But extent tree update is really somewhat trickier than I thought.
>>
>> Thanks,
>> Qu
>>
>>> Will keep you posted.
>>>
>>> Cheers,
>>> C.
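For reference, the btrfs restore sketch mentioned above (device and
destination are placeholders; -l, -D and -v are the btrfs-progs flags
already used in this thread):

  $ btrfs restore -l /dev/sdX                 # list tree roots
  $ btrfs restore -D -v /dev/sdX /mnt/rescue  # dry run: show what would be restored
  $ btrfs restore -v /dev/sdX /mnt/rescue     # actually copy the data out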