On 2019/1/17 at 10:38 PM, Christian Schneider wrote:
> May I ask for a little technical details, about what happened/was wrong?

(This may be pretty similar to what I explained before.)

parent transid verify failed on 448888832 wanted 68773 found 68768
parent transid verify failed on 448888832 wanted 68773 found 68771

These two lines are the root cause.
Your tree block at 448888832 doesn't have the transid its parent
expects.

Normally this means one of the following:

a) One tree block has overwritten an existing tree block.
   This means btrfs metadata CoW is completely screwed up. Possible
   causes are a bad free space cache/tree or a corrupted extent tree.
   (Thus a metadata backup profile like DUP/RAID1/RAID10/RAID5/6
   provides no help at all.)

b) The tree block at 448888832 is heavily damaged.
   Normally this means the generation would be garbage, which is not
   the case for you.

So a) should be your case.

But unlike the normal a) case, your two metadata copies point to two
different tree blocks, as their generations are completely different.
So it looks like a power loss happened after one metadata copy had
been written.

And since that power loss happened, one of the 3 generations should be
the problem. My guess is that the last transaction, 68773, which was
writing the parent of 448888832, is causing the problem.

But that doesn't explain everything, especially why one copy differs
from the other.

So what I'm saying is: your fs may already be corrupted, but as long
as no power loss happens, the seed of destruction doesn't grow.

When a power loss does happen, the already screwed-up extent tree/free
space cache/free space tree can destroy the whole fs, as btrfs is way
too dependent on metadata CoW to protect itself. If the basis of
metadata CoW is screwed up, there is nothing you can do but salvage
your data.

Thanks,
Qu

> I don't know really anything about internal btrfs stuff, but would like
> to gain a little insight. Also, if there is an explanation online that
> you can point me to, that would be nice.
> BR, Christian
>
> On 17.01.19 at 15:12, Qu Wenruo wrote:
>>
>>
>> On 2019/1/17 at 9:54 PM, Christian Schneider wrote:
>>>>>
>>>>> Do you know which kernel is needed as a base for the patch? Can I
>>>>> apply it to 4.19, or do I need something more recent? If you don't
>>>>> know, I can just try it out.
>>>>
>>>> My base is v5.0-rc1.
>>>>
>>>> Although I think there shouldn't be too many conflicts for older
>>>> kernels.
>>>>
>>> I could apply the patch on 4.19, but compilation failed. So I went
>>> straight to master, where it worked, and I could even mount the fs
>>> now.
>>>
>>> Your patch also has a positive impact on free space:
>>>
>>>   df -h /home
>>> Filesystem      Size  Used Avail Use% Mounted on
>>> /dev/md42       7.3T  1.9T  1.8P   1% /home
>>>
>>> 1.8PB of available space should be enough for the next few years :D
>>>
>>> Thank you very much so far!!!
>>>
>>> So, for further steps: as far as I understood, there is no
>>> possibility to repair the fs?
>>
>> Unfortunately, no possibility.
>>
>> The corruption of the extent tree is pretty nasty.
>> Your metadata CoW is completely broken.
>> It really doesn't make much sense to repair it, and I don't really
>> believe the repaired result would be any good.
>>
>>> I just get the data I can and create it anew?
>>
>> Yep.
>>
>> And just a general tip: after any unexpected power loss, do a btrfs
>> check --readonly before doing an RW mount.
>>
>> It would help us detect and locate the possible cause of any
>> corruption before it causes more damage.
>>
>> Thanks,
>> Qu
>>
>>>
>>> BR, Christian
>>
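
For reference, a minimal sketch of the check-before-RW-mount routine
suggested above, using the device and mount point from the df output
(/dev/md42 on /home). The read-only mount and btrfs restore steps are
only general salvage suggestions, not something specific to this
report, and the target directory /mnt/salvage is just an example; a
filesystem as damaged as this one may still refuse to mount at all:

  # After any unexpected power loss, check the metadata first.
  # --readonly makes no changes to the filesystem.
  btrfs check --readonly /dev/md42

  # Only if the check comes back clean, mount read-write as usual.
  mount /dev/md42 /home

  # If the check reports transid/extent tree errors, try a read-only
  # mount and copy the data off before recreating the filesystem.
  mount -o ro /dev/md42 /home

  # If even a read-only mount fails, btrfs restore can pull files
  # directly from the unmounted device into another location.
  btrfs restore /dev/md42 /mnt/salvage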