On 2020/9/30 9:44 AM, Eric Levy wrote:
> I recently upgraded a Linux system running on btrfs from a 5.3.x
> kernel to a 5.4.x version. The system failed to run for more than a
> few minutes after the upgrade, because the root mount degraded to a
> read-only state. I continued to use the system by booting using the
> 5.3.x kernel.

Dmesg please.

But judging from your btrfs check result, I think it was already
caused by bad extent generations written by older kernels.

> Some time later, I attempted to migrate the root subvolume using a
> send-receive command pairing, and noticed that the operation would
> invariably abort before completion. I also noticed that a full file
> walk of the mounted volume was impossible, because operations on some
> files generated errors at the file-system level.
>
> Upon investigating using a check command, I learned that the file
> system had errors.
>
> Examining the error report (not saved), I noticed that overall my
> situation had rather clear similarities to one described in an
> earlier discussion [1].
>
> Unfortunately, it appears that the differences in the kernels may
> have corrupted the file system.

Nope, your fs is still fine.

> Based on eagerness for a resolution, and on an optimistic comment
> toward the end of the discussion, I chose to run a check operation on
> the partition with the --repair flag included.

And obviously it won't help, since btrfs check doesn't have extent
item repair functionality yet.

There is an off-tree branch to do the repair:
https://github.com/adam900710/btrfs-progs/tree/extent_gen_repair

You could try that to see if it works.
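If you want to give it a try, the rough sequence would be something
like the usual btrfs-progs build, just from that branch (this is only
a sketch; the device path is taken from your check output below, run
it against the unmounted fs and keep a backup or image first):

  # build btrfs-progs from the extent_gen_repair branch
  git clone -b extent_gen_repair https://github.com/adam900710/btrfs-progs.git
  cd btrfs-progs
  ./autogen.sh && ./configure --disable-documentation && make

  # read-only check first with the freshly built binary,
  # then the repair itself
  ./btrfs check --readonly /dev/sda5
  ./btrfs check --repair /dev/sda5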
Thanks,
Qu

> Perhaps not surprisingly to some, the result of a read-only check
> operation after the attempted repair gave a much more discouraging
> report, suggesting that the damage to the file system was made worse
> not better by the operation. I realize that this possibility is
> explained in the documentation.
>
> At the moment, the full report appears as below.
>
> Presently, the file system mounts, but the ability to successfully
> read files degrades the longer the system is mounted and the more
> files are read during a continuous mount. Experiments involving
> unmounting and then mounting again give some indication that this
> degradation is not entirely permanent.
>
> What possibility is open to recover all or part of the file system?
> After such a rescue attempt, would I have any way to know what is
> lost versus saved? Might I expect corruption within the file contents
> that would not be detected by the rescue effort?
>
> I would be thankful for any guidance that might lead to restoring
> the data.
>
> [1] https://www.spinics.net/lists/linux-btrfs/msg96735.html
>
> ---
>
> Opening filesystem to check...
> Checking filesystem on /dev/sda5
> UUID: 9a4da0b6-7e39-4a5f-85eb-74acd11f5b94
> [1/7] checking root items
> [2/7] checking extents
> ERROR: invalid generation for extent 4064026624, have 94810718697136 expect (0, 33469925]
> ERROR: invalid generation for extent 16323178496, have 94811372174048 expect (0, 33469925]
> ERROR: invalid generation for extent 79980945408, have 94811372219744 expect (0, 33469925]
> ERROR: invalid generation for extent 318963990528, have 94810111593504 expect (0, 33469925]
> ERROR: invalid generation for extent 319650189312, have 14758526976 expect (0, 33469925]
> ERROR: invalid generation for extent 319677259776, have 414943019007 expect (0, 33469925]
> ERROR: errors found in extent allocation tree or chunk allocation
> [3/7] checking free space cache
> block group 71962722304 has wrong amount of free space, free space cache has 266420224 block group has 266354688
> ERROR: free space cache has more free space than block group item, this could leads to serious corruption, please contact btrfs developers
> failed to load free space cache for block group 71962722304
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> [6/7] checking root refs
> [7/7] checking quota groups
> found 399845548032 bytes used, error(s) found
> total csum bytes: 349626220
> total tree bytes: 5908873216
> total fs tree bytes: 4414324736
> total extent tree bytes: 879493120
> btree space waste bytes: 1122882578
> file data blocks allocated: 550505705472
> referenced 512080416768
>