On 2019/12/11 5:25 AM, Christian Wimmer wrote:
> Hi Qu and all others,
>
> thanks a lot for your help and patience!
>
> Unfortunately I could not get any file out of the archive yet, but I can survive.
>
> I would like just one more answer from you. Do you think this would not have happened with the newest version of btrfs?

From the result, it looks like either btrfs is doing a wrong trim, or the
storage stack below (including Parallels, the Apple filesystem, and the
Apple drivers) is blowing up the data.

In the latter case, it doesn't matter which kernel version you're using;
if it happens, it will take your data along with it.

But in the former case, newer kernels have improved trim checks to
prevent bad trims, so at least a newer kernel is a little safer.

> Should I update to the newest version?

Not always the newest. Although we're trying our best to prevent bugs,
sometimes bugs still sneak into the latest kernel.

>
> I have many partitions with btrfs and I like them a lot. It is a very nice file system indeed, but am I safe with the version that I have (4.19.1)?

I can't say it's unsafe, since SUSE has all the necessary backports and
quite a few customers are using (testing) it.

As long as you're using the latest SUSE updates, it should be safe, and
all found bugs should have fixes backported.

Thanks,
Qu

>
> BTW, you are welcome to suggest any command or try anything with my broken file system, which I still have backed up, in case you want to experiment with it.
>
> Thanks
>
> Chris
>
>
>> On 7. Dec 2019, at 22:21, Qu WenRuo wrote:
>>
>>
>>
>> On 2019/12/8 12:44 AM, Christian Wimmer wrote:
>>> Hi Qu,
>>>
>>> I was reading about chunk-recover. Do you think this could be worth a try?
>>
>> Nope, your chunk tree is good, so that makes no sense.
>>
>>>
>>> Is there any other command that can search for files that make sense to recover?
>>
>> The only sane behavior here is to search the whole disk, grab
>> anything that looks like a tree block, and then extract data from it.
>>
>> This is not something supported by btrfs-progs yet, so not much
>> more can be done here.
>>
>> Thanks,
>> Qu
>>
>>>
>>> Regards,
>>>
>>> Chris
>>>
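
For reference, the whole-disk tree-block search Qu describes above can be
sketched roughly as below. This is NOT a btrfs-progs tool, just a minimal
illustration: it assumes the default 16 KiB nodesize and takes the
filesystem's fsid on the command line (both can be read from
`btrfs inspect-internal dump-super <dev>`). Matching the fsid field of
btrfs_header is only a heuristic, so it will also report stale blocks from
old generations, and actually extracting file data from the found leaves
would still require parsing the items inside them.

    #!/usr/bin/env python3
    # Hypothetical sketch: scan a raw btrfs device/image for candidate
    # tree blocks by matching the fsid stored in every btrfs_header.
    # Usage: scan_tree_blocks.py <image> <fsid-hex-with-or-without-dashes>
    import struct
    import sys

    NODESIZE = 16384  # assumption: default nodesize; verify via dump-super

    def scan(path, fsid):
        with open(path, 'rb') as dev:
            offset = 0
            while True:
                block = dev.read(NODESIZE)
                if len(block) < NODESIZE:
                    break
                # btrfs_header layout: csum[32], fsid[16] at offset 32,
                # bytenr (le64) at 48, flags (le64) at 56,
                # chunk_tree_uuid[16] at 64, generation (le64) at 80,
                # owner (le64) at 88, nritems (le32) at 96, level (u8) at 100
                if block[32:48] == fsid:
                    bytenr, = struct.unpack_from('<Q', block, 48)
                    generation, owner = struct.unpack_from('<QQ', block, 80)
                    nritems, level = struct.unpack_from('<IB', block, 96)
                    print(f'phys {offset:#x}: bytenr {bytenr} '
                          f'gen {generation} owner {owner} '
                          f'nritems {nritems} level {level}')
                offset += NODESIZE

    if __name__ == '__main__':
        scan(sys.argv[1], bytes.fromhex(sys.argv[2].replace('-', '')))

Note that the physical offset printed is not the same as the logical
bytenr stored in the header; translating between the two requires the
chunk mapping, which is why candidates are reported by physical position.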