On 2018/12/4 11:32 AM, Mike Javorski wrote:
> Need a bit of advice here ladies / gents. I am running into an issue
> which Qu Wenruo seems to have posted a patch for several weeks ago
> (see https://patchwork.kernel.org/patch/10694997/).
>
> Here is the relevant dmesg output which led me to Qu's patch.
> ----
> [ 10.032475] BTRFS critical (device sdb): corrupt leaf: root=2
> block=24655027060736 slot=20 bg_start=13188988928 bg_len=10804527104,
> invalid block group size, have 10804527104 expect (0, 10737418240]
> [ 10.032493] BTRFS error (device sdb): failed to read block groups: -5
> [ 10.053365] BTRFS error (device sdb): open_ctree failed
> ----

Exactly the same symptom.

>
> This server has a 16 disk btrfs filesystem (RAID6) which I boot
> periodically to btrfs-send snapshots to. This machine is running
> ArchLinux and I had just updated to their latest 4.19.4 kernel
> package (from 4.18.10 which was working fine). I've tried updating to
> the 4.19.6 kernel that is in testing, but that doesn't seem to resolve
> the issue. From what I can see on kernel.org, the patch above is not
> pushed to stable or to Linus' tree.
>
> At this point the question is what to do. Is my FS toast?

If there is no other problem at all, your fs is just fine.
It's my original patch being too sensitive (my excuse is that I didn't
check the chunk allocator behaviour carefully enough).

But since you have the down time, it's never a bad idea to run
btrfs check --readonly to see if your fs is really OK (example
invocation below).

> Could I
> revert to the 4.18.10 kernel and boot safely?

If your btrfs check --readonly doesn't report any problem, then you're
completely fine to do so.

Although I would still recommend going with RAID10 rather than RAID5/6.

Thanks,
Qu

> I don't know if the 4.19
> boot process may have flipped some bits which would make reverting
> problematic.
>
> Thanks much,
>
> - mike
>
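
For reference, an example of such a read-only check, run with the
filesystem unmounted (/dev/sdb is just the device named in the dmesg
output above; any member device of the array should work the same):

    # read-only check, does not modify the filesystem
    btrfs check --readonly /dev/sdb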