On Fri, Nov 08, 2019 at 11:31:22PM +0100, Richard Weinberger wrote:
> ----- Original Message -----
> > From: "Zygo Blaxell"
> > To: "richard"
> > CC: "linux-btrfs"
> > Sent: Friday, November 8, 2019 23:25:57
> > Subject: Re: Decoding "unable to fixup (regular)" errors
>
> > On Fri, Nov 08, 2019 at 11:21:56PM +0100, Richard Weinberger wrote:
> >> ----- Original Message -----
> >> > btrfs found corrupted data on md1. You appear to be using btrfs
> >> > -dsingle on a single mdadm raid1 device, so no recovery is possible
> >> > ("unable to fixup").
> >> >
> >> >> The system has ECC memory with md1 being a RAID1 which passes all
> >> >> health checks.
> >> >
> >> > mdadm doesn't have any way to repair data corruption--it can find
> >> > differences, but it cannot identify which version of the data is
> >> > correct. If one of your drives is corrupting data without reporting
> >> > IO errors, mdadm will simply copy the corruption to the other drive.
> >> > If one drive is failing by intermittently injecting corrupted bits
> >> > into reads (e.g. because of a failure in the RAM on the drive
> >> > control board), this behavior may not show up in mdadm health
> >> > checks.
> >>
> >> Well, this is not cheap hardware...
> >> Possible, but not very likely IMHO
> >
> > Even the disks? We see RAM failures in disk drive embedded boards
> > from time to time.
>
> Yes. Enterprise-Storage RAID-Edition disks (sorry for the marketing
> buzzwords).

Can you share the model numbers and firmware revisions? There are a lot
of enterprise RE disks, and not all of them work. At least one vendor
ships the same firmware in its enterprise RE disks as in its consumer
drives, and that firmware is unusually bad. Beyond the identical
firmware revision string, the consumer and RE disks are
indistinguishable in our failure-mode testing: both have write-caching
bugs on power failure, and both silently corrupt a few blocks of data
once or twice per drive-year...

> Even if one disk is silently corrupting data, having the bad block
> copied to the second disk is even less likely to happen.
> And I run the RAID health check often.

Your setup cannot detect this kind of failure very well. We've had
problems with the mdadm health check failing to report errors even in
deliberate data-corruption tests. If a resync is triggered, all data on
one drive is blindly copied to the other. You also have nothing checking
for integrity failures between mdadm health checks (other than btrfs
csum failures once the corruption propagates up to the filesystem layer,
as shown in your log above).

We run a regression test where we corrupt every block on one disk of a
btrfs raid1 (even the superblocks) and verify that every error is
reported and repaired without interrupting applications running on the
filesystem. btrfs keeps a separate csum for each block, so it knows
which copy of the data is wrong, and it verifies the csum on every read,
so it detects and reports errors that occur between scrubs.

The most striking thing about your setup is that you have ECC RAM and a
regular scrub regime to detect errors...yet, because you are using mdadm
raid1 instead of btrfs raid1, you still have a large gap in
error-detection coverage and a mechanism that propagates errors across
what should be a fault-isolation boundary. If one of your disks goes
bad, not only will it break your filesystem, you also won't know which
disk to replace.

> 
> Thanks,
> //richard
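
P.S. To make the csum argument above concrete, here is a minimal toy
sketch in Python. It is purely illustrative -- not mdadm or btrfs code;
the SHA-256 hash, the 4 KiB block size, and the function names are
arbitrary choices for the example. The point it shows: a mirror without
checksums can only notice that its two copies disagree, while a mirror
that stores an independent per-block csum can tell which copy is bad and
rewrite it from the good one.

    # Toy illustration only (not how mdadm or btrfs are implemented):
    # why a per-block checksum lets a mirror repair silent corruption,
    # while a plain two-copy mirror can only detect a mismatch.
    import hashlib

    def plain_raid1_check(copy_a: bytes, copy_b: bytes) -> str:
        """Mirror without checksums: a mismatch is detectable, but
        there is no way to tell which copy is the good one."""
        if copy_a == copy_b:
            return "consistent (both copies could still be silently corrupt)"
        return "mismatch detected, but cannot decide which copy to trust"

    def checksummed_mirror_read(copy_a: bytes, copy_b: bytes,
                                stored_csum: bytes):
        """Mirror with an independently stored checksum (the btrfs
        idea): verify each copy against the csum, return the good one
        and note which copy can be repaired from it."""
        for name, data in (("copy_a", copy_a), ("copy_b", copy_b)):
            if hashlib.sha256(data).digest() == stored_csum:
                return data, f"good data in {name}; other copy repairable from it"
        raise IOError("both copies fail csum -- unrecoverable, report to caller")

    if __name__ == "__main__":
        good = b"A" * 4096                    # pretend 4 KiB data block
        bad = b"A" * 4095 + b"B"              # one silently flipped byte
        csum = hashlib.sha256(good).digest()  # csum stored at write time

        print(plain_raid1_check(good, bad))   # mismatch, but no verdict
        data, verdict = checksummed_mirror_read(bad, good, csum)
        print(verdict)                        # bad copy located, repairable

Because the csum is verified on the read path, not just during a scrub,
the corrupt copy is caught and reported whenever the block is fetched,
which is the gap the mdadm setup leaves open.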