12.02.2019 10:47, Qu Wenruo wrote:
>
> On 2019/2/12 3:43 PM, Remi Gauvin wrote:
>> On 2019-02-12 2:22 a.m., Qu Wenruo wrote:
>>
>>>> Does this mean you would rely on scrub/CSUM to repair the missing data
>>>> if the device is restored?
>>>
>>> Yes, just as btrfs usually does.
>>>
>>
>> I don't really understand the implications of the problems with mounting
>> the fs when single/dup data chunks are allocated on raid1,
>
> Consider this use case:
>
> One btrfs filesystem with 2 devices, RAID1 for data and metadata.
>
> One day devid 2 fails, and before the replacement arrives, the user can
> only use devid 1 alone. (Maybe that's the root fs.)
>
> Then the new disk arrives and the user replaces the missing device.
> The degraded RW mount has caused SINGLE or DUP chunks to be allocated
> on devid 1, and more importantly, some metadata/data already lives in
> those DUP/SINGLE chunks.
>
> Then some days later, devid 1 fails too. Now the user is unable to
> mount the fs degraded RW any more, since the SINGLE/DUP chunks are all
> on devid 1, and there is no way to replace devid 1.

But if I understand correctly what happens after your patch, the
replacement device still does not contain valid data until someone runs
a scrub. So in either case a manual step is required to restore full
redundancy. Or does "btrfs replace" rebuild the content on the
replacement device automatically?

> Thanks,
> Qu
>
>> but I would think
>> that would actually be a preferable situation than filling a drive
>> with 'data' we know is completely bogus... converting single/dup data
>> to raid1 should be much faster than tripping on CSUM errors, and less
>> prone to missed errors?
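For reference, the recovery sequence being discussed would look roughly like the sketch below (device names and mount point are placeholders; this assumes btrfs-progs and a 2-device RAID1 filesystem where devid 2 has failed). It is an illustration of the steps, not a definitive procedure:

```shell
# Mount the surviving device degraded RW; while running like this,
# new writes may land in SINGLE/DUP chunks on devid 1:
mount -o degraded /dev/sda /mnt

# Once the new disk arrives, replace the missing devid 2 with it:
btrfs replace start 2 /dev/sdb /mnt

# Convert any SINGLE/DUP chunks created while degraded back to RAID1.
# The 'soft' filter skips chunks already in the target profile, so only
# the degraded-era chunks are rewritten:
btrfs balance start -dconvert=raid1,soft -mconvert=raid1,soft /mnt
```

The balance convert is the manual step the thread is debating: without it, some chunks stay on devid 1 only, which is exactly the situation that later makes a second degraded mount impossible.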