On 2019/12/2 11:22 AM, Zygo Blaxell wrote:
> On Tue, Nov 19, 2019 at 07:32:26AM +0800, Qu Wenruo wrote:
>>
>>
>> On 2019/11/19 4:18 AM, David Sterba wrote:
>>> On Thu, Nov 07, 2019 at 02:27:07PM +0800, Qu Wenruo wrote:
>>>> This patchset will make btrfs degraded mount more intelligent and
>>>> provide a more consistent profile-keeping function.
>>>>
>>>> One of the most problematic aspects of degraded mount is that btrfs
>>>> may create unwanted profiles.
>>>>
>>>>  # mkfs.btrfs -f /dev/test/scratch[12] -m raid1 -d raid1
>>>>  # wipefs -fa /dev/test/scratch2
>>>>  # mount -o degraded /dev/test/scratch1 /mnt/btrfs
>>>>  # fallocate -l 1G /mnt/btrfs/foobar
>>>>  # btrfs ins dump-tree -t chunk /dev/test/scratch1
>>>>         item 7 key (FIRST_CHUNK_TREE CHUNK_ITEM 1674575872) itemoff 15511 itemsize 80
>>>>                 length 536870912 owner 2 stripe_len 65536 type DATA
>>>> New data chunks will fall back to SINGLE or DUP.
>>>>
>>>>
>>>> The cause is pretty simple: when mounted degraded, missing devices
>>>> can't be used for chunk allocation.
>>>> Thus btrfs has to fall back to the SINGLE profile.
>>>>
>>>> This patchset makes btrfs consider missing devices as a last resort
>>>> if the current rw devices can't fulfil the profile request.
>>>>
>>>> This should provide a good balance between considering all missing
>>>> devices as RW and completely ruling out missing devices (the current
>>>> mainline behavior).
>>>
>>> Thanks. This is going to change the behaviour with a missing device, so
>>> the question is if we should make this configurable first and then
>>> switch the default.
>>
>> Making it configurable and then switching makes sense for most cases,
>> but for this degraded chunk case, IIRC the new behavior is superior in
>> all cases.
>>
>> For a 2-device RAID1 with one missing device (the main concern), the
>> old behavior will create SINGLE/DUP chunks, which have no tolerance
>> for extra missing devices.
>>
>> The new behavior will create degraded RAID1, which still lacks
>> tolerance for extra missing devices.
>>
>> The difference is, for degraded chunks, if we get the device back and
>> do a proper scrub, then we're completely back to proper RAID1.
>> No need to do an extra balance/convert, only scrub is needed.
>
> I think you meant to say "replace" instead of "scrub" above.

"Scrub" is for the missing-then-back case.

At the time of writing, I didn't even take the replace case into
consideration...

>
>> So the new behavior is kind of a superset of the old behavior; using
>> the new behavior by default should not cause extra concern.
>
> It sounds OK to me, provided that the missing device is going away
> permanently, and a new device replaces it.
>
> If the missing device comes back, we end up relying on scrub and 32-bit
> CRCs to figure out which disk has correct data, and it will be wrong
> 1/2^32 of the time.  For nodatasum files there are no CRCs, so the data
> will be wrong much more often.  This patch doesn't change that, but
> maybe another patch should.

Yep, the patchset won't change that.

But this also reminds me that so far we are all talking about the
"degraded" mount option.

In most cases, users only mount with "degraded" when they completely
understand that a device is missing; it is not used as a daily option.

So that shouldn't be a big problem so far.
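
Just to sketch the two recovery paths we're discussing (a rough,
illustrative sequence only; it reuses the cover letter's
/dev/test/scratch[12] devices, and /dev/test/scratch3 is a hypothetical
replacement device):

  # mount -o degraded /dev/test/scratch1 /mnt/btrfs
  (new writes now land in degraded RAID1 chunks)
  # umount /mnt/btrfs

  If /dev/test/scratch2 comes back, only a scrub is needed to rewrite
  the copies that should live on it:
  # mount /dev/test/scratch1 /mnt/btrfs
  # btrfs scrub start -B /mnt/btrfs

  If the device is gone for good (Zygo's case), it gets replaced
  instead, using its devid from 'btrfs filesystem show':
  # mount -o degraded /dev/test/scratch1 /mnt/btrfs
  # btrfs replace start -B <devid of missing device> /dev/test/scratch3 /mnt/btrfs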

Thanks,
Qu

>
>>> How does this work with scrub? E.g. if there are 2 devices in RAID1,
>>> one goes missing and then scrub is started. It makes no sense to try
>>> to repair the missing blocks, but given the logic in the patches all
>>> the data will be rewritten, right?
>>
>> Scrub is not changed at all.
>>
>> A missing device will not go through scrub at all; as scrub is
>> per-device based, the missing device is ruled out at the very
>> beginning of scrub.
>>
>> Thanks,
>> Qu
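
To put that into a concrete (again, only illustrative) command
sequence with the cover letter's devices: with one device missing,
scrub is started as usual and simply never touches the missing device,
since it is ruled out before any data is read:

  # mount -o degraded /dev/test/scratch1 /mnt/btrfs
  # btrfs scrub start -Bd /mnt/btrfs

The degraded RAID1 chunks only get their second copy written once the
missing device is back (and scrubbed) or replaced, as discussed above.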