On Sun, 2016-01-03 at 09:37 +0800, Qu Wenruo wrote:
> And since you are making the stripe size configurable, then user is
> responsible for any too large or too small stripe size setting.

That raises the question of which RAID chunk sizes the kernel, and
respectively the userland tools, should allow for btrfs... I'd guess
only powers of 2, with some minimum and some maximum.

Are there any concerns/constraints with too small/too big chunks when
these interact with the lower block layers? (I'd guess not.)
Could one use device topology information from the lower block layers
and issue corresponding warnings/suggestions?

> Your only concern would be the default value, but IMHO current 64K
> stripe size is good enough as a default value.

IIRC mdadm's default was 512K... Also, in my benchmarks the
observation was rather that for most IO patterns, higher chunk sizes
perform better (at least with MD RAID and especially HW RAID). For
all our HW RAIDs at the Tier-2 I use the respective maximum nowadays
(1MiB on the newer controllers, 512KiB on some older ones, 64KiB on
the ones which are still steam powered). Only on some special nodes
that run a DB do I use something lower (IIRC 512KiB or 256KiB).

Best would probably be that once there actually is variable chunk
size support in btrfs, the different RAID levels have stabilised and
been optimised, and RAID1 has been renamed to something better
(SCNR, ;-P)... so probably in some 5-10 years... one runs some more
extensive benchmarks and then picks a reasonable default :-)

Cheers,
Chris.
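
P.S.: Just to illustrate what I mean above by "powers of 2, some
minimum, some maximum", a minimal userspace sketch of such a
validation. The bounds are made-up placeholders for the example, not
anything btrfs actually defines or enforces:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical bounds, purely for illustration. */
    #define CHUNK_SIZE_MIN  (4ULL * 1024)          /* 4 KiB  */
    #define CHUNK_SIZE_MAX  (16ULL * 1024 * 1024)  /* 16 MiB */

    /* Accept a chunk size iff it is a power of two within bounds. */
    static bool chunk_size_valid(uint64_t size)
    {
            if (size < CHUNK_SIZE_MIN || size > CHUNK_SIZE_MAX)
                    return false;
            return (size & (size - 1)) == 0;  /* power-of-two test */
    }

    int main(void)
    {
            uint64_t candidates[] = { 512, 65536, 524288, 333333 };
            size_t i;

            for (i = 0; i < sizeof(candidates) / sizeof(*candidates); i++)
                    printf("%llu: %s\n",
                           (unsigned long long)candidates[i],
                           chunk_size_valid(candidates[i]) ? "ok"
                                                           : "rejected");
            return 0;
    }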
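
P.P.S.: As for using topology info from the lower layers: the block
layer already exports this via sysfs, e.g.
/sys/block/<dev>/queue/optimal_io_size (and minimum_io_size), so the
userland tools could at least read that and warn or suggest a chunk
size. A rough sketch of reading it (error handling minimal, device
name hard-coded just for the example; a value of 0 means the device
reports no optimal size):

    #include <stdio.h>

    /* Read the block layer's reported optimal IO size (in bytes)
     * for a device; returns 0 if absent or unreported. */
    static unsigned long optimal_io_size(const char *dev)
    {
            char path[256];
            unsigned long val = 0;
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/block/%s/queue/optimal_io_size", dev);
            f = fopen(path, "r");
            if (!f)
                    return 0;
            if (fscanf(f, "%lu", &val) != 1)
                    val = 0;
            fclose(f);
            return val;
    }

    int main(void)
    {
            unsigned long opt = optimal_io_size("sda");  /* example */

            if (opt)
                    printf("optimal_io_size: %lu bytes\n", opt);
            else
                    printf("no optimal IO size reported\n");
            return 0;
    }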