From: "Daniel Taylor"
Subject: RE: Btrfs: broken file system design (was Unbound(?) internal fragmentation in Btrfs)
Date: Wed, 23 Jun 2010 20:43:01 -0700
Message-ID: <469D2D911E4BF043BFC8AD32E8E30F5B24AEBA@wdscexbe07.sc.wdc.com>
References: <4C07C321.8010000@redhat.com> <4C1B7560.1000806@gmail.com> <4C1BA3E5.7020400@gmail.com> <20100623234031.GF7058@shareable.org>
In-Reply-To: <20100623234031.GF7058@shareable.org>
Cc: "Daniel J Blueman", "Mat", "LKML", "Chris Mason", "Ric Wheeler", "Andrew Morton", "Linus Torvalds", "The development of BTRFS"

Just an FYI reminder.

The original test (2K files) is utterly pathological for disk drives
with 4K physical sectors, such as those now shipping from WD, Seagate,
and others.  Some of the SSDs have larger (16K) or smaller (2K) blocks.
There is also the issue of btrfs over RAID (which I know is not
entirely sensible, but which will happen).

The absolute minimum allocation size for data should be the same as,
and aligned with, the underlying disk block size.  If that results in
underutilization, I think that's a good thing for performance, compared
to read-modify-write cycles to update partial disk blocks.
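To make the alignment point concrete, here is a minimal sketch (not
btrfs code; it assumes a Linux system where the BLKSSZGET and
BLKPBSZGET block-device ioctls from linux/fs.h are available, which
they have been since 2.6.32, and the device path and round_up helper
are purely illustrative) of how an allocator could discover the
physical sector size and round a minimum allocation up to it:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>	/* BLKSSZGET, BLKPBSZGET */

/* Round size up to the next multiple of align; align must be a
 * power of two, which sector sizes always are. */
static unsigned long round_up(unsigned long size, unsigned long align)
{
	return (size + align - 1) & ~(align - 1);
}

int main(int argc, char **argv)
{
	const char *dev = argc > 1 ? argv[1] : "/dev/sda"; /* illustrative */
	int logical = 0;
	unsigned int physical = 0;
	int fd = open(dev, O_RDONLY);

	if (fd < 0) {
		perror(dev);
		return 1;
	}
	/* Logical sector size: what the interface addresses, often 512. */
	/* Physical sector size: the drive's real write unit, e.g. 4096. */
	if (ioctl(fd, BLKSSZGET, &logical) != 0 ||
	    ioctl(fd, BLKPBSZGET, &physical) != 0) {
		perror("ioctl");
		close(fd);
		return 1;
	}
	close(fd);

	printf("logical %d, physical %u\n", logical, physical);
	/* A 2K allocation rounded up to the physical sector: */
	printf("2048 -> %lu\n", round_up(2048, physical));
	return 0;
}

On a 4K-sector drive this rounds a 2K allocation up to 4096 bytes:
2K of slack per file, in exchange for every write being a whole
physical sector instead of a read-modify-write.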