From: Christian Stroetmann
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent
 performance? (was Tux3 Report: How fast can we fsync?)
Date: Thu, 30 Apr 2015 20:22:41 +0200
To: Daniel Phillips
Cc: Howard Chu, Mike Galbraith, Dave Chinner, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, tux3@tux3.org, Theodore Ts'o,
 OGAWA Hirofumi

On the 30th of April 2015 17:14, Daniel Phillips wrote:

Hello hardcore coders

> On 04/30/2015 07:28 AM, Howard Chu wrote:
>> Daniel Phillips wrote:
>>>
>>> On 04/30/2015 06:48 AM, Mike Galbraith wrote:
>>>> On Thu, 2015-04-30 at 05:58 -0700, Daniel Phillips wrote:
>>>>> On Thursday, April 30, 2015 5:07:21 AM PDT, Mike Galbraith wrote:
>>>>>> On Thu, 2015-04-30 at 04:14 -0700, Daniel Phillips wrote:
>>>>>>
>>>>>>> Lovely sounding argument, but it is wrong because Tux3 still beats
>>>>>>> XFS even with seek time factored out of the equation.
>>>>>> Hm. Do you have big-storage comparison numbers to back that? I'm no
>>>>>> storage guy (waiting for holographic crystal arrays to obsolete all
>>>>>> this crap;), but Dave's big-storage guy words made sense to me.
>>>>> This has nothing to do with big storage. The proposition was that seek
>>>>> time is the reason for Tux3's fsync performance. That claim was easily
>>>>> falsified by removing the seek time.
>>>>>
>>>>> Dave's big storage words are there to draw attention away from the
>>>>> fact that XFS ran the Git tests four times slower than Tux3 and three
>>>>> times slower than Ext4. Whatever the big storage excuse is for that,
>>>>> the fact is, XFS obviously sucks at little storage.
>>>> If you allocate spanning the disk from start of life, you're going to
>>>> eat seeks that others don't until later. That seemed rather obvious
>>>> and straightforward.
>>> It is a logical fallacy. It mixes a grain of truth (spreading all over
>>> the disk causes extra seeks) with an obvious falsehood (that it is the
>>> only possible way to avoid long-term fragmentation).
>> You're reading into it what isn't there. Spreading over the disk isn't
>> (just) about avoiding fragmentation - it's about delivering consistent
>> and predictable latency. It is undeniable that if you start by only
>> allocating from the fastest portion of the platter, you are going to
>> see performance slow down over time. If you start by spreading
>> allocations across the entire platter, you make the worst-case and
>> average-case latency equal, which is exactly what a lot of folks are
>> looking for.
> Another fallacy: intentionally running slower than necessary is not
> necessarily the only way to deliver consistent and predictable latency.
> Not only that, but intentionally running slower than necessary does not
> necessarily guarantee performing better than some alternate strategy
> later.
>
> Anyway, let's not be silly. Everybody in the room who wants Git to run
> 4 times slower with no guarantee of any benefit in the future, please
> raise your hand.
>
>>>> He flat out stated that XFS has passable performance on a single bit
>>>> of rust, and openly explained why. I see no misdirection, only some
>>>> evidence of bad blood between you two.
>>> Raising the spectre of theoretical fragmentation issues when we have
>>> not even begun that work is a straw man and intellectually dishonest.
>>> You have to wonder why he does it. It is destructive to our community
>>> image and harmful to progress.
>> It is a fact of life that when you change one aspect of an intimately
>> interconnected system, something else will change as well. You have
>> naive/nonexistent free space management now; when you design something
>> workable there, it is going to impact everything else you've already
>> done. It's an easy bet that the impact will be negative; the only
>> question is to what degree.
> You might lose that bet. For example, suppose we do strictly linear
> allocation each delta, and just leave nice big gaps between the deltas
> for future expansion. Clearly, we run at similar or identical speed to
> the current naive strategy until we must start filling in the gaps, and
> at that point our layout is not any worse than XFS, which started bad
> and stayed that way.
>
> Now here is where you lose the bet: we already know that linear
> allocation with wrap ends horribly, right? However, as above, we start
> linear, without compromise, but because of the gaps we leave, we are
> able to switch to a slower strategy, though not nearly as slow as the
> ugly tangle we get with simple wrap. So the impact over the lifetime of
> the filesystem is positive, not negative, and what seemed self-evident
> to you turns out to be wrong.
>
> In short, we would rather deliver as much performance as possible, all
> the time. I really don't need to think about it very hard to know that
> is what I want, and what most users want.
>
> I will make you a bet in return: when we get to doing that part
> properly, the quality of the work will be just as high as everything
> else we have completed so far. Why would we suddenly get lazy?
>
> Regards,
>
> Daniel
> --

How? Maybe this will be explained and discussed in a new thread about
allocation.

Thanks
Best regards
Have fun

C.S.
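
For concreteness, here is a toy C model of the placement tradeoff Howard
describes above. It is purely illustrative: the policy names, region
count, and disk size are invented, and it has nothing to do with the
actual XFS allocator. "pack" always hands out the lowest free block
(fastest tracks first); "spread" round-robins across fixed regions. The
program prints the mean block address allocated early and late in the
filesystem's life: packing is cheap early and expensive late, while
spreading stays near mid-disk throughout, which is the equal worst-case
and average-case behavior Howard points to.

#include <stdio.h>

#define DISK_BLOCKS 1000000UL   /* toy disk, in blocks */
#define NGROUPS 8UL             /* regions for the "spread" policy */

/* Block address handed out for the nth allocation, lowest-first. */
static unsigned long alloc_pack(unsigned long n)
{
        return n;
}

/* Rotate through NGROUPS regions, allocating linearly inside each. */
static unsigned long alloc_spread(unsigned long n)
{
        unsigned long group = n % NGROUPS;
        unsigned long off = n / NGROUPS;

        return group * (DISK_BLOCKS / NGROUPS) + off;
}

static void report(const char *name, unsigned long (*alloc)(unsigned long))
{
        unsigned long tenth = DISK_BLOCKS / 10;
        unsigned long long early = 0, late = 0;
        unsigned long n;

        for (n = 0; n < tenth; n++)
                early += alloc(n);
        for (n = DISK_BLOCKS - tenth; n < DISK_BLOCKS; n++)
                late += alloc(n);

        printf("%-6s mean block, first 10%%: %6llu  last 10%%: %6llu\n",
               name, early / tenth, late / tenth);
}

int main(void)
{
        report("pack", alloc_pack);
        report("spread", alloc_spread);
        return 0;
}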
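
And a minimal sketch of the "linear allocation with gaps" idea Daniel
proposes above. Everything here is assumed for illustration (the names
delta_alloc and GAP_RATIO, the gap sizing, the first-fit fallback); it
is not Tux3 code, just the shape of the argument: each delta is laid
out contiguously with a gap reserved behind it, so early-life
allocation is purely linear, and when the linear region is exhausted
the allocator falls back to filling the reserved gaps instead of
wrapping.

#include <stdio.h>

#define DISK_BLOCKS 1000000UL
#define GAP_RATIO 2             /* reserve 1 gap block per 2 allocated */
#define MAX_GAPS 4096

struct gap { unsigned long start, len; };

static struct gap gaps[MAX_GAPS];
static unsigned int ngaps;
static unsigned long cursor;    /* next block in the linear region */

/* Allocate len contiguous blocks for one delta; -1 when full. */
static long delta_alloc(unsigned long len)
{
        unsigned long start = cursor;
        unsigned long gap = len / GAP_RATIO;
        unsigned int i;

        if (start + len + gap <= DISK_BLOCKS) {
                /* Fast path: strictly linear, gap reserved behind us. */
                cursor = start + len + gap;
                if (gap && ngaps < MAX_GAPS) {
                        gaps[ngaps].start = start + len;
                        gaps[ngaps].len = gap;
                        ngaps++;
                }
                return start;
        }

        /*
         * Slow path: linear space is gone, so fill the gaps first-fit.
         * Layout degrades gradually instead of tangling with a wrap.
         */
        for (i = 0; i < ngaps; i++) {
                if (gaps[i].len >= len) {
                        start = gaps[i].start;
                        gaps[i].start += len;
                        gaps[i].len -= len;
                        return start;
                }
        }
        return -1;
}

int main(void)
{
        /* Two deltas land back to back, each followed by its gap. */
        printf("delta 0 at block %ld\n", delta_alloc(1000));
        printf("delta 1 at block %ld\n", delta_alloc(500));
        return 0;
}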