From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753348AbbELEf4 (ORCPT );
	Tue, 12 May 2015 00:35:56 -0400
Received: from mail.phunq.net ([184.71.0.62]:45459 "EHLO starbase.phunq.net"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751366AbbELEfw (ORCPT );
	Tue, 12 May 2015 00:35:52 -0400
Message-ID: <55518332.10009@phunq.net>
Date: Mon, 11 May 2015 21:36:02 -0700
From: Daniel Phillips
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:31.0) Gecko/20100101 Thunderbird/31.6.0
MIME-Version: 1.0
To: David Lang
CC: Pavel Machek, Howard Chu, Mike Galbraith, Dave Chinner,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	tux3@tux3.org, "Theodore Ts'o", OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent
	performance? (was Tux3 Report: How fast can we fsync?)
References: <1430325763.19371.41.camel@gmail.com>
	<1430334326.7360.25.camel@gmail.com> <20150430002008.GY15810@dastard>
	<1430395641.3180.94.camel@gmail.com> <1430401693.3180.131.camel@gmail.com>
	<55423732.2070509@phunq.net> <55423C05.1000506@symas.com>
	<554246D7.40105@phunq.net> <20150511221223.GD4434@amd>
	<555140E6.1070409@phunq.net>
In-Reply-To:
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

Hi David,

On 05/11/2015 05:12 PM, David Lang wrote:
> On Mon, 11 May 2015, Daniel Phillips wrote:
>
>> On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>>>> It is a fact of life that when you change one aspect of an intimately
>>>>> interconnected system, something else will change as well. You have
>>>>> naive/nonexistent free space management now; when you design something
>>>>> workable there it is going to impact everything else you've already
>>>>> done. It's an easy bet that the impact will be negative, the only
>>>>> question is to what degree.
>>>>
>>>> You might lose that bet. For example, suppose we do strictly linear
>>>> allocation each delta, and just leave nice big gaps between the deltas
>>>> for future expansion. Clearly, we run at similar or identical speed to
>>>> the current naive strategy until we must start filling in the gaps, and
>>>> at that point our layout is not any worse than XFS, which started bad
>>>> and stayed that way.
>>>
>>> Umm, are you sure? If "some areas of disk are faster than others" is
>>> still true on today's hard drives, the gaps will decrease the
>>> performance (as you'll "use up" the fast areas more quickly).
>>
>> That's why I hedged my claim with "similar or identical". The
>> difference in media speed seems to be a relatively small effect
>> compared to extra seeks. It seems that XFS puts big spaces between
>> new directories, and suffers a lot of extra seeks because of it.
>> I propose to batch new directories together initially, then change
>> the allocation goal to a new, relatively empty area if a big batch
>> of files lands on a directory in a crowded region. The "big" gaps
>> would be on the order of delta size, so not really very big.
>
> This is an interesting idea, but what happens if the files don't arrive
> as a big batch, but rather trickle in over time (think a log server that
> is putting files into a bunch of directories at a fairly modest rate per
> directory)?

If files are trickling in then we can afford to spend a lot more time
finding nice places to tuck them in. Log server files are an especially
irksome problem for a redirect-on-write filesystem because the final
block tends to be rewritten many times and we must move it to a new
location each time, so every extent ends up as one block. Oh well. If we
just make sure to have some free space at the end of the file that only
that file can use (until everywhere else is full) then the long term
result will be slightly ravelled blocks that nonetheless tend to be on
the same track or flash block as their logically contiguous neighbours.
There will be just zero or one empty data block mixed into the file tail
as we commit the tail block over and over with the same allocation goal.
Sometimes there will be a block or two of metadata as well, which will
eventually bake themselves into the middle of contiguous data and stop
moving around. Putting this together, we have:

 * At delta flush, break out all the log-type files
 * Dedicate some block groups to append-type files
 * Leave lots of space between files in those block groups
 * Peek at the last block of the file to set the allocation goal

Something like that. What we don't want is to throw those files into the
middle of a lot of rewrite-all files, messing up both kinds of file. We
don't care much about keeping these files near the parent directory
because one big seek per log file in a grep is acceptable; we just need
to avoid thousands of big seeks within the file, and not dribble single
blocks all over the disk.

It would also be nice to merge extents together somehow as the final
block is rewritten. One idea is to retain the final block dirty until
the next delta and write it again into a contiguous position, so the
final block is always flushed twice. We already have the opportunistic
merge logic, but the redirty behavior, and making sure it only happens
to log files, would be a bit fiddly.

We will also play the incremental defragmentation card at some point,
but first we should try hard to control fragmentation in the first
place. Tux3 is well suited to online defragmentation because the delta
commit model makes it easy to move things around efficiently and safely,
but it does generate extra IO, so as a basic mechanism it is not ideal.
When we get to piling on features, that will be high on the list,
because it is relatively easy, and having that fallback gives a certain
sense of security.
> And when you then decide that you have to move the directory/file info,
> doesn't that create a potentially large amount of unexpected IO that
> could end up interfering with what the user is trying to do?

Right, we don't like that and don't plan to rely on it. What we hope for
is behavior that, when you slowly stir the pot, tends to improve the
layout just as often as it degrades it. It may indeed become harder to
find ideal places to put things as time goes by, but we also gain more
information to base decisions on.

Regards,

Daniel