Subject: Re: Tux3 Report: How fast can we fsync?
From: Mike Galbraith
To: Daniel Phillips
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, tux3@tux3.org, "Theodore Ts'o", OGAWA Hirofumi
Date: Wed, 29 Apr 2015 18:42:43 +0200

On Wed, 2015-04-29 at 00:23 -0700, Daniel Phillips wrote:
> On Tuesday, April 28, 2015 11:33:33 PM PDT, Mike Galbraith wrote:
> > On Tue, 2015-04-28 at 23:01 -0700, Daniel Phillips wrote:
> >> On Tuesday, April 28, 2015 7:21:11 PM PDT, Mike Galbraith wrote:
> >>> Where does tux3 live?  What I found looked abandoned.
> >>
> >> Current work is here:
> >>
> >>    https://github.com/OGAWAHirofumi/linux-tux3
> >>
> >> Note, the new fsync code isn't pushed to that tree yet, however
> >> Hirofumi's optimized syncfs is already in there, which isn't a lot
> >> slower.
> >
> > Ah, I did find the right spot, it's just been idle a while.  Where
> > does one find mkfs.tux3?
>
> Hi Mike,
>
> See my reply to Richard.  You are right, we have been developing on
> Hirofumi's branch and master is getting old.  Short version:
>
>    checkout hirofumi-user
>    cd fs/tux3/user
>    make
>    ./tux3 mkfs

Ok, thanks.  I was curious about the horrible looking plain ole dbench
numbers you posted, as when I used to play with it, default looked like
a kinda silly non-io test most frequently used to pile threads on a box
to see when the axles started bending.  Seems the default load has
changed.

With dbench v4.00, tux3 seems to be king of the max_latency hill, but
btrfs took throughput on my box.  With v3.04, tux3 took 1st place at
splashing about in pagecache, but last place at dbench -S.

Hohum, curiosity satisfied.
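For anyone who wants to repeat this, the headings below are the dbench
invocations used.  The per-filesystem legwork amounts to roughly the
following sketch; the device, mount point, and the assumption that a
kernel from the same tux3 branch is running are placeholders/guesses,
the dbench flags are exactly the ones shown with the results:

    #!/bin/sh
    # Sketch only: DEV and MNT are placeholders, and every run gets a
    # freshly made filesystem.  Flags match the v4.00 runs below.
    DEV=/dev/sdb1
    MNT=/mnt/test

    for fs in ext4 xfs btrfs tux3; do
            for clients in 8 16 32; do
                    wipefs -a $DEV              # clear old signatures so mkfs doesn't balk
                    if [ $fs = tux3 ]; then
                            ./tux3 mkfs $DEV    # userspace tool from fs/tux3/user
                    else
                            mkfs.$fs $DEV
                    fi
                    mount -t $fs $DEV $MNT
                    (cd $MNT && /usr/local/bin/dbench -t 30 $clients)
                    umount $MNT
            done
    done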
/usr/local/bin/dbench -t 30 (version 4.00)

ext4     Throughput 31.6148 MB/sec   8 clients   8 procs  max_latency=1696.854 ms
xfs      Throughput 26.4005 MB/sec   8 clients   8 procs  max_latency=1508.581 ms
btrfs    Throughput 82.3654 MB/sec   8 clients   8 procs  max_latency=1274.960 ms
tux3     Throughput 93.0047 MB/sec   8 clients   8 procs  max_latency=99.712 ms

ext4     Throughput 49.9795 MB/sec  16 clients  16 procs  max_latency=2180.108 ms
xfs      Throughput 35.038  MB/sec  16 clients  16 procs  max_latency=3107.321 ms
btrfs    Throughput 148.894 MB/sec  16 clients  16 procs  max_latency=618.070 ms
tux3     Throughput 130.532 MB/sec  16 clients  16 procs  max_latency=141.743 ms

ext4     Throughput 69.2642 MB/sec  32 clients  32 procs  max_latency=3166.374 ms
xfs      Throughput 55.3805 MB/sec  32 clients  32 procs  max_latency=4921.660 ms
btrfs    Throughput 230.488 MB/sec  32 clients  32 procs  max_latency=3673.387 ms
tux3     Throughput 179.473 MB/sec  32 clients  32 procs  max_latency=194.046 ms

/usr/local/bin/dbench -B fileio -t 30 (version 4.00)

ext4     Throughput 84.7361 MB/sec  32 clients  32 procs  max_latency=1401.683 ms
xfs      Throughput 57.9369 MB/sec  32 clients  32 procs  max_latency=1397.910 ms
btrfs    Throughput 268.738 MB/sec  32 clients  32 procs  max_latency=639.411 ms
tux3     Throughput 186.172 MB/sec  32 clients  32 procs  max_latency=167.389 ms

/usr/bin/dbench -t 30 32 (version 3.04)

ext4     Throughput 7920.95 MB/sec  32 procs
xfs      Throughput 674.993 MB/sec  32 procs
btrfs    Throughput 1910.63 MB/sec  32 procs
tux3     Throughput 8262.68 MB/sec  32 procs

/usr/bin/dbench -S -t 30 32 (version 3.04)

ext4     Throughput 87.2774 MB/sec (sync dirs)  32 procs
xfs      Throughput 89.3977 MB/sec (sync dirs)  32 procs
btrfs    Throughput 101.888 MB/sec (sync dirs)  32 procs
tux3     Throughput 78.7463 MB/sec (sync dirs)  32 procs
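And for anyone else who goes hunting for mkfs.tux3: Daniel's short
version above expands to roughly the steps below.  The clone URL and
branch name are the ones he posted; the clone directory and the device
are just placeholders, and actually mounting the result needs a kernel
built from that same branch.

    # Roughly what "checkout hirofumi-user; cd fs/tux3/user; make" amounts to.
    # The device below is a placeholder.
    git clone https://github.com/OGAWAHirofumi/linux-tux3.git
    cd linux-tux3
    git checkout hirofumi-user
    cd fs/tux3/user
    make
    ./tux3 mkfs /dev/sdb1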