From: Ric Wheeler <rwheeler@redhat.com>
To: Eric Sandeen <sandeen@redhat.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>,
	Ric Wheeler <ricwheeler@gmail.com>, Fredrick <fjohnber@zoho.com>,
	linux-ext4@vger.kernel.org, Andreas Dilger <adilger@dilger.ca>,
	wenqing.lz@taobao.com
Subject: Re: ext4_fallocate
Date: Thu, 28 Jun 2012 07:27:38 -0400
Message-ID: <4FEC3FAA.1060503@redhat.com>
In-Reply-To: <4FEB9115.6090309@redhat.com>

On 06/27/2012 07:02 PM, Eric Sandeen wrote:
> On 6/27/12 3:30 PM, Theodore Ts'o wrote:
>> On Tue, Jun 26, 2012 at 04:44:08PM -0400, Eric Sandeen wrote:
>>>> I tried running this fio recipe on v3.3, which I think does a decent job of
>>>> emulating the situation (fallocate 1G, do random 1M writes into it, with
>>>> fsyncs after each):
>>>>
>>>> [test]
>>>> filename=testfile
>>>> rw=randwrite
>>>> size=1g
>>>> filesize=1g
>>>> bs=1024k
>>>> ioengine=sync
>>>> fallocate=1
>>>> fsync=1
>> A better workload would be to use a blocksize of 4k.  By using a
>> blocksize of 1024k, it's not surprising that the metadata overhead is
>> in the noise.
>>
>> Try something like this; this will cause the extent tree overhead to
>> be roughly equal to the data block I/O.
>>
>> [global]
>> rw=randwrite
>> size=128m
>> filesize=1g
>> bs=4k
>> ioengine=sync
>> fallocate=1
>> fsync=1
>>
>> [thread1]
>> filename=testfile
> Well, ok ... TBH I changed it to size=16m to finish in under 20m.... so here are the results:
>
> fallocate 1g, do 16m of 4k random IOs, sync after each:
>
> # for I in a b c; do rm -f testfile; echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=154KB/s, minb=158KB/s, maxb=158KB/s, mint=105989msec, maxt=105989msec
>    WRITE: io=16384KB, aggrb=163KB/s, minb=167KB/s, maxb=167KB/s, mint=99906msec, maxt=99906msec
>    WRITE: io=16384KB, aggrb=176KB/s, minb=180KB/s, maxb=180KB/s, mint=92791msec, maxt=92791msec
>
> same, but overwrite pre-written 1g file (same as the expose-my-data option ;)
>
> # dd if=/dev/zero of=testfile bs=1M count=1024
> # for I in a b c; do echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=164KB/s, minb=168KB/s, maxb=168KB/s, mint=99515msec, maxt=99515msec
>    WRITE: io=16384KB, aggrb=164KB/s, minb=168KB/s, maxb=168KB/s, mint=99371msec, maxt=99371msec
>    WRITE: io=16384KB, aggrb=164KB/s, minb=168KB/s, maxb=168KB/s, mint=99677msec, maxt=99677msec
>
> so no great surprise, small synchronous 4k writes have terrible performance, but I'm still not seeing a lot of fallocate overhead.
>
> xfs, FWIW:
>
> # for I in a b c; do rm -f testfile; echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=202KB/s, minb=207KB/s, maxb=207KB/s, mint=80980msec, maxt=80980msec
>    WRITE: io=16384KB, aggrb=203KB/s, minb=208KB/s, maxb=208KB/s, mint=80508msec, maxt=80508msec
>    WRITE: io=16384KB, aggrb=204KB/s, minb=208KB/s, maxb=208KB/s, mint=80291msec, maxt=80291msec
>
> # dd if=/dev/zero of=testfile bs=1M count=1024
> # for I in a b c; do echo 3 > /proc/sys/vm/drop_caches; fio tytso.fio | grep 2>&1 WRITE; done
>
>    WRITE: io=16384KB, aggrb=197KB/s, minb=202KB/s, maxb=202KB/s, mint=82869msec, maxt=82869msec
>    WRITE: io=16384KB, aggrb=203KB/s, minb=208KB/s, maxb=208KB/s, mint=80348msec, maxt=80348msec
>    WRITE: io=16384KB, aggrb=202KB/s, minb=207KB/s, maxb=207KB/s, mint=80827msec, maxt=80827msec
>
> Again, I think this is just a diabolical workload ;)
>
> -Eric

We need to keep in mind what the goal of pre-allocation is (or should be) - spend 
a bit of extra time in the allocation call so that we get a really good, contiguous 
layout on disk, which ultimately helps streaming read/write workloads.

If you have a reasonably small file, pre-allocation is probably simply a waste 
of time - you would be better off overwriting the maximum file size with all 
zeros (even a 1GB file would take only a few seconds).
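For that case it really is just this much - a minimal userspace sketch, where the 
file name, the 1GB size and the USE_FALLOCATE switch are only for illustration, not 
anything we ship:

/* Sketch only: reserve with fallocate vs. zero-fill the file up front. */
#include <fcntl.h>
#include <unistd.h>

#define FILE_SIZE (1024L * 1024 * 1024)		/* 1GB, for illustration */

int main(void)
{
	static char buf[1024 * 1024];		/* static => already zeroed */
	int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0)
		return 1;
#ifdef USE_FALLOCATE
	/* Fast to set up, but later random writes pay the
	 * unwritten->written extent conversion cost. */
	if (posix_fallocate(fd, 0, FILE_SIZE))
		return 1;
#else
	/* Slower up front (a few seconds for 1GB), but every later
	 * write is a plain overwrite of already-initialized blocks. */
	for (long off = 0; off < FILE_SIZE; off += sizeof(buf))
		if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
			return 1;
	fsync(fd);
#endif
	close(fd);
	return 0;
}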

If the file is large enough to be interesting, I think we might want to consider a 
scheme that would bring small random IOs more into line with the 1MB results Eric 
saw.

One way to do that might be to have a minimum "chunk" size that we zero out for any 
IO to an allocated-but-unwritten extent: if you write 4KB into the middle of such a 
region, we pad the write out to the nearest MB boundary with zeros.
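
Roughly like this - a strawman only, the 1MB CHUNK_SIZE and the helper name are 
mine, not anything in ext4 today:

#include <stddef.h>
#include <sys/types.h>

#define CHUNK_SIZE (1024 * 1024)	/* 1MB; the sweet spot may differ */

/* Expand a small write at (pos, len) to the enclosing chunk-aligned
 * range, so the unwritten extent gets zeroed/converted in CHUNK_SIZE
 * units rather than one 4KB block at a time. */
static void chunk_align(off_t pos, size_t len,
			off_t *zero_start, size_t *zero_len)
{
	off_t start = pos & ~((off_t)CHUNK_SIZE - 1);
	off_t end = (pos + (off_t)len + CHUNK_SIZE - 1) &
		    ~((off_t)CHUNK_SIZE - 1);

	*zero_start = start;
	*zero_len = (size_t)(end - start);
}

e.g. a 4KB write at offset 5MB+17KB would zero and convert the whole [5MB, 6MB) 
chunk, and any later 4KB write landing in that chunk is an ordinary overwrite.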

Note that for the target class of drives (S-ATA) that Ted mentioned earlier, a 
random 1MB write is not that much slower than a random 4KB write (you already pay 
the head movement cost either way).  Of course, the sweet spot might turn out to be 
a bit smaller or larger.
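
Back of the envelope, with assumed (not measured) numbers for a 7200rpm S-ATA 
drive:

/* Rough service-time model; the 10ms positioning time and 100MB/s
 * streaming rate are assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
	double seek_ms = 10.0;		/* assumed avg seek + rotation */
	double stream_mb_s = 100.0;	/* assumed sequential bandwidth */

	double t_4k = seek_ms + (4.0 / 1024.0) / stream_mb_s * 1000.0;
	double t_1m = seek_ms + (1.0 / stream_mb_s) * 1000.0;

	printf("random 4KB write: ~%.1f ms\n", t_4k);	/* ~10ms */
	printf("random 1MB write: ~%.1f ms\n", t_1m);	/* ~20ms */
	return 0;
}

i.e. under those assumptions, padding out to 1MB costs roughly 2x the 4KB write, 
not 256x.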

Ric


