From: "Neto, Antonio Jose Rodrigues" <Antonio.Jose.Rodrigues.Neto@netapp.com>
To: zhu.yanhai@gmail.com, "Rettl, Matthias" <Matthias.Rettl@netapp.com>
Cc: axboe@kernel.dk, fio@vger.kernel.org
Subject: Re: Bad performance when reads and writes on same LUN
Date: Fri, 2 Sep 2011 16:50:19 -0700
Message-ID: <116197A0179A3D4A95B135E5F0341FC11041B4D2@SACMVEXC2-PRD.hq.netapp.com>


What do you suggest?

Direct=1 for both?
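
For reference, a minimal sketch of what "direct=1 for both" could look like,
based on the job file Matthias posted below (directory, sizes and depths are
copied from that original config; the only change is direct=1 in [global] so
the readers also run unbuffered and libaio stays truly asynchronous):

[global]
directory=/n1-dm-0
rw=randread
ioengine=libaio
iodepth=4
size=1024m
invalidate=1
; changed from direct=0: buffered libaio degrades to synchronous I/O
direct=1
runtime=60
time_based

[Reader-1]

[Reader-2]

[Reader-3]

[Writer]
rw=randwrite
ioengine=libaio
iodepth=32
; direct=1 is now inherited from [global], so it no longer needs to be set here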

----- Original Message -----
From: Zhu Yanhai <zhu.yanhai@gmail.com>
To: Rettl, Matthias
Cc: Jens Axboe <axboe@kernel.dk>; Neto, Antonio Jose Rodrigues; fio@vger.kernel.org <fio@vger.kernel.org>
Sent: Fri Sep 02 16:43:54 2011
Subject: Re: Bad performance when reads and writes on same LUN

Hi,
Why did you mix buffered aio reads with unbuffered (direct I/O) aio writes?
Buffered aio will fall back to synchronous I/O, and it is known that mixing
buffered I/O with direct I/O hurts performance.

Regards,
Zhu Yanhai

2011/9/2 Rettl, Matthias <Matthias.Rettl@netapp.com>:
> Hi fio users and developers,
>
> I am working on a setup for a customer with a very specific FIO
> configuration:
> We have to use 8 LUNs (on a high-performance storage system, by the
> way), running three readers and one writer per LUN.
>
> So we use 8 fio config files, each of which looks the same except for
> the directory= entry:
>
> [global]
> directory=/n1-dm-0
> rw=randread
> ioengine=libaio
> iodepth=4
> size=1024m
> invalidate=1
> direct=0
> runtime=60
> time_based
>
> [Reader-1]
>
> [Reader-2]
>
> [Reader-3]
>
> [Writer]
> rw=randwrite
> ioengine=libaio
> iodepth=32
> direct=1
>
>
> When running these configurations (8 fio instances in parallel) I get these
> results (grep'ed IOPS from the 8 output files):
>  read : io=128520KB, bw=2141.1KB/s, iops=535 , runt= 60001msec
>  read : io=127048KB, bw=2117.5KB/s, iops=529 , runt= 60001msec
>  read : io=126816KB, bw=2113.6KB/s, iops=528 , runt= 60001msec
>  write: io=3811.8MB, bw=65052KB/s, iops=16263 , runt= 60001msec
>  read : io=123704KB, bw=2061.7KB/s, iops=515 , runt= 60004msec
>  read : io=123656KB, bw=2060.9KB/s, iops=515 , runt= 60002msec
>  read : io=122924KB, bw=2048.7KB/s, iops=512 , runt= 60004msec
>  write: io=3761.8MB, bw=64187KB/s, iops=16046 , runt= 60002msec
>  read : io=127636KB, bw=2127.3KB/s, iops=531 , runt= 60001msec
>  read : io=126440KB, bw=2107.4KB/s, iops=526 , runt= 60001msec
>  read : io=125612KB, bw=2093.6KB/s, iops=523 , runt= 60001msec
>  write: io=3832.4MB, bw=65403KB/s, iops=16350 , runt= 60002msec
>  read : io=125344KB, bw=2089.4KB/s, iops=522 , runt= 60001msec
>  read : io=125284KB, bw=2088.4KB/s, iops=522 , runt= 60001msec
>  read : io=125080KB, bw=2084.7KB/s, iops=521 , runt= 60001msec
>  write: io=3784.8MB, bw=64592KB/s, iops=16147 , runt= 60001msec
>  read : io=127656KB, bw=2127.6KB/s, iops=531 , runt= 60001msec
>  read : io=127144KB, bw=2119.4KB/s, iops=529 , runt= 60001msec
>  read : io=126248KB, bw=2104.1KB/s, iops=526 , runt= 60001msec
>  write: io=3828.9MB, bw=65343KB/s, iops=16335 , runt= 60002msec
>  read : io=124236KB, bw=2070.6KB/s, iops=517 , runt= 60001msec
>  read : io=123908KB, bw=2065.2KB/s, iops=516 , runt= 60001msec
>  read : io=123352KB, bw=2055.9KB/s, iops=513 , runt= 60001msec
>  write: io=3764.1MB, bw=64253KB/s, iops=16063 , runt= 60001msec
>  read : io=127784KB, bw=2129.8KB/s, iops=532 , runt= 60001msec
>  read : io=127276KB, bw=2121.3KB/s, iops=530 , runt= 60001msec
>  read : io=127240KB, bw=2120.7KB/s, iops=530 , runt= 60001msec
>  write: io=3839.5MB, bw=65524KB/s, iops=16380 , runt= 60002msec
>  read : io=124864KB, bw=2081.4KB/s, iops=520 , runt= 60001msec
>  read : io=124008KB, bw=2066.8KB/s, iops=516 , runt= 60002msec
>  read : io=124068KB, bw=2067.8KB/s, iops=516 , runt= 60001msec
>  write: io=3748.9MB, bw=63979KB/s, iops=15994 , runt= 60001msec
>
>
> As you can see, read performance is very bad! Writes are fine, but reads
> are not acceptable!
> By the way, the reads run with direct=0, and I can see that they are not
> even hitting the storage system!
>
> I then tried removing the writers from the fio config files, and the
> read results are what we would expect:
>
>  read : io=1522.1MB, bw=25977KB/s, iops=6494 , runt= 60001msec
>  read : io=1526.3MB, bw=26047KB/s, iops=6511 , runt= 60000msec
>  read : io=1526.9MB, bw=26058KB/s, iops=6514 , runt= 60001msec
>  read : io=990.98MB, bw=16912KB/s, iops=4227 , runt= 60001msec
>  read : io=990.85MB, bw=16910KB/s, iops=4227 , runt= 60001msec
>  read : io=1107.5MB, bw=18900KB/s, iops=4724 , runt= 60001msec
>  read : io=1006.8MB, bw=17181KB/s, iops=4295 , runt= 60001msec
>  read : io=1011.2MB, bw=17256KB/s, iops=4314 , runt= 60001msec
>  read : io=1046.2MB, bw=17854KB/s, iops=4463 , runt= 60001msec
>  read : io=987.33MB, bw=16850KB/s, iops=4212 , runt= 60001msec
>  read : io=991.72MB, bw=16925KB/s, iops=4231 , runt= 60000msec
>  read : io=1102.4MB, bw=18813KB/s, iops=4703 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1012.1MB, bw=17287KB/s, iops=4321 , runt= 60001msec
>  read : io=1128.7MB, bw=19252KB/s, iops=4813 , runt= 60001msec
>  read : io=1186.1MB, bw=20257KB/s, iops=5064 , runt= 60001msec
>  read : io=996.34MB, bw=17004KB/s, iops=4250 , runt= 60001msec
>  read : io=1051.6MB, bw=17945KB/s, iops=4486 , runt= 60001msec
>  read : io=1012.9MB, bw=17286KB/s, iops=4321 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1125.5MB, bw=19208KB/s, iops=4801 , runt= 60001msec
>  read : io=1125.3MB, bw=19204KB/s, iops=4800 , runt= 60001msec
>  read : io=1005.2MB, bw=17155KB/s, iops=4288 , runt= 60001msec
>  read : io=1016.7MB, bw=17351KB/s, iops=4337 , runt= 60001msec
>
> We are on a RHEL 5.5 box with 8 Gb FC. With another tool I get about
> 180k 4k direct (!) random-read IOPS (with 64 threads across 4 LUNs); a
> rough fio equivalent of that workload is sketched after the quoted
> message below.
>
> We have also tried using separate read and write config files, but as
> soon as we run the tests against the same LUN, read performance drops!
>
> Any feedback is highly appreciated!
>
> Cheers,
> Matt
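
As a point of comparison for the ~180k IOPS figure Matthias mentions, a rough
fio equivalent of that workload (4k direct random reads, 64 threads spread
across 4 LUNs) might look like the sketch below. The device paths are
placeholders and the iodepth is an assumption, since the other tool's settings
are not given in the original post:

[global]
ioengine=libaio
; direct=1: raw, unbuffered random reads, as in the comparison run
direct=1
rw=randread
bs=4k
; assumed queue depth per job; not stated in the original post
iodepth=16
runtime=60
time_based
group_reporting

; 16 jobs per LUN x 4 LUNs = 64 concurrent threads
[lun-1]
filename=/dev/mapper/lun-1
numjobs=16

[lun-2]
filename=/dev/mapper/lun-2
numjobs=16

[lun-3]
filename=/dev/mapper/lun-3
numjobs=16

[lun-4]
filename=/dev/mapper/lun-4
numjobs=16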

