* Re: Bad performance when reads and writes on same LUN
@ 2011-09-02 23:50 Neto, Antonio Jose Rodrigues
  2011-09-03  0:00 ` Zhu Yanhai
  0 siblings, 1 reply; 8+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2011-09-02 23:50 UTC (permalink / raw)
  To: zhu.yanhai, Rettl, Matthias; +Cc: axboe, fio

[-- Attachment #1: Type: text/plain, Size: 6067 bytes --]

What do you suggest?

Direct=1 for both?

----- Original Message -----
From: Zhu Yanhai <zhu.yanhai@gmail.com>
To: Rettl, Matthias
Cc: Jens Axboe <axboe@kernel.dk>; Neto, Antonio Jose Rodrigues; fio@vger.kernel.org <fio@vger.kernel.org>
Sent: Fri Sep 02 16:43:54 2011
Subject: Re: Bad performance when reads and writes on same LUN

Hi,
Why did you mix buffered aio reads with unbuffered (direct io) aio writes?
Buffered aio will fall back to sync io, and it's known that mixing
buffered io with direct io will hurt performance.

Regards,
Zhu Yanhai

2011/9/2 Rettl, Matthias <Matthias.Rettl@netapp.com>:
> Hi fio users and developers,
>
> I am working on a setup for a customer with a very specific FIO
> configuration:
> We have to use 8 LUNs (from a very performant storage system, by the
> way), doing three readers and one writer per LUN.
>
> So we use 8 fio config files, each of which looks the same except for the
> directory= entry:
>
> [global]
> directory=/n1-dm-0
> rw=randread
> ioengine=libaio
> iodepth=4
> size=1024m
> invalidate=1
> direct=0
> runtime=60
> time_based
>
> [Reader-1]
>
> [Reader-2]
>
> [Reader-3]
>
> [Writer]
> rw=randwrite
> ioengine=libaio
> iodepth=32
> direct=1
>
>
> When running these configurations (8 fio instances in parallel) I receive these
> results (grep'ed iops lines from the 8 output files):
>  read : io=128520KB, bw=2141.1KB/s, iops=535 , runt= 60001msec
>  read : io=127048KB, bw=2117.5KB/s, iops=529 , runt= 60001msec
>  read : io=126816KB, bw=2113.6KB/s, iops=528 , runt= 60001msec
>  write: io=3811.8MB, bw=65052KB/s, iops=16263 , runt= 60001msec
>  read : io=123704KB, bw=2061.7KB/s, iops=515 , runt= 60004msec
>  read : io=123656KB, bw=2060.9KB/s, iops=515 , runt= 60002msec
>  read : io=122924KB, bw=2048.7KB/s, iops=512 , runt= 60004msec
>  write: io=3761.8MB, bw=64187KB/s, iops=16046 , runt= 60002msec
>  read : io=127636KB, bw=2127.3KB/s, iops=531 , runt= 60001msec
>  read : io=126440KB, bw=2107.4KB/s, iops=526 , runt= 60001msec
>  read : io=125612KB, bw=2093.6KB/s, iops=523 , runt= 60001msec
>  write: io=3832.4MB, bw=65403KB/s, iops=16350 , runt= 60002msec
>  read : io=125344KB, bw=2089.4KB/s, iops=522 , runt= 60001msec
>  read : io=125284KB, bw=2088.4KB/s, iops=522 , runt= 60001msec
>  read : io=125080KB, bw=2084.7KB/s, iops=521 , runt= 60001msec
>  write: io=3784.8MB, bw=64592KB/s, iops=16147 , runt= 60001msec
>  read : io=127656KB, bw=2127.6KB/s, iops=531 , runt= 60001msec
>  read : io=127144KB, bw=2119.4KB/s, iops=529 , runt= 60001msec
>  read : io=126248KB, bw=2104.1KB/s, iops=526 , runt= 60001msec
>  write: io=3828.9MB, bw=65343KB/s, iops=16335 , runt= 60002msec
>  read : io=124236KB, bw=2070.6KB/s, iops=517 , runt= 60001msec
>  read : io=123908KB, bw=2065.2KB/s, iops=516 , runt= 60001msec
>  read : io=123352KB, bw=2055.9KB/s, iops=513 , runt= 60001msec
>  write: io=3764.1MB, bw=64253KB/s, iops=16063 , runt= 60001msec
>  read : io=127784KB, bw=2129.8KB/s, iops=532 , runt= 60001msec
>  read : io=127276KB, bw=2121.3KB/s, iops=530 , runt= 60001msec
>  read : io=127240KB, bw=2120.7KB/s, iops=530 , runt= 60001msec
>  write: io=3839.5MB, bw=65524KB/s, iops=16380 , runt= 60002msec
>  read : io=124864KB, bw=2081.4KB/s, iops=520 , runt= 60001msec
>  read : io=124008KB, bw=2066.8KB/s, iops=516 , runt= 60002msec
>  read : io=124068KB, bw=2067.8KB/s, iops=516 , runt= 60001msec
>  write: io=3748.9MB, bw=63979KB/s, iops=15994 , runt= 60001msec
>
>
> As you can see, read-performance is very bad! Writes are fine, but reads
> are not acceptable!
> By the way, for reads the parameter "direct=0" is set and I can see that
> the reads are not even hitting the storage system!
>
> I have then tried to remove the writers from the fio config files, and
> the read results are as we would expect them:
>
>  read : io=1522.1MB, bw=25977KB/s, iops=6494 , runt= 60001msec
>  read : io=1526.3MB, bw=26047KB/s, iops=6511 , runt= 60000msec
>  read : io=1526.9MB, bw=26058KB/s, iops=6514 , runt= 60001msec
>  read : io=990.98MB, bw=16912KB/s, iops=4227 , runt= 60001msec
>  read : io=990.85MB, bw=16910KB/s, iops=4227 , runt= 60001msec
>  read : io=1107.5MB, bw=18900KB/s, iops=4724 , runt= 60001msec
>  read : io=1006.8MB, bw=17181KB/s, iops=4295 , runt= 60001msec
>  read : io=1011.2MB, bw=17256KB/s, iops=4314 , runt= 60001msec
>  read : io=1046.2MB, bw=17854KB/s, iops=4463 , runt= 60001msec
>  read : io=987.33MB, bw=16850KB/s, iops=4212 , runt= 60001msec
>  read : io=991.72MB, bw=16925KB/s, iops=4231 , runt= 60000msec
>  read : io=1102.4MB, bw=18813KB/s, iops=4703 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1012.1MB, bw=17287KB/s, iops=4321 , runt= 60001msec
>  read : io=1128.7MB, bw=19252KB/s, iops=4813 , runt= 60001msec
>  read : io=1186.1MB, bw=20257KB/s, iops=5064 , runt= 60001msec
>  read : io=996.34MB, bw=17004KB/s, iops=4250 , runt= 60001msec
>  read : io=1051.6MB, bw=17945KB/s, iops=4486 , runt= 60001msec
>  read : io=1012.9MB, bw=17286KB/s, iops=4321 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1125.5MB, bw=19208KB/s, iops=4801 , runt= 60001msec
>  read : io=1125.3MB, bw=19204KB/s, iops=4800 , runt= 60001msec
>  read : io=1005.2MB, bw=17155KB/s, iops=4288 , runt= 60001msec
>  read : io=1016.7MB, bw=17351KB/s, iops=4337 , runt= 60001msec
>
> We are on a RHEL 5.5 Box, 8 Gb FC. With another tool I receive about
> 180k 4k direct (!) random-read IOPS (with 64 threads to 4 LUNs).
>
> We have also tried to use separate read- and write config files. But as
> soon as we run the tests against the same LUN, read-performance drops!
>
> Any feedback is highly appreciated!
>
> Cheers,
> Matt
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

[-- Attachment #2: Type: text/html, Size: 7478 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Bad performance when reads and writes on same LUN
  2011-09-02 23:50 Bad performance when reads and writes on same LUN Neto, Antonio Jose Rodrigues
@ 2011-09-03  0:00 ` Zhu Yanhai
  0 siblings, 0 replies; 8+ messages in thread
From: Zhu Yanhai @ 2011-09-03  0:00 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: Rettl, Matthias, axboe, fio

Yes, I think so.
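
For reference, a minimal sketch of how each of the 8 posted job files might
look with direct=1 for both readers and writer (only the direct= setting
changes; everything else is kept from the original configuration):

[global]
directory=/n1-dm-0
rw=randread
ioengine=libaio
iodepth=4
size=1024m
invalidate=1
; changed from direct=0 so the readers bypass the page cache as well
direct=1
runtime=60
time_based

[Reader-1]

[Reader-2]

[Reader-3]

[Writer]
rw=randwrite
iodepth=32
; ioengine=libaio and direct=1 are now inherited from [global]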

2011/9/3 Neto, Antonio Jose Rodrigues <Antonio.Jose.Rodrigues.Neto@netapp.com>:
> What do you suggest?
>
> Direct=1 for both?
>
> ----- Original Message -----
> From: Zhu Yanhai <zhu.yanhai@gmail.com>
> To: Rettl, Matthias
> Cc: Jens Axboe <axboe@kernel.dk>; Neto, Antonio Jose Rodrigues;
> fio@vger.kernel.org <fio@vger.kernel.org>
> Sent: Fri Sep 02 16:43:54 2011
> Subject: Re: Bad performance when reads and writes on same LUN
>
> Hi,
> Why did you mix buffered aio reads with unbuffered (direct io) aio writes?
> Buffered aio will fall back to sync io, and it's known that mixing
> buffered io with direct io will hurt performance.
>
> Regards,
> Zhu Yanhai
>
> 2011/9/2 Rettl, Matthias <Matthias.Rettl@netapp.com>:
>> Hi fio users and developers,
>>
>> I am working on a setup for a customer with a very specific FIO
>> configuration:
>> We have to use 8 LUNs (from a very performant storage system, by the
>> way), doing three readers and one writer per LUN.
>>
>> So we use 8 fio config files, each of which looks the same except for the
>> directory= entry:
>>
>> [global]
>> directory=/n1-dm-0
>> rw=randread
>> ioengine=libaio
>> iodepth=4
>> size=1024m
>> invalidate=1
>> direct=0
>> runtime=60
>> time_based
>>
>> [Reader-1]
>>
>> [Reader-2]
>>
>> [Reader-3]
>>
>> [Writer]
>> rw=randwrite
>> ioengine=libaio
>> iodepth=32
>> direct=1
>>
>>
>> When running these configurations (8 fio instances in parallel) I receive these
>> results (grep'ed iops lines from the 8 output files):
>>  read : io=128520KB, bw=2141.1KB/s, iops=535 , runt= 60001msec
>>  read : io=127048KB, bw=2117.5KB/s, iops=529 , runt= 60001msec
>>  read : io=126816KB, bw=2113.6KB/s, iops=528 , runt= 60001msec
>>  write: io=3811.8MB, bw=65052KB/s, iops=16263 , runt= 60001msec
>>  read : io=123704KB, bw=2061.7KB/s, iops=515 , runt= 60004msec
>>  read : io=123656KB, bw=2060.9KB/s, iops=515 , runt= 60002msec
>>  read : io=122924KB, bw=2048.7KB/s, iops=512 , runt= 60004msec
>>  write: io=3761.8MB, bw=64187KB/s, iops=16046 , runt= 60002msec
>>  read : io=127636KB, bw=2127.3KB/s, iops=531 , runt= 60001msec
>>  read : io=126440KB, bw=2107.4KB/s, iops=526 , runt= 60001msec
>>  read : io=125612KB, bw=2093.6KB/s, iops=523 , runt= 60001msec
>>  write: io=3832.4MB, bw=65403KB/s, iops=16350 , runt= 60002msec
>>  read : io=125344KB, bw=2089.4KB/s, iops=522 , runt= 60001msec
>>  read : io=125284KB, bw=2088.4KB/s, iops=522 , runt= 60001msec
>>  read : io=125080KB, bw=2084.7KB/s, iops=521 , runt= 60001msec
>>  write: io=3784.8MB, bw=64592KB/s, iops=16147 , runt= 60001msec
>>  read : io=127656KB, bw=2127.6KB/s, iops=531 , runt= 60001msec
>>  read : io=127144KB, bw=2119.4KB/s, iops=529 , runt= 60001msec
>>  read : io=126248KB, bw=2104.1KB/s, iops=526 , runt= 60001msec
>>  write: io=3828.9MB, bw=65343KB/s, iops=16335 , runt= 60002msec
>>  read : io=124236KB, bw=2070.6KB/s, iops=517 , runt= 60001msec
>>  read : io=123908KB, bw=2065.2KB/s, iops=516 , runt= 60001msec
>>  read : io=123352KB, bw=2055.9KB/s, iops=513 , runt= 60001msec
>>  write: io=3764.1MB, bw=64253KB/s, iops=16063 , runt= 60001msec
>>  read : io=127784KB, bw=2129.8KB/s, iops=532 , runt= 60001msec
>>  read : io=127276KB, bw=2121.3KB/s, iops=530 , runt= 60001msec
>>  read : io=127240KB, bw=2120.7KB/s, iops=530 , runt= 60001msec
>>  write: io=3839.5MB, bw=65524KB/s, iops=16380 , runt= 60002msec
>>  read : io=124864KB, bw=2081.4KB/s, iops=520 , runt= 60001msec
>>  read : io=124008KB, bw=2066.8KB/s, iops=516 , runt= 60002msec
>>  read : io=124068KB, bw=2067.8KB/s, iops=516 , runt= 60001msec
>>  write: io=3748.9MB, bw=63979KB/s, iops=15994 , runt= 60001msec
>>
>>
>> As you can see, read-performance is very bad! Writes are fine, but reads
>> are not acceptable!
>> By the way, for reads the parameter "direct=0" is set and I can see that
>> the reads are not even hitting the storage system!
>>
>> I have then tried to remove the writers from the fio config files, and
>> the read results are as we would expect them:
>>
>>  read : io=1522.1MB, bw=25977KB/s, iops=6494 , runt= 60001msec
>>  read : io=1526.3MB, bw=26047KB/s, iops=6511 , runt= 60000msec
>>  read : io=1526.9MB, bw=26058KB/s, iops=6514 , runt= 60001msec
>>  read : io=990.98MB, bw=16912KB/s, iops=4227 , runt= 60001msec
>>  read : io=990.85MB, bw=16910KB/s, iops=4227 , runt= 60001msec
>>  read : io=1107.5MB, bw=18900KB/s, iops=4724 , runt= 60001msec
>>  read : io=1006.8MB, bw=17181KB/s, iops=4295 , runt= 60001msec
>>  read : io=1011.2MB, bw=17256KB/s, iops=4314 , runt= 60001msec
>>  read : io=1046.2MB, bw=17854KB/s, iops=4463 , runt= 60001msec
>>  read : io=987.33MB, bw=16850KB/s, iops=4212 , runt= 60001msec
>>  read : io=991.72MB, bw=16925KB/s, iops=4231 , runt= 60000msec
>>  read : io=1102.4MB, bw=18813KB/s, iops=4703 , runt= 60001msec
>>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>>  read : io=1012.1MB, bw=17287KB/s, iops=4321 , runt= 60001msec
>>  read : io=1128.7MB, bw=19252KB/s, iops=4813 , runt= 60001msec
>>  read : io=1186.1MB, bw=20257KB/s, iops=5064 , runt= 60001msec
>>  read : io=996.34MB, bw=17004KB/s, iops=4250 , runt= 60001msec
>>  read : io=1051.6MB, bw=17945KB/s, iops=4486 , runt= 60001msec
>>  read : io=1012.9MB, bw=17286KB/s, iops=4321 , runt= 60001msec
>>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>>  read : io=1125.5MB, bw=19208KB/s, iops=4801 , runt= 60001msec
>>  read : io=1125.3MB, bw=19204KB/s, iops=4800 , runt= 60001msec
>>  read : io=1005.2MB, bw=17155KB/s, iops=4288 , runt= 60001msec
>>  read : io=1016.7MB, bw=17351KB/s, iops=4337 , runt= 60001msec
>>
>> We are on a RHEL 5.5 Box, 8 Gb FC. With another tool I receive about
>> 180k 4k direct (!) random-read IOPS (with 64 threads to 4 LUNs).
>>
>> We have also tried to use separate read- and write config files. But as
>> soon as we run the tests against the same LUN, read-performance drops!
>>
>> Any feedback is highly appreciated!
>>
>> Cheers,
>> Matt
>>
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe fio" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Bad performance when reads and writes on same LUN
  2011-09-03  3:24 Neto, Antonio Jose Rodrigues
@ 2011-09-03  3:28 ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2011-09-03  3:28 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: zhu.yanhai, Rettl, Matthias, fio

On 2011-09-02 21:24, Neto, Antonio Jose Rodrigues wrote:
> Thanks Jens, but why don't we have performance issues when we run READS only (not mixed)? (We run with direct=0.)

Let me repeat: please don't top post! Etiquette on most open source
lists is to bottom post, as you can see other people doing.

Apparently you don't have performance problems with buffered reads since
you don't need a large queue depth to get good performance on reads
alone.

Since you seem a little lost, let me suggest that you start by
diagnosing the raw read vs write performance of the device. Get rid of
the file system and use filename=/dev/XXX to access that device directly.
Beware that this will eat the data that is currently on the device. Once
you get an idea of what the device can actually do, you can start to
consider what impact the file system has on that (if any; I looked at
your job file, and each job would use separate files, so you should not
have any buffered vs direct unhappiness going on).
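
A minimal raw-device job file along those lines might look like the sketch
below (assumptions: 4k random I/O and a queue depth of 32 are picked for
illustration, and /dev/XXX stays a placeholder for the actual LUN device;
running this will destroy whatever data is on it):

[global]
; /dev/XXX is a placeholder for the LUN block device, no file system involved
filename=/dev/XXX
ioengine=libaio
direct=1
bs=4k
runtime=60
time_based

[raw-randread]
rw=randread
iodepth=32

[raw-randwrite]
; stonewall makes this job wait for the read job to finish,
; so raw read and raw write performance are measured separately
stonewall
rw=randwrite
iodepth=32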


-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Bad performance when reads and writes on same LUN
@ 2011-09-03  3:24 Neto, Antonio Jose Rodrigues
  2011-09-03  3:28 ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2011-09-03  3:24 UTC (permalink / raw)
  To: axboe; +Cc: zhu.yanhai, Rettl, Matthias, fio

[-- Attachment #1: Type: text/plain, Size: 831 bytes --]

Thanks Jens, but why don't we have performance issues when we run READS only (not mixed)? (We run with direct=0.)

----- Original Message -----
From: Jens Axboe <axboe@kernel.dk>
To: Neto, Antonio Jose Rodrigues
Cc: zhu.yanhai@gmail.com <zhu.yanhai@gmail.com>; Rettl, Matthias; fio@vger.kernel.org <fio@vger.kernel.org>
Sent: Fri Sep 02 20:09:26 2011
Subject: Re: Bad performance when reads and writes on same LUN

On 2011-09-02 21:06, Neto, Antonio Jose Rodrigues wrote:
> The question that I have is:
> 
> "Buffered aio will fall back to sync io, and it's known that mixing"
> 
> Why will buffered aio fall back to sync io?

Please don't top post; reply beneath the original text.

Buffered aio will degenerate to normal sync IO, because the kernel does
not support async buffered io.

-- 
Jens Axboe


[-- Attachment #2: Type: text/html, Size: 1357 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Bad performance when reads and writes on same LUN
  2011-09-03  3:06 Neto, Antonio Jose Rodrigues
@ 2011-09-03  3:09 ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2011-09-03  3:09 UTC (permalink / raw)
  To: Neto, Antonio Jose Rodrigues; +Cc: zhu.yanhai, Rettl, Matthias, fio

On 2011-09-02 21:06, Neto, Antonio Jose Rodrigues wrote:
> The question that I have is:
> 
> "Buffered aio will fall back to sync io, and it's known that mixing"
> 
> Why will buffered aio fall back to sync io?

Please don't top post; reply beneath the original text.

Buffered aio will degenerate to normal sync IO, because the kernel does
not support async buffered io.
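
In fio terms this means the posted direct=0 readers end up behaving roughly
like the following synchronous job (a sketch for illustration only;
ioengine=psync with iodepth=1 approximates what the kernel actually does
with buffered libaio):

[Reader-buffered-equivalent]
directory=/n1-dm-0
rw=randread
; buffered "async" reads degenerate to plain synchronous reads,
; so psync at depth 1 is roughly what gets executed
ioengine=psync
iodepth=1
size=1024m
invalidate=1
direct=0
runtime=60
time_based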

-- 
Jens Axboe


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Bad performance when reads and writes on same LUN
@ 2011-09-03  3:06 Neto, Antonio Jose Rodrigues
  2011-09-03  3:09 ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: Neto, Antonio Jose Rodrigues @ 2011-09-03  3:06 UTC (permalink / raw)
  To: zhu.yanhai, Rettl, Matthias; +Cc: axboe, fio

[-- Attachment #1: Type: text/plain, Size: 6196 bytes --]

The question that I have is:

"Buffered aio will fall back to sync io, and it's known that mixing"

Why will buffered aio fall back to sync io?

neto from Brazil

----- Original Message -----
From: Zhu Yanhai <zhu.yanhai@gmail.com>
To: Rettl, Matthias
Cc: Jens Axboe <axboe@kernel.dk>; Neto, Antonio Jose Rodrigues; fio@vger.kernel.org <fio@vger.kernel.org>
Sent: Fri Sep 02 16:43:54 2011
Subject: Re: Bad performance when reads and writes on same LUN

Hi,
Why did you mix buffered aio reads with unbuffered (direct io) aio writes?
Buffered aio will fall back to sync io, and it's known that mixing
buffered io with direct io will hurt performance.

Regards,
Zhu Yanhai

2011/9/2 Rettl, Matthias <Matthias.Rettl@netapp.com>:
> Hi fio users and developers,
>
> I am working on a setup for a customer with a very specific FIO
> configuration:
> We have to use 8 LUNs (from a very performant storage system, by the
> way), doing three readers and one writer per LUN.
>
> So we use 8 fio config files, each of which looks the same except for the
> directory= entry:
>
> [global]
> directory=/n1-dm-0
> rw=randread
> ioengine=libaio
> iodepth=4
> size=1024m
> invalidate=1
> direct=0
> runtime=60
> time_based
>
> [Reader-1]
>
> [Reader-2]
>
> [Reader-3]
>
> [Writer]
> rw=randwrite
> ioengine=libaio
> iodepth=32
> direct=1
>
>
> When running these configurations (8 fio instances in parallel) I receive these
> results (grep'ed iops lines from the 8 output files):
>  read : io=128520KB, bw=2141.1KB/s, iops=535 , runt= 60001msec
>  read : io=127048KB, bw=2117.5KB/s, iops=529 , runt= 60001msec
>  read : io=126816KB, bw=2113.6KB/s, iops=528 , runt= 60001msec
>  write: io=3811.8MB, bw=65052KB/s, iops=16263 , runt= 60001msec
>  read : io=123704KB, bw=2061.7KB/s, iops=515 , runt= 60004msec
>  read : io=123656KB, bw=2060.9KB/s, iops=515 , runt= 60002msec
>  read : io=122924KB, bw=2048.7KB/s, iops=512 , runt= 60004msec
>  write: io=3761.8MB, bw=64187KB/s, iops=16046 , runt= 60002msec
>  read : io=127636KB, bw=2127.3KB/s, iops=531 , runt= 60001msec
>  read : io=126440KB, bw=2107.4KB/s, iops=526 , runt= 60001msec
>  read : io=125612KB, bw=2093.6KB/s, iops=523 , runt= 60001msec
>  write: io=3832.4MB, bw=65403KB/s, iops=16350 , runt= 60002msec
>  read : io=125344KB, bw=2089.4KB/s, iops=522 , runt= 60001msec
>  read : io=125284KB, bw=2088.4KB/s, iops=522 , runt= 60001msec
>  read : io=125080KB, bw=2084.7KB/s, iops=521 , runt= 60001msec
>  write: io=3784.8MB, bw=64592KB/s, iops=16147 , runt= 60001msec
>  read : io=127656KB, bw=2127.6KB/s, iops=531 , runt= 60001msec
>  read : io=127144KB, bw=2119.4KB/s, iops=529 , runt= 60001msec
>  read : io=126248KB, bw=2104.1KB/s, iops=526 , runt= 60001msec
>  write: io=3828.9MB, bw=65343KB/s, iops=16335 , runt= 60002msec
>  read : io=124236KB, bw=2070.6KB/s, iops=517 , runt= 60001msec
>  read : io=123908KB, bw=2065.2KB/s, iops=516 , runt= 60001msec
>  read : io=123352KB, bw=2055.9KB/s, iops=513 , runt= 60001msec
>  write: io=3764.1MB, bw=64253KB/s, iops=16063 , runt= 60001msec
>  read : io=127784KB, bw=2129.8KB/s, iops=532 , runt= 60001msec
>  read : io=127276KB, bw=2121.3KB/s, iops=530 , runt= 60001msec
>  read : io=127240KB, bw=2120.7KB/s, iops=530 , runt= 60001msec
>  write: io=3839.5MB, bw=65524KB/s, iops=16380 , runt= 60002msec
>  read : io=124864KB, bw=2081.4KB/s, iops=520 , runt= 60001msec
>  read : io=124008KB, bw=2066.8KB/s, iops=516 , runt= 60002msec
>  read : io=124068KB, bw=2067.8KB/s, iops=516 , runt= 60001msec
>  write: io=3748.9MB, bw=63979KB/s, iops=15994 , runt= 60001msec
>
>
> As you can see, read-performance is very bad! Writes are fine, but reads
> are not acceptable!
> By the way, for reads the parameter "direct=0" is set and I can see that
> the reads are not even hitting the storage system!
>
> I have then tried to remove the writers from the fio config files, and
> the read results are as we would expect them:
>
>  read : io=1522.1MB, bw=25977KB/s, iops=6494 , runt= 60001msec
>  read : io=1526.3MB, bw=26047KB/s, iops=6511 , runt= 60000msec
>  read : io=1526.9MB, bw=26058KB/s, iops=6514 , runt= 60001msec
>  read : io=990.98MB, bw=16912KB/s, iops=4227 , runt= 60001msec
>  read : io=990.85MB, bw=16910KB/s, iops=4227 , runt= 60001msec
>  read : io=1107.5MB, bw=18900KB/s, iops=4724 , runt= 60001msec
>  read : io=1006.8MB, bw=17181KB/s, iops=4295 , runt= 60001msec
>  read : io=1011.2MB, bw=17256KB/s, iops=4314 , runt= 60001msec
>  read : io=1046.2MB, bw=17854KB/s, iops=4463 , runt= 60001msec
>  read : io=987.33MB, bw=16850KB/s, iops=4212 , runt= 60001msec
>  read : io=991.72MB, bw=16925KB/s, iops=4231 , runt= 60000msec
>  read : io=1102.4MB, bw=18813KB/s, iops=4703 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1012.1MB, bw=17287KB/s, iops=4321 , runt= 60001msec
>  read : io=1128.7MB, bw=19252KB/s, iops=4813 , runt= 60001msec
>  read : io=1186.1MB, bw=20257KB/s, iops=5064 , runt= 60001msec
>  read : io=996.34MB, bw=17004KB/s, iops=4250 , runt= 60001msec
>  read : io=1051.6MB, bw=17945KB/s, iops=4486 , runt= 60001msec
>  read : io=1012.9MB, bw=17286KB/s, iops=4321 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1125.5MB, bw=19208KB/s, iops=4801 , runt= 60001msec
>  read : io=1125.3MB, bw=19204KB/s, iops=4800 , runt= 60001msec
>  read : io=1005.2MB, bw=17155KB/s, iops=4288 , runt= 60001msec
>  read : io=1016.7MB, bw=17351KB/s, iops=4337 , runt= 60001msec
>
> We are on a RHEL 5.5 Box, 8 Gb FC. With another tool I receive about
> 180k 4k direct (!) random-read IOPS (with 64 threads to 4 LUNs).
>
> We have also tried to use separate read- and write config files. But as
> soon as we run the tests against the same LUN, read-performance drops!
>
> Any feedback is highly appreciated!
>
> Cheers,
> Matt
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>

[-- Attachment #2: Type: text/html, Size: 7633 bytes --]

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Bad performance when reads and writes on same LUN
  2011-09-02 15:32 Rettl, Matthias
@ 2011-09-02 23:43 ` Zhu Yanhai
  0 siblings, 0 replies; 8+ messages in thread
From: Zhu Yanhai @ 2011-09-02 23:43 UTC (permalink / raw)
  To: Rettl, Matthias; +Cc: Jens Axboe, Neto, Antonio Jose Rodrigues, fio

Hi,
Why did you mix buffered aio reads with unbuffered (direct io) aio writes?
Buffered aio will fall back to sync io, and it's known that mixing
buffered io with direct io will hurt performance.

Regards,
Zhu Yanhai

2011/9/2 Rettl, Matthias <Matthias.Rettl@netapp.com>:
> Hi fio users and developers,
>
> I am working on a setup for a customer with a very specific FIO
> configuration:
> We have to use 8 LUNs (from a very performant storage system, by the
> way), doing three readers and one writer per LUN.
>
> So we use 8 fio config files, each of which looks the same except for the
> directory= entry:
>
> [global]
> directory=/n1-dm-0
> rw=randread
> ioengine=libaio
> iodepth=4
> size=1024m
> invalidate=1
> direct=0
> runtime=60
> time_based
>
> [Reader-1]
>
> [Reader-2]
>
> [Reader-3]
>
> [Writer]
> rw=randwrite
> ioengine=libaio
> iodepth=32
> direct=1
>
>
> When running these configurations (8 fio instances in parallel) I receive these
> results (grep'ed iops lines from the 8 output files):
>  read : io=128520KB, bw=2141.1KB/s, iops=535 , runt= 60001msec
>  read : io=127048KB, bw=2117.5KB/s, iops=529 , runt= 60001msec
>  read : io=126816KB, bw=2113.6KB/s, iops=528 , runt= 60001msec
>  write: io=3811.8MB, bw=65052KB/s, iops=16263 , runt= 60001msec
>  read : io=123704KB, bw=2061.7KB/s, iops=515 , runt= 60004msec
>  read : io=123656KB, bw=2060.9KB/s, iops=515 , runt= 60002msec
>  read : io=122924KB, bw=2048.7KB/s, iops=512 , runt= 60004msec
>  write: io=3761.8MB, bw=64187KB/s, iops=16046 , runt= 60002msec
>  read : io=127636KB, bw=2127.3KB/s, iops=531 , runt= 60001msec
>  read : io=126440KB, bw=2107.4KB/s, iops=526 , runt= 60001msec
>  read : io=125612KB, bw=2093.6KB/s, iops=523 , runt= 60001msec
>  write: io=3832.4MB, bw=65403KB/s, iops=16350 , runt= 60002msec
>  read : io=125344KB, bw=2089.4KB/s, iops=522 , runt= 60001msec
>  read : io=125284KB, bw=2088.4KB/s, iops=522 , runt= 60001msec
>  read : io=125080KB, bw=2084.7KB/s, iops=521 , runt= 60001msec
>  write: io=3784.8MB, bw=64592KB/s, iops=16147 , runt= 60001msec
>  read : io=127656KB, bw=2127.6KB/s, iops=531 , runt= 60001msec
>  read : io=127144KB, bw=2119.4KB/s, iops=529 , runt= 60001msec
>  read : io=126248KB, bw=2104.1KB/s, iops=526 , runt= 60001msec
>  write: io=3828.9MB, bw=65343KB/s, iops=16335 , runt= 60002msec
>  read : io=124236KB, bw=2070.6KB/s, iops=517 , runt= 60001msec
>  read : io=123908KB, bw=2065.2KB/s, iops=516 , runt= 60001msec
>  read : io=123352KB, bw=2055.9KB/s, iops=513 , runt= 60001msec
>  write: io=3764.1MB, bw=64253KB/s, iops=16063 , runt= 60001msec
>  read : io=127784KB, bw=2129.8KB/s, iops=532 , runt= 60001msec
>  read : io=127276KB, bw=2121.3KB/s, iops=530 , runt= 60001msec
>  read : io=127240KB, bw=2120.7KB/s, iops=530 , runt= 60001msec
>  write: io=3839.5MB, bw=65524KB/s, iops=16380 , runt= 60002msec
>  read : io=124864KB, bw=2081.4KB/s, iops=520 , runt= 60001msec
>  read : io=124008KB, bw=2066.8KB/s, iops=516 , runt= 60002msec
>  read : io=124068KB, bw=2067.8KB/s, iops=516 , runt= 60001msec
>  write: io=3748.9MB, bw=63979KB/s, iops=15994 , runt= 60001msec
>
>
> As you can see, read-performance is very bad! Writes are fine, but reads
> are not acceptable!
> By the way, for reads the parameter "direct=0" is set and I can see that
> the reads are not even hitting the storage system!
>
> I have then tried to remove the writers from the fio config files, and
> the read results are as we would expect them:
>
>  read : io=1522.1MB, bw=25977KB/s, iops=6494 , runt= 60001msec
>  read : io=1526.3MB, bw=26047KB/s, iops=6511 , runt= 60000msec
>  read : io=1526.9MB, bw=26058KB/s, iops=6514 , runt= 60001msec
>  read : io=990.98MB, bw=16912KB/s, iops=4227 , runt= 60001msec
>  read : io=990.85MB, bw=16910KB/s, iops=4227 , runt= 60001msec
>  read : io=1107.5MB, bw=18900KB/s, iops=4724 , runt= 60001msec
>  read : io=1006.8MB, bw=17181KB/s, iops=4295 , runt= 60001msec
>  read : io=1011.2MB, bw=17256KB/s, iops=4314 , runt= 60001msec
>  read : io=1046.2MB, bw=17854KB/s, iops=4463 , runt= 60001msec
>  read : io=987.33MB, bw=16850KB/s, iops=4212 , runt= 60001msec
>  read : io=991.72MB, bw=16925KB/s, iops=4231 , runt= 60000msec
>  read : io=1102.4MB, bw=18813KB/s, iops=4703 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1012.1MB, bw=17287KB/s, iops=4321 , runt= 60001msec
>  read : io=1128.7MB, bw=19252KB/s, iops=4813 , runt= 60001msec
>  read : io=1186.1MB, bw=20257KB/s, iops=5064 , runt= 60001msec
>  read : io=996.34MB, bw=17004KB/s, iops=4250 , runt= 60001msec
>  read : io=1051.6MB, bw=17945KB/s, iops=4486 , runt= 60001msec
>  read : io=1012.9MB, bw=17286KB/s, iops=4321 , runt= 60001msec
>  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
>  read : io=1125.5MB, bw=19208KB/s, iops=4801 , runt= 60001msec
>  read : io=1125.3MB, bw=19204KB/s, iops=4800 , runt= 60001msec
>  read : io=1005.2MB, bw=17155KB/s, iops=4288 , runt= 60001msec
>  read : io=1016.7MB, bw=17351KB/s, iops=4337 , runt= 60001msec
>
> We are on a RHEL 5.5 Box, 8 Gb FC. With another tool I receive about
> 180k 4k direct (!) random-read IOPS (with 64 threads to 4 LUNs).
>
> We have also tried to use separate read- and write config files. But as
> soon as we run the tests against the same LUN, read-performance drops!
>
> Any feedback is highly appreciated!
>
> Cheers,
> Matt
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe fio" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Bad performance when reads and writes on same LUN
@ 2011-09-02 15:32 Rettl, Matthias
  2011-09-02 23:43 ` Zhu Yanhai
  0 siblings, 1 reply; 8+ messages in thread
From: Rettl, Matthias @ 2011-09-02 15:32 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Neto, Antonio Jose Rodrigues, fio

Hi fio users and developers,

I am working on a setup for a customer with a very specific FIO
configuration:
We have to use 8 LUNs (from a very performant storage system, by the
way), doing three readers and one writer per LUN.

So we use 8 fio config files, each of which looks the same except for the
directory= entry:

[global] 
directory=/n1-dm-0
rw=randread  
ioengine=libaio  
iodepth=4 
size=1024m  
invalidate=1 
direct=0 
runtime=60  
time_based  
 
[Reader-1] 
 
[Reader-2] 
 
[Reader-3] 
 
[Writer]
rw=randwrite
ioengine=libaio
iodepth=32
direct=1


When running these configurations (8 fio instances in parallel) I receive these
results (grep'ed iops lines from the 8 output files):
  read : io=128520KB, bw=2141.1KB/s, iops=535 , runt= 60001msec
  read : io=127048KB, bw=2117.5KB/s, iops=529 , runt= 60001msec
  read : io=126816KB, bw=2113.6KB/s, iops=528 , runt= 60001msec
  write: io=3811.8MB, bw=65052KB/s, iops=16263 , runt= 60001msec
  read : io=123704KB, bw=2061.7KB/s, iops=515 , runt= 60004msec
  read : io=123656KB, bw=2060.9KB/s, iops=515 , runt= 60002msec
  read : io=122924KB, bw=2048.7KB/s, iops=512 , runt= 60004msec
  write: io=3761.8MB, bw=64187KB/s, iops=16046 , runt= 60002msec
  read : io=127636KB, bw=2127.3KB/s, iops=531 , runt= 60001msec
  read : io=126440KB, bw=2107.4KB/s, iops=526 , runt= 60001msec
  read : io=125612KB, bw=2093.6KB/s, iops=523 , runt= 60001msec
  write: io=3832.4MB, bw=65403KB/s, iops=16350 , runt= 60002msec
  read : io=125344KB, bw=2089.4KB/s, iops=522 , runt= 60001msec
  read : io=125284KB, bw=2088.4KB/s, iops=522 , runt= 60001msec
  read : io=125080KB, bw=2084.7KB/s, iops=521 , runt= 60001msec
  write: io=3784.8MB, bw=64592KB/s, iops=16147 , runt= 60001msec
  read : io=127656KB, bw=2127.6KB/s, iops=531 , runt= 60001msec
  read : io=127144KB, bw=2119.4KB/s, iops=529 , runt= 60001msec
  read : io=126248KB, bw=2104.1KB/s, iops=526 , runt= 60001msec
  write: io=3828.9MB, bw=65343KB/s, iops=16335 , runt= 60002msec
  read : io=124236KB, bw=2070.6KB/s, iops=517 , runt= 60001msec
  read : io=123908KB, bw=2065.2KB/s, iops=516 , runt= 60001msec
  read : io=123352KB, bw=2055.9KB/s, iops=513 , runt= 60001msec
  write: io=3764.1MB, bw=64253KB/s, iops=16063 , runt= 60001msec
  read : io=127784KB, bw=2129.8KB/s, iops=532 , runt= 60001msec
  read : io=127276KB, bw=2121.3KB/s, iops=530 , runt= 60001msec
  read : io=127240KB, bw=2120.7KB/s, iops=530 , runt= 60001msec
  write: io=3839.5MB, bw=65524KB/s, iops=16380 , runt= 60002msec
  read : io=124864KB, bw=2081.4KB/s, iops=520 , runt= 60001msec
  read : io=124008KB, bw=2066.8KB/s, iops=516 , runt= 60002msec
  read : io=124068KB, bw=2067.8KB/s, iops=516 , runt= 60001msec
  write: io=3748.9MB, bw=63979KB/s, iops=15994 , runt= 60001msec


As you can see, read-performance is very bad! Writes are fine, but reads
are not acceptable!
By the way, for reads the parameter "direct=0" is set and I can see that
the reads are not even hitting the storage system!

I have then tried to remove the writers from the fio config files, and
the read results are as we would expect them:

  read : io=1522.1MB, bw=25977KB/s, iops=6494 , runt= 60001msec
  read : io=1526.3MB, bw=26047KB/s, iops=6511 , runt= 60000msec
  read : io=1526.9MB, bw=26058KB/s, iops=6514 , runt= 60001msec
  read : io=990.98MB, bw=16912KB/s, iops=4227 , runt= 60001msec
  read : io=990.85MB, bw=16910KB/s, iops=4227 , runt= 60001msec
  read : io=1107.5MB, bw=18900KB/s, iops=4724 , runt= 60001msec
  read : io=1006.8MB, bw=17181KB/s, iops=4295 , runt= 60001msec
  read : io=1011.2MB, bw=17256KB/s, iops=4314 , runt= 60001msec
  read : io=1046.2MB, bw=17854KB/s, iops=4463 , runt= 60001msec
  read : io=987.33MB, bw=16850KB/s, iops=4212 , runt= 60001msec
  read : io=991.72MB, bw=16925KB/s, iops=4231 , runt= 60000msec
  read : io=1102.4MB, bw=18813KB/s, iops=4703 , runt= 60001msec
  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
  read : io=1012.1MB, bw=17287KB/s, iops=4321 , runt= 60001msec
  read : io=1128.7MB, bw=19252KB/s, iops=4813 , runt= 60001msec
  read : io=1186.1MB, bw=20257KB/s, iops=5064 , runt= 60001msec
  read : io=996.34MB, bw=17004KB/s, iops=4250 , runt= 60001msec
  read : io=1051.6MB, bw=17945KB/s, iops=4486 , runt= 60001msec
  read : io=1012.9MB, bw=17286KB/s, iops=4321 , runt= 60001msec
  read : io=1014.8MB, bw=17318KB/s, iops=4329 , runt= 60001msec
  read : io=1125.5MB, bw=19208KB/s, iops=4801 , runt= 60001msec
  read : io=1125.3MB, bw=19204KB/s, iops=4800 , runt= 60001msec
  read : io=1005.2MB, bw=17155KB/s, iops=4288 , runt= 60001msec
  read : io=1016.7MB, bw=17351KB/s, iops=4337 , runt= 60001msec

We are on a RHEL 5.5 Box, 8 Gb FC. With another tool I receive about
180k 4k direct (!) random-read IOPS (with 64 threads to 4 LUNs).

We have also tried to use separate read- and write config files. But as
soon as we run the tests against the same LUN, read-performance drops!

Any feedback is highly appreciated!

Cheers,
Matt




^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2011-09-03  3:28 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-09-02 23:50 Bad performance when reads and writes on same LUN Neto, Antonio Jose Rodrigues
2011-09-03  0:00 ` Zhu Yanhai
  -- strict thread matches above, loose matches on Subject: below --
2011-09-03  3:24 Neto, Antonio Jose Rodrigues
2011-09-03  3:28 ` Jens Axboe
2011-09-03  3:06 Neto, Antonio Jose Rodrigues
2011-09-03  3:09 ` Jens Axboe
2011-09-02 15:32 Rettl, Matthias
2011-09-02 23:43 ` Zhu Yanhai
