Subject: LIO: FILEIO vs IBLOCK read performance
From: Timofey Titovets @ 2017-02-01  8:18 UTC
  To: linux-scsi

Hi, we want to move from NFS to iSCSI storage.
Our data servers have a large RAM cache, and to make use of it we want to
use the FILEIO backend (a file on a filesystem backing the LUN).

The question: we ran some tests and got strange results for random read
performance.

What we have:
An old Dell lab server: 4 CPUs, 16 GB RAM, 6x2 TB SATA HDDs (RAID10).
Three backends on the storage server:
/dev/sdb
/storage/LUN/1 (a file residing on the filesystem on /dev/sdb)
/dev/loop0 -> /storage/LUN/1

For testing on the client side we use fio:
direct=1, ioengine=libaio, iodepth=32, bs=4k
Before and after every test we drop the caches (vm.drop_caches) on both
servers; the results with dropped caches are the more interesting ones.
One frontend on the test server:
/dev/sdb
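
For reference, a run on the client looked roughly like this (the job name
and exact invocation are illustrative, reconstructed from the parameters
above):

  # drop the page cache on both servers before (and after) each run
  sync; echo 3 > /proc/sys/vm/drop_caches

  # 4k random read against the iSCSI-attached device
  fio --name=randread --filename=/dev/sdb --rw=randread --bs=4k \
      --direct=1 --ioengine=libaio --iodepth=32 \
      --runtime=60 --time_based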

We also tried fio over NFS and got ~500 iops.

Short results (random read on /dev/sdb on the client):
block  + /dev/sdb         ~ 500 iops (emulate_write_cache=0)
fileio + /dev/sdb         ~  90 iops (emulate_write_cache=0)
fileio + /dev/sdb         ~  90 iops (emulate_write_cache=1)
fileio + /storage/LUN/1   ~  90 iops (emulate_write_cache=0)
fileio + /storage/LUN/1   ~  90 iops (emulate_write_cache=1)
block  + /dev/loop0       ~  90 iops (loop directio=0)
block  + /dev/loop0       ~ 500 iops (loop directio=1)
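
The emulate_write_cache attribute was toggled through configfs, along these
lines (the backstore instance and object names below are illustrative, not
the exact ones used):

  # set emulate_write_cache on a FILEIO backstore (names are examples)
  echo 1 > /sys/kernel/config/target/core/fileio_0/lun1_file/attrib/emulate_write_cache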

So, if I understand correctly, this is some problem with buffered mode.
Can you give an explanation for that?

Thank you for any help.

P.S.
With iostat I see that with target_core_iblock the queue size to the disk
is ~32, while with target_core_file it is ~1.
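
That number comes from watching the extended device statistics, roughly:

  # average request queue size on the storage server
  # (avgqu-sz column; called aqu-sz in newer sysstat versions)
  iostat -x sdb 1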

P.P.S.

Kernel 4.9.6
-- 
Have a nice day,
Timofey.


Subject: Re: LIO: FILEIO vs IBLOCK read performance
From: Bart Van Assche @ 2017-02-01 19:55 UTC
  To: linux-scsi, nefelim4ag

On Wed, 2017-02-01 at 11:18 +0300, Timofey Titovets wrote:
> Short results (random read on /dev/sdb on the client):
> block  + /dev/sdb         ~ 500 iops (emulate_write_cache=0)
> fileio + /dev/sdb         ~  90 iops (emulate_write_cache=0)
> fileio + /dev/sdb         ~  90 iops (emulate_write_cache=1)
> fileio + /storage/LUN/1   ~  90 iops (emulate_write_cache=0)
> fileio + /storage/LUN/1   ~  90 iops (emulate_write_cache=1)
> block  + /dev/loop0       ~  90 iops (loop directio=0)
> block  + /dev/loop0       ~ 500 iops (loop directio=1)

The address of the LIO mailing list is target-devel@vger.kernel.org instead
of linux-scsi@vger.kernel.org. Anyway, have you tried to disable readahead
on /dev/sdb (blockdev --setra / --setfra)? Readahead can have a significant
negative impact on random I/O. There is a heuristic in the kernel for
automatically disabling readahead for random I/O but maybe that algorithm
did not recognize your workload properly.
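
For example (0 disables readahead; --getra / --getfra show the current
values so they can be restored afterwards):

  # disable readahead for the block device and its filesystem mount
  blockdev --setra 0 /dev/sdb
  blockdev --setfra 0 /dev/sdb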

Bart.

