* ZFS Testing Methodology
From: Spencer Hayes @ 2017-05-12 17:45 UTC
  To: fio

Howdy all,

We're attempting to run some performance tests on a Solaris 11.2
system with ZFS. The system has more memory than the free space
remaining in the ZFS pool. Given that ZFS does not support direct IO,
we're looking for options on how to defeat the cache and actually
test the throughput of the underlying disks.

I've looked through several of the fio options but so far haven't
found a combination that would let us do more total IO than the file
actually occupies on disk. The idea we had was to lay down, say, a
100GB file, but do 200-300GB of actual IO within that single file (or
across multiple files).

Does anyone have experience perf testing ZFS and might have some
suggestions to tackle this scenario?

Thanks!

-Spencer


* Re: ZFS Testing Methodology
From: Sitsofe Wheeler @ 2017-05-12 19:26 UTC
  To: Spencer Hayes; +Cc: fio

On 12 May 2017 at 18:45, Spencer Hayes <spencer@jshayesinc.com> wrote:
>
> We're attempting to run some performance tests on a Solaris 11.2
> system with ZFS. The system has more memory than the free space
> remaining in the ZFS pool. Given that ZFS does not support direct IO,
> we're looking for options on how to defeat the cache and actually
> test the throughput of the underlying disks.

If you're doing writes you can wait for synchronisation to stable
media, either at the end or periodically, by using the sync / fsync
options. For reads... well, I guess you can start from an empty cache
by unmounting/remounting the filesystem before starting your tests?
I'm not sure you want to test a filesystem for non-cached I/O
performance unless your application is somehow going to depend on it,
though.
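
A minimal sketch of the write side (untested; the dataset path is
made up, and the option names are worth double checking against your
fio version):

  [write-sync]
  filename=/tank/fio.dat  ; hypothetical file on the ZFS dataset
  rw=write
  bs=128k
  size=100g
  fsync=32                ; fsync after every 32 writes
  end_fsync=1             ; and flush once more when the job ends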

> I've looked through several of the fio options but so far haven't
> found a combination that would let us do more total IO than the file
> actually occupies on disk. The idea we had was to lay down, say, a
> 100GB file, but do 200-300GB of actual IO within that single file (or
> across multiple files).

You probably want to combine size with offset and io_size/loops:

http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-size
http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-offset
http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-io-size
http://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-arg-loops
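
For example, something like this (again untested, with a placeholder
path and sizes) should lay down a 100GB file but keep looping over it
until 300GB of IO has been issued, since io_size larger than size
makes fio wrap around within the region:

  [loop-over-file]
  filename=/tank/fio.dat  ; hypothetical file on the ZFS dataset
  rw=randwrite
  bs=128k
  size=100g               ; on-disk footprint of the file
  io_size=300g            ; total IO to issue, wrapping over the region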

> Does anyone have experience perf testing ZFS and might have some
> suggestions to tackle this scenario?

You might get some good suggestions on a Solaris-related ZFS mailing
list: https://illumos.topicbox.com/groups/zfs/discussions

-- 
Sitsofe | http://sucs.org/~sits/

