* Fio Multiple Nvme Disks
@ 2016-10-14 19:09 Tarek
  2016-10-15  1:03 ` Matthew Eaton
  0 siblings, 1 reply; 2+ messages in thread
From: Tarek @ 2016-10-14 19:09 UTC (permalink / raw)
  To: fio

I want to run fio on 10 NVMe drives, sending each drive an iodepth of 128,
have the results combined into one aggregate result, and also have BW and
IOPS logged every 1000ms. I did the following below, combining all the drives
in one 'filename' separated by colons under [global], then set my job below.
However, I don't think it's doing what I intended and sending each drive an
iodepth of 128. How can I achieve what I am after?

Example of what I was doing is below. Are the 10 NVMe drives each getting an
iodepth of 128, or just 12.8 each (roughly)?
[global]
thread
ioengine=libaio
direct=1
buffered=0
log_avg_msec=1000
group_reporting=1
filename=/dev/nvme0n1:/dev/nvme1n1:/dev/nvme2n1:/dev/nvme3n1:/dev/nvme4n1:/dev/nvme5n1:/dev/nvme6n1:/dev/nvme7n1:/dev/nvme8n1:/dev/nvme9n1

[sequential-write_4k_qd128_jobs_1]
description=Sequential write precondition
rw=write
bs=4k
iodepth=128
numjobs=1
size=100%
stonewall

Thank you.


* Re: Fio Multiple Nvme Disks
  2016-10-14 19:09 Fio Multiple Nvme Disks Tarek
@ 2016-10-15  1:03 ` Matthew Eaton
  0 siblings, 0 replies; 2+ messages in thread
From: Matthew Eaton @ 2016-10-15  1:03 UTC (permalink / raw)
  To: Tarek; +Cc: fio

On Fri, Oct 14, 2016 at 12:09 PM, Tarek <cashew250@gmail.com> wrote:
> I want to run fio on 10 NVMe drives, sending each drive an iodepth of 128,
> have the results combined into one aggregate result, and also have BW and
> IOPS logged every 1000ms. I did the following below, combining all the drives
> in one 'filename' separated by colons under [global], then set my job below.
> However, I don't think it's doing what I intended and sending each drive an
> iodepth of 128. How can I achieve what I am after?
>
> Example of what I was doing is below. Are the 10 NVMe drives each getting an
> iodepth of 128, or just 12.8 each (roughly)?
> [global]
> thread
> ioengine=libaio
> direct=1
> buffered=0
> log_avg_msec=1000
> group_reporting=1
> filename=/dev/nvme0n1:/dev/nvme1n1:/dev/nvme2n1:/dev/nvme3n1:/dev/nvme4n1:/dev/nvme5n1:/dev/nvme6n1:/dev/nvme7n1:/dev/nvme8n1:/dev/nvme9n1
>
> [sequential-write_4k_qd128_jobs_1]
> description=Sequential write precondition
> rw=write
> bs=4k
> iodepth=128
> numjobs=1
> size=100%
> stonewall
>
> Thank you.

With a colon-separated filename list, all ten drives belong to a single job,
so that job's 128 outstanding I/Os are spread across them and each drive only
sees a depth of about 12.8 on average. For testing multiple devices I usually
create a new job per device, so each device gets the full iodepth of 128; in
your case:

[global]
thread
ioengine=libaio
direct=1
buffered=0
log_avg_msec=1000
group_reporting=1
rw=write
bs=4k
iodepth=128
numjobs=1
size=100%

[job1]
filename=/dev/nvme0n1

[job2]
filename=/dev/nvme1n1

[job3]
filename=/dev/nvme2n1
...
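
One thing to check: log_avg_msec=1000 only sets the averaging window; fio
won't actually write BW/IOPS logs unless the log options themselves are
enabled. Adding something like this under [global] should do it (the
"precondition" prefix is just an example name, not anything fio requires):

write_bw_log=precondition
write_iops_log=precondition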

Fio will write those logs per job, but you can use the fiologparser script
to combine them into the aggregate view you want:

# fiologparser.py
#
# This tool lets you parse multiple fio log files and look at interval
# statistics even when samples are non-uniform. For instance:
#
# fiologparser.py -s *bw*
#
# to see per-interval sums for all bandwidth logs or:
#
# fiologparser.py -a *clat*
#
# to see per-interval average completion latency.
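
For example, to get per-interval sums across all ten drives with the example
log prefix above (fiologparser.py ships in fio's tools/ directory):

fiologparser.py -s precondition_bw*.log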

