* request for job files
@ 2009-04-22 13:22 Jens Axboe
  2009-04-22 16:58 ` Girish Satihal
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Jens Axboe @ 2009-04-22 13:22 UTC (permalink / raw)
  To: fio

Hi,

The sample job files shipped with fio are (generally) pretty weak, and
I'd really love for the selection to be better. In my experience, that
is the first place you look when trying out something like fio. It
really helps you get a (previously) unknown job format going
quickly.

So if any of you have "interesting" job files that you use for testing
or performance analysis, please do send them to me so I can include them
with fio.

Thanks!

-- 
Jens Axboe



* Re: request for job files
  2009-04-22 13:22 request for job files Jens Axboe
@ 2009-04-22 16:58 ` Girish Satihal
  2009-04-22 17:36   ` Jens Axboe
  2009-04-22 20:32 ` Chris Worley
  2009-04-24  5:00 ` Gurudas Pai
  2 siblings, 1 reply; 8+ messages in thread
From: Girish Satihal @ 2009-04-22 16:58 UTC (permalink / raw)
  To: fio, Jens Axboe



Hi Jens,

Here is one job file which we regularly use as a standard workload for
our fio testing.

Thanks,
Girish


[-- Attachment #2: StandardWorkload.fio --]
[-- Type: application/octet-stream, Size: 7355 bytes --]

[global]
; ----------------------------------------------------------
; number of test targets
nrfiles=1
; ----------------------------------------------------------
; this is the test target
filename=/dev/cciss/c1d0
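; note: the target above is hard-coded for one particular box; as Jens
; points out later in this thread, job files can also reference
; environment variables, so a more portable variant would be e.g.
;   filename=${DEVICE}
; with DEVICE (an illustrative name) set in the shell before invoking
; fio, e.g.:  DEVICE=/dev/sdb fio StandardWorkload.fio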
; ----------------------------------------------------------
; algorithm for selecting test targets (if multiple)
file_service_type=roundrobin
; ----------------------------------------------------------
; select the I/O engine
ioengine=libaio
; ioengine=posixaio
; ioengine=solarisaio
; ----------------------------------------------------------
; use non-buffered I/O
direct=1
; ----------------------------------------------------------
time_based

; ----------------------------------------------------------
;
; set up Random test parameters
;
; ----------------------------------------------------------
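; note: [global] appears several times below; each occurrence simply
; updates the defaults inherited by the job sections that follow it,
; and the 'stonewall' in every job makes it wait for the preceding
; jobs to finish, so the iodepth sweeps run one at a time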
[global]
runtime=60
ramp_time=10
rw=randrw

[global]
bs=4k
[global]
rwmixread=100
[4KB_Random_Read]
stonewall
iodepth=1
[4KB_Random_Read]
stonewall
iodepth=2
[4KB_Random_Read]
stonewall
iodepth=4
[4KB_Random_Read]
stonewall
iodepth=8
[4KB_Random_Read]
stonewall
iodepth=16
[4KB_Random_Read]
stonewall
iodepth=32
[4KB_Random_Read]
stonewall
iodepth=64
[4KB_Random_Read]
stonewall
iodepth=128
[4KB_Random_Read]
stonewall
iodepth=256
[global]
rwmixread=0
[4KB_Random_Write]
stonewall
iodepth=1
[4KB_Random_Write]
stonewall
iodepth=2
[4KB_Random_Write]
stonewall
iodepth=4
[4KB_Random_Write]
stonewall
iodepth=8
[4KB_Random_Write]
stonewall
iodepth=16
[4KB_Random_Write]
stonewall
iodepth=32
[4KB_Random_Write]
stonewall
iodepth=64
[4KB_Random_Write]
stonewall
iodepth=128
[4KB_Random_Write]
stonewall
iodepth=256
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
rwmixread=67
ramp_time=60
[4KB_OLTP]
stonewall
iodepth=1
[4KB_OLTP]
stonewall
iodepth=2
[4KB_OLTP]
stonewall
iodepth=4
[4KB_OLTP]
stonewall
iodepth=8
[4KB_OLTP]
stonewall
iodepth=16
[4KB_OLTP]
stonewall
iodepth=32
[4KB_OLTP]
stonewall
iodepth=64
[4KB_OLTP]
stonewall
iodepth=128
[4KB_OLTP]
stonewall
iodepth=256
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
bs=8k
[global]
rwmixread=100
[8KB_Random_Read]
stonewall
iodepth=1
[8KB_Random_Read]
stonewall
iodepth=2
[8KB_Random_Read]
stonewall
iodepth=4
[8KB_Random_Read]
stonewall
iodepth=8
[8KB_Random_Read]
stonewall
iodepth=16
[8KB_Random_Read]
stonewall
iodepth=32
[8KB_Random_Read]
stonewall
iodepth=64
[8KB_Random_Read]
stonewall
iodepth=128
[8KB_Random_Read]
stonewall
iodepth=256
[global]
rwmixread=0
[8KB_Random_Write]
stonewall
iodepth=1
[8KB_Random_Write]
stonewall
iodepth=2
[8KB_Random_Write]
stonewall
iodepth=4
[8KB_Random_Write]
stonewall
iodepth=8
[8KB_Random_Write]
stonewall
iodepth=16
[8KB_Random_Write]
stonewall
iodepth=32
[8KB_Random_Write]
stonewall
iodepth=64
[8KB_Random_Write]
stonewall
iodepth=128
[8KB_Random_Write]
stonewall
iodepth=256
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
rwmixread=67
ramp_time=60
[8KB_OLTP]
stonewall
iodepth=1
[8KB_OLTP]
stonewall
iodepth=2
[8KB_OLTP]
stonewall
iodepth=4
[8KB_OLTP]
stonewall
iodepth=8
[8KB_OLTP]
stonewall
iodepth=16
[8KB_OLTP]
stonewall
iodepth=32
[8KB_OLTP]
stonewall
iodepth=64
[8KB_OLTP]
stonewall
iodepth=128
[8KB_OLTP]
stonewall
iodepth=256
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
bs=64k
[global]
rwmixread=100
[64KB_Random_Read]
stonewall
iodepth=1
[64KB_Random_Read]
stonewall
iodepth=2
[64KB_Random_Read]
stonewall
iodepth=4
[64KB_Random_Read]
stonewall
iodepth=8
[64KB_Random_Read]
stonewall
iodepth=16
[64KB_Random_Read]
stonewall
iodepth=32
[64KB_Random_Read]
stonewall
iodepth=64
[64KB_Random_Read]
stonewall
iodepth=128
[64KB_Random_Read]
stonewall
iodepth=256
[global]
rwmixread=0
[64KB_Random_Write]
stonewall
iodepth=1
[64KB_Random_Write]
stonewall
iodepth=2
[64KB_Random_Write]
stonewall
iodepth=4
[64KB_Random_Write]
stonewall
iodepth=8
[64KB_Random_Write]
stonewall
iodepth=16
[64KB_Random_Write]
stonewall
iodepth=32
[64KB_Random_Write]
stonewall
iodepth=64
[64KB_Random_Write]
stonewall
iodepth=128
[64KB_Random_Write]
stonewall
iodepth=256
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
; ----------------------------------------------------------
;
; set up Sequential test parameters
;
; ----------------------------------------------------------
[global]
runtime=30
; ramp_time=5

[global]
bs=512
[global]
rw=read
[512B_MaxIO_Read]
stonewall
iodepth=1
[512B_MaxIO_Read]
stonewall
iodepth=2
[512B_MaxIO_Read]
stonewall
iodepth=4
[512B_MaxIO_Read]
stonewall
iodepth=8
[512B_MaxIO_Read]
stonewall
iodepth=16
[512B_MaxIO_Read]
stonewall
iodepth=32
[512B_MaxIO_Read]
stonewall
iodepth=64
[global]
rw=write
[512B_MaxIO_Write]
stonewall
iodepth=1
[512B_MaxIO_Write]
stonewall
iodepth=2
[512B_MaxIO_Write]
stonewall
iodepth=4
[512B_MaxIO_Write]
stonewall
iodepth=8
[512B_MaxIO_Write]
stonewall
iodepth=16
[512B_MaxIO_Write]
stonewall
iodepth=32
[512B_MaxIO_Write]
stonewall
iodepth=64
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
bs=4K
[global]
rw=read
[4KB_Sequential_Read]
stonewall
iodepth=1
[4KB_Sequential_Read]
stonewall
iodepth=2
[4KB_Sequential_Read]
stonewall
iodepth=4
[4KB_Sequential_Read]
stonewall
iodepth=8
[4KB_Sequential_Read]
stonewall
iodepth=16
[4KB_Sequential_Read]
stonewall
iodepth=32
[4KB_Sequential_Read]
stonewall
iodepth=64
[global]
rw=write
[4KB_Sequential_Write]
stonewall
iodepth=1
[4KB_Sequential_Write]
stonewall
iodepth=2
[4KB_Sequential_Write]
stonewall
iodepth=4
[4KB_Sequential_Write]
stonewall
iodepth=8
[4KB_Sequential_Write]
stonewall
iodepth=16
[4KB_Sequential_Write]
stonewall
iodepth=32
[4KB_Sequential_Write]
stonewall
iodepth=64
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
bs=64K
[global]
rw=read
[64KB_Sequential_Read]
stonewall
iodepth=1
[64KB_Sequential_Read]
stonewall
iodepth=2
[64KB_Sequential_Read]
stonewall
iodepth=4
[64KB_Sequential_Read]
stonewall
iodepth=8
[64KB_Sequential_Read]
stonewall
iodepth=16
[64KB_Sequential_Read]
stonewall
iodepth=32
[64KB_Sequential_Read]
stonewall
iodepth=64
[global]
rw=write
[64KB_Sequential_Write]
stonewall
iodepth=1
[64KB_Sequential_Write]
stonewall
iodepth=2
[64KB_Sequential_Write]
stonewall
iodepth=4
[64KB_Sequential_Write]
stonewall
iodepth=8
[64KB_Sequential_Write]
stonewall
iodepth=16
[64KB_Sequential_Write]
stonewall
iodepth=32
[64KB_Sequential_Write]
stonewall
iodepth=64
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
bs=512K
[global]
rw=read
[512KB_Sequential_Read]
stonewall
iodepth=1
[512KB_Sequential_Read]
stonewall
iodepth=2
[512KB_Sequential_Read]
stonewall
iodepth=4
[512KB_Sequential_Read]
stonewall
iodepth=8
[512KB_Sequential_Read]
stonewall
iodepth=16
[512KB_Sequential_Read]
stonewall
iodepth=32
[512KB_Sequential_Read]
stonewall
iodepth=64
[global]
rw=write
[512KB_Sequential_Write]
stonewall
iodepth=1
[512KB_Sequential_Write]
stonewall
iodepth=2
[512KB_Sequential_Write]
stonewall
iodepth=4
[512KB_Sequential_Write]
stonewall
iodepth=8
[512KB_Sequential_Write]
stonewall
iodepth=16
[512KB_Sequential_Write]
stonewall
iodepth=32
[512KB_Sequential_Write]
stonewall
iodepth=64
[Idle]
stonewall
rwmixread=100
bs=64k
rate=1
[global]
bs=1M
[global]
rw=read
[1MB_Sequential_Read]
stonewall
iodepth=1
[1MB_Sequential_Read]
stonewall
iodepth=2
[1MB_Sequential_Read]
stonewall
iodepth=4
[1MB_Sequential_Read]
stonewall
iodepth=8
[1MB_Sequential_Read]
stonewall
iodepth=16
[1MB_Sequential_Read]
stonewall
iodepth=32
[1MB_Sequential_Read]
stonewall
iodepth=64


* Re: request for job files
  2009-04-22 16:58 ` Girish Satihal
@ 2009-04-22 17:36   ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2009-04-22 17:36 UTC (permalink / raw)
  To: Girish Satihal; +Cc: fio

On Wed, Apr 22 2009, Girish Satihal wrote:
> Hi Jens,
> Here is one job file which we regularly use as a standard workload
> for our fio testing.

Thanks. If you could attach a few words describing the job file as
well, it'd be even more useful. Just something like "used to test xxx" or
"give the device a good thrashing", or whatever is appropriate for the
job file in question.

-- 
Jens Axboe



* Re: request for job files
  2009-04-22 13:22 request for job files Jens Axboe
  2009-04-22 16:58 ` Girish Satihal
@ 2009-04-22 20:32 ` Chris Worley
  2009-04-23  5:50   ` Jens Axboe
  2009-04-24  5:00 ` Gurudas Pai
  2 siblings, 1 reply; 8+ messages in thread
From: Chris Worley @ 2009-04-22 20:32 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

On Wed, Apr 22, 2009 at 7:22 AM, Jens Axboe <jens.axboe@oracle.com> wrote:
> Hi,
>
> The sample job files shipped with fio are (generally) pretty weak, and
> I'd really love for the selection to be better. In my experience, that
> is the first place you look when trying out something like fio. It
> really helps you get a (previously) unknown job format going
> quickly.
>
> So if any of you have "interesting" job files that you use for testing
> or performance analysis, please do send them to me so I can include them
> with fio.

Jens,

I normally use scripts to run I/O benchmarks, and pretty much use fio
exclusively.

Hopefully, by sharing the scripts, you can see how I use fio and give
feedback on anything I may be doing wrong.

In one incarnation, I put all the devices to be tested on the script's
command line, then build a fio-ready list of these devices along with a
size equal to the sum of 10% of each disk's capacity:

    filesize=0
    fiolist=""
    for i in $*
    do fiolist=$fiolist" --filename="$i
       t=`basename $i`
       let filesize=$filesize+`cat /proc/partitions | grep $t | awk '{ printf "%d\n", $3*1024/10 }'`
    done

Rather than a "job file", in this case I do everything on the command
line, for power-of-two block sizes from 1MB down to 512B:

  for i in 1m 512k 256k 128k 64k 32k 16k 8k 4k 2k 1k 512
  do
    for k in 0 25 50 75 100
    do
      fio --rw=randrw --bs=$i --rwmixread=$k --numjobs=32 \
          --iodepth=64 --sync=0 --direct=1 --randrepeat=0 --softrandommap=1 \
          --ioengine=libaio $fiolist --name=test --loops=10000 \
          --size=$filesize --runtime=$runtime
    done
  done

So the above "fiolist" is going to look like "--filename=/dev/sda
--filename=/dev/sdb", and the "filesize" is going to be the sum of 10%
of each disk's size.  I only use this with disks of the same size, and
assume that fio will exercise 10% of each disk.  That assumption seems
to pan out in the resulting data, but I've never traced the code to
verify that this is what it will do.

Then I moved to a process-pinning strategy that has some number of
pinned fio threads running per disk.  I still calculate the
"filesize", but just use 10% of one disk, and assume they are all the
same.  Most of the affinity settings have to do with specific bus-CPU
affinity, but for a simple example, let's say I just round-robin the
files on the command line to the available processors, and create
arrays "files" and "pl" consisting of block devices and processor
numbers:

totproc=`cat /proc/cpuinfo | grep processor | wc -l`
p=0
for i in $*
do
    files[$p]="filename="$i
    pl[$p]=$p
    let p=$p+1
    if [ $p -eq $totproc ]
    then break
    fi
done
let totproc=$p-1

Then I generate "job files" and run fio with:

  for i in 1m 512k 256k 128k 64k 32k 16k 8k 4k 2k 1k 512
  do
    for k in 0 25 50 75 100
    do  echo "" >fio-rand-script.$$
      for p in `seq 0 $totproc`
      do
         echo -e "[cpu${p}]\ncpus_allowed=${pl[$p]}\nnumjobs=$jobsperproc\n${files[$p]}\n\
group_reporting\nbs=$i\nrw=randrw\nrwmixread=$k\nsoftrandommap=1\nruntime=$runtime\n\
sync=0\ndirect=1\niodepth=64\nioengine=libaio\nloops=10000\nexitall\n\
size=$filesize\n" >>fio-rand-script.$$
      done
      fio fio-rand-script.$$
    done
  done

The scripts look like:

# cat fio-rand-script.8625
[cpu0]
cpus_allowed=0
numjobs=8
filename=/dev/sda
group_reporting
bs=4k
rw=randrw
rwmixread=0
softrandommap=1
runtime=600
sync=0
direct=1
iodepth=64
ioengine=libaio
loops=10000
exitall
size=16091503001

[cpu1]
cpus_allowed=1
numjobs=8
filename=/dev/sdb
group_reporting
bs=4k
rw=randrw
rwmixread=0
softrandommap=1
runtime=600
sync=0
direct=1
iodepth=64
ioengine=libaio
loops=10000
exitall
size=16091503001

[cpu2]
cpus_allowed=2
numjobs=8
filename=/dev/sdc
group_reporting
bs=4k
rw=randrw
rwmixread=0
softrandommap=1
runtime=600
sync=0
direct=1
iodepth=64
ioengine=libaio
loops=10000
exitall
size=16091503001

[cpu3]
cpus_allowed=3
numjobs=8
filename=/dev/sdd
group_reporting
bs=4k
rw=randrw
rwmixread=0
softrandommap=1
runtime=600
sync=0
direct=1
iodepth=64
ioengine=libaio
loops=10000
exitall
size=16091503001

I would sure rather do that on the command line and not create a file,
but the groups never worked out for me on the command line... hints
would be appreciated.

Thanks,

Chris



* Re: request for job files
  2009-04-22 20:32 ` Chris Worley
@ 2009-04-23  5:50   ` Jens Axboe
  2009-04-23 15:48     ` Chris Worley
  0 siblings, 1 reply; 8+ messages in thread
From: Jens Axboe @ 2009-04-23  5:50 UTC (permalink / raw)
  To: Chris Worley; +Cc: fio

On Wed, Apr 22 2009, Chris Worley wrote:
> I would sure rather do that on the command line and not create a file,
> but the groups never worked out for me on the command line... hints
> would be appreciated.

This is good stuff! Just a quick comment that may improve your situation
- you do know that you can include environment variables in job files?
Take this sample section:

[cpu3]
cpus_allowed=3
numjobs=8
filename=${CPU3FN}
group_reporting
bs=4k
rw=randrw
rwmixread=0
softrandommap=1
runtime=600
sync=0
direct=1
iodepth=64
ioengine=libaio
loops=10000
exitall
size=${CPU3SZ}

(if those two are the only unique ones) and set the CPU3FN and CPU3SZ
environment variables before running fio, like so:

$ CPU3FN=/dev/sdd CPU3SZ=16091503001 fio my-job-file

Repeat for the extra ones you need. It also looks like you can put a lot
of that into the [global] section, which applies to all your jobs in the
job file.

As for doing it on the command line, you should be able to just set the
shared parameters first, then continue with the per-job options:

fio --bs=4k ... --name=cpu3 --filename=/dev/sdd --size=16091503001
--name=cpu2 --filename=/dev/sdc --size=xxxx

and so on. Does that not work properly? I must say that I never use the
command line myself, I always write a job file. Matter of habit, I
guess. Anyway, if we condense your job file a bit, it ends up like this:

[global]
numjobs=8
group_reporting
bs=4k
rwmixread=0
rw=randrw
runtime=600
softrandommap=1
sync=0
direct=1
iodepth=64
ioengine=libaio
loops=10000
exitall

[cpu0]
cpus_allowed=0
filename=/dev/sda
size=16091503001

[cpu1]
cpus_allowed=1
filename=/dev/sdb
size=16091503001

[cpu2]
cpus_allowed=2
filename=/dev/sdc
size=16091503001

[cpu3]
cpus_allowed=3
filename=/dev/sdd
size=16091503001

Running that through fio --showcmd, it gives us:

fio --numjobs=8 --group_reporting --bs=4k --rwmixread=0 --rw=randrw
--runtime=600 --softrandommap=1 --sync=0 --direct=1 --iodepth=64
--ioengine=libaio --loops=10000 --exitall --name=cpu0
--filename=/dev/sda --cpus_allowed=0 --size=16091503001 --name=cpu1
--filename=/dev/sdb --cpus_allowed=1 --size=16091503001 --name=cpu2
--filename=/dev/sdc --cpus_allowed=2 --size=16091503001 --name=cpu3
--filename=/dev/sdd --cpus_allowed=3 --size=16091503001

And as a final note: if you're using rw=randrw with rwmixread=0, you
should probably just use rw=randwrite instead :-)

-- 
Jens Axboe



* Re: request for job files
  2009-04-23  5:50   ` Jens Axboe
@ 2009-04-23 15:48     ` Chris Worley
  2009-04-23 18:34       ` Jens Axboe
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Worley @ 2009-04-23 15:48 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

Jens,

Thanks for the tips!

One other question: in the first case, where I use only one group with
multiple file names and I'm summing 10% of the disks' sizes, is fio
going to evenly distribute that size among the disks/files?  That
seems to be the case, but I'm not sure.

Thanks,

Chris

* Re: request for job files
  2009-04-23 15:48     ` Chris Worley
@ 2009-04-23 18:34       ` Jens Axboe
  0 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2009-04-23 18:34 UTC (permalink / raw)
  To: Chris Worley; +Cc: fio

On Thu, Apr 23 2009, Chris Worley wrote:
> Jens,
> 
> Thanks for the tips!
> 
> One other question: in the first case, where I use only one group with
> multiple file names and I'm summing 10% of the disks' sizes, is fio
> going to evenly distribute that size among the disks/files?  That
> seems to be the case, but I'm not sure.

Yes, it'll distribute evenly. Also see the file_service_type option.
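
Something like this shows both (an untested sketch, with made-up device
names and size; as above, the given size is split evenly across the two
targets):

[two-disks]
filename=/dev/sda:/dev/sdb
file_service_type=roundrobin
size=32183006002
rw=randwrite
bs=4k
iodepth=64
ioengine=libaio
direct=1
runtime=600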

-- 
Jens Axboe



* Re: request for job files
  2009-04-22 13:22 request for job files Jens Axboe
  2009-04-22 16:58 ` Girish Satihal
  2009-04-22 20:32 ` Chris Worley
@ 2009-04-24  5:00 ` Gurudas Pai
  2 siblings, 0 replies; 8+ messages in thread
From: Gurudas Pai @ 2009-04-24  5:00 UTC (permalink / raw)
  To: Jens Axboe; +Cc: fio

> The sample job files shipped with fio are (generally) pretty weak, and
> I'd really love for the selection to be better. 

Hi,

Please see if the following job file is useful. It was used to uncover
two bugs in the mainline kernel.


# This job file was used to uncover two mainline kernel bugs:
#  - oops in bio_get_nr_vecs+0x0/0x30, http://lkml.org/lkml/2007/7/31/580
#  - fix bad unlock_page() in error case, http://lkml.org/lkml/2007/7/20/168

[global]
bs=8k
iodepth=1024
iodepth_batch=60
randrepeat=1
size=1m
# Use any directory on an ext3 filesystem for <directory>
directory=<directory>
numjobs=20
[job1]
ioengine=sync
bs=1k
direct=1
rw=randread
filename=file1:file2
[job2]
ioengine=libaio
rw=randwrite
direct=1
filename=file1:file2
[job3]
bs=1k
ioengine=posixaio
rw=randwrite
direct=1
filename=file1:file2
[job4]
ioengine=splice
direct=1
rw=randwrite
filename=file1:file2
[job5]
bs=1k
ioengine=sync
rw=randread
filename=file1:file2
[job7]
ioengine=libaio
rw=randwrite
filename=file1:file2
[job8]
ioengine=posixaio
rw=randwrite
filename=file1:file2
[job9]
ioengine=splice
rw=randwrite
filename=file1:file2
[job10]
ioengine=mmap
rw=randwrite
bs=1k
filename=file1:file2
[job11]
ioengine=mmap
rw=randwrite
direct=1
filename=file1:file2
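
To run it (a sketch; the job file name and path below are just
examples), save it as e.g. thrash-ext3.fio, replace <directory> with a
directory on an ext3 filesystem, and run:

  fio thrash-ext3.fio

fio should create file1 and file2 under that directory if they don't
already exist.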


Thanks,
-Guru


