* fio reporting high value of read iops at iodepth 1
@ 2020-05-16 14:53 Debraj Manna
  2020-05-16 15:29 ` Sitsofe Wheeler
  0 siblings, 1 reply; 9+ messages in thread
From: Debraj Manna @ 2020-05-16 14:53 UTC (permalink / raw)
  To: fio

Hi,

I joined the mailing list yesterday. I am reposting the query below
as I don't see yesterday's post in the mailing list archive.

I am using fio-2.2.10 on Ubuntu 16.04.

In one of my setups I am observing unexpectedly high read iops at
io-depth = 1. One run reports iops=11418, another iops=33507, and yet
another iops=44329. This does not seem correct. The run results are
below. Can someone let me know what is going wrong and whether I am
hitting a known issue in fio?

RUN 1

ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
/dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
--direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
--iodepth=1 --size=200M ; rm -f $file
/var/lib/ubuntu/fio-oPHRC8lfJujNk
bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.2.10
Starting 1 process
bw-test: Laying out IO file(s) (1 file(s) / 200MB)
Jobs: 1 (f=1): [r(1)] [100.0% done] [49132KB/0KB/0KB /s] [12.3K/0/0
iops] [eta 00m:00s]
bw-test: (groupid=0, jobs=1): err= 0: pid=48549: Fri May 15 18:42:08 2020
  read : io=204800KB, bw=45674KB/s, iops=11418, runt=  4484msec
    slat (usec): min=4, max=258, avg= 5.53, stdev= 2.20
    clat (usec): min=0, max=22304, avg=81.16, stdev=230.44
     lat (usec): min=17, max=22309, avg=86.78, stdev=230.46
    clat percentiles (usec):
     |  1.00th=[   14],  5.00th=[   14], 10.00th=[   15], 20.00th=[   16],
     | 30.00th=[   19], 40.00th=[   19], 50.00th=[   20], 60.00th=[   20],
     | 70.00th=[   21], 80.00th=[   25], 90.00th=[  266], 95.00th=[  286],
     | 99.00th=[  532], 99.50th=[ 1400], 99.90th=[ 2832], 99.95th=[ 3088],
     | 99.99th=[ 5280]
    bw (KB  /s): min=33760, max=51584, per=98.18%, avg=44842.00, stdev=6783.12
    lat (usec) : 2=0.04%, 4=0.01%, 10=0.02%, 20=45.62%, 50=35.21%
    lat (usec) : 100=0.15%, 250=2.24%, 500=15.67%, 750=0.25%, 1000=0.16%
    lat (msec) : 2=0.28%, 4=0.34%, 10=0.01%, 50=0.01%
  cpu          : usr=0.98%, sys=10.53%, ctx=51195, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=204800KB, aggrb=45673KB/s, minb=45673KB/s, maxb=45673KB/s,
mint=4484msec, maxt=4484msec

Disk stats (read/write):
    dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
aggrin_queue=4048, aggrutil=87.75%
  sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%

RUN 2

ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
/dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
--direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
--iodepth=1 --size=200M ; rm -f $file
/var/lib/ubuntu/fio-PipAwOU94ybHL
bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
fio-2.2.10
Starting 1 process
bw-test: Laying out IO file(s) (1 file(s) / 200MB)
Jobs: 1 (f=1)
bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
  read : io=204800KB, bw=134031KB/s, iops=33507, runt=  1528msec
    slat (usec): min=4, max=19840, avg= 5.38, stdev=87.66
    clat (usec): min=1, max=3589, avg=23.64, stdev=89.84
     lat (usec): min=17, max=19863, avg=29.07, stdev=125.52
    clat percentiles (usec):
     |  1.00th=[   14],  5.00th=[   15], 10.00th=[   15], 20.00th=[   15],
     | 30.00th=[   15], 40.00th=[   15], 50.00th=[   16], 60.00th=[   16],
     | 70.00th=[   16], 80.00th=[   16], 90.00th=[   17], 95.00th=[   17],
     | 99.00th=[  286], 99.50th=[  318], 99.90th=[ 1576], 99.95th=[ 2352],
     | 99.99th=[ 3056]
    bw (KB  /s): min=119592, max=143792, per=99.48%, avg=133333.33,
stdev=12429.48
    lat (usec) : 2=0.01%, 10=0.01%, 20=97.54%, 50=0.47%, 100=0.09%
    lat (usec) : 250=0.05%, 500=1.61%, 750=0.06%, 1000=0.02%
    lat (msec) : 2=0.06%, 4=0.08%
  cpu          : usr=1.83%, sys=31.17%, ctx=51196, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: io=204800KB, aggrb=134031KB/s, minb=134031KB/s,
maxb=134031KB/s, mint=1528msec, maxt=1528msec

Disk stats (read/write):
    dm-6: ios=43347/0, merge=0/0, ticks=1080/0, in_queue=1084,
util=75.80%, aggrios=51200/10, aggrmerge=0/6, aggrticks=1048/0,
aggrin_queue=1048, aggrutil=62.83%
  sda: ios=51200/10, merge=0/6, ticks=1048/0, in_queue=1048, util=62.83%



* Re: fio reporting high value of read iops at iodepth 1
  2020-05-16 14:53 fio reporting high value of read iops at iodepth 1 Debraj Manna
@ 2020-05-16 15:29 ` Sitsofe Wheeler
  2020-05-16 16:28   ` Debraj Manna
  0 siblings, 1 reply; 9+ messages in thread
From: Sitsofe Wheeler @ 2020-05-16 15:29 UTC (permalink / raw)
  To: Debraj Manna; +Cc: fio

Hi Debraj,

On Sat, 16 May 2020 at 16:05, Debraj Manna <subharaj.manna@gmail.com> wrote:
>
> HI
>
> I joined the mailing list yesterday. I am reposting the below query as
> I don't see my yesterday's post in the mailing list archive.
>
> I am using fio-2.2.10 on Ubuntu 16.04.

Just briefly, I'm afraid we (upstream) don't support older versions of
fio. We're all the way up to fio 3.19 at the time of writing (see
https://github.com/axboe/fio/releases ). I'll keep looking but you
might not get much more of a response... It's worth noting that it's
fairly easy to build new versions by hand though:
https://github.com/axboe/fio/blob/fio-3.19/README#L132 .

> In one of my setup I am observing high value of read iops at io-depth
> = 1.  One run is reporting iops=11418, another run iops=33507 and
> iops=44329This does not seem to be correct. Below are the run results.
> Can someone let me know what is going wrong and if I am hitting some
> known issue in fio.

In all probability it's unlikely to be fio in this case - your job is
only running for 4 seconds and the region you're doing I/O over is
small, so any number of things could perturb the iops you get (another
process stealing CPU time, the state of your disk's cache, how your
filesystem chose to lay out the file, whether the filesystem chose to
do some writeback at that point, what the kernel I/O scheduler chose
to do with the sequence of I/O it was sent, etc.). For some reason your
underlying disk seems busier in the first run (util=87.75%) than in
the second run (util=62.83%), which hints at outside interference (e.g.
another process doing I/O to the same filesystem, or even the kernel
doing swap on a different partition of the same disk).

Given that so much has changed since your version I couldn't say
whether you are also hitting some since-fixed bug, but as previously
mentioned it seems unlikely. Maybe do I/O on a much larger file to see
if the variation between runs goes down? Good luck!
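
For illustration, such a longer run over a bigger file might look
something like the following (an untested sketch; the 8G size and the
60-second time-based runtime are arbitrary assumptions - adjust them
to your disk and free space):

  file=/var/lib/ubuntu/fio-bigtest
  fio --filename=$file --direct=1 --ioengine=libaio --bs=4K \
      --name=bw-test --rw=randread --iodepth=1 \
      --size=8G --runtime=60 --time_based
  rm -f $file

With --time_based and --runtime the job keeps issuing random reads for
a fixed wall-clock time, so short-lived disturbances have less effect
on the reported iops.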

> RUN 1
>
> ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> --iodepth=1 --size=200M ; rm -f $file
> /var/lib/ubuntu/fio-oPHRC8lfJujNk
> bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> fio-2.2.10
> Starting 1 process
> bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> Jobs: 1 (f=1): [r(1)] [100.0% done] [49132KB/0KB/0KB /s] [12.3K/0/0
> iops] [eta 00m:00s]
> bw-test: (groupid=0, jobs=1): err= 0: pid=48549: Fri May 15 18:42:08 2020
>   read : io=204800KB, bw=45674KB/s, iops=11418, runt=  4484msec
>     slat (usec): min=4, max=258, avg= 5.53, stdev= 2.20
>     clat (usec): min=0, max=22304, avg=81.16, stdev=230.44
>      lat (usec): min=17, max=22309, avg=86.78, stdev=230.46
>     clat percentiles (usec):
>      |  1.00th=[   14],  5.00th=[   14], 10.00th=[   15], 20.00th=[   16],
>      | 30.00th=[   19], 40.00th=[   19], 50.00th=[   20], 60.00th=[   20],
>      | 70.00th=[   21], 80.00th=[   25], 90.00th=[  266], 95.00th=[  286],
>      | 99.00th=[  532], 99.50th=[ 1400], 99.90th=[ 2832], 99.95th=[ 3088],
>      | 99.99th=[ 5280]
>     bw (KB  /s): min=33760, max=51584, per=98.18%, avg=44842.00, stdev=6783.12
>     lat (usec) : 2=0.04%, 4=0.01%, 10=0.02%, 20=45.62%, 50=35.21%
>     lat (usec) : 100=0.15%, 250=2.24%, 500=15.67%, 750=0.25%, 1000=0.16%
>     lat (msec) : 2=0.28%, 4=0.34%, 10=0.01%, 50=0.01%
>   cpu          : usr=0.98%, sys=10.53%, ctx=51195, majf=0, minf=10
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
>      latency   : target=0, window=0, percentile=100.00%, depth=1
>
> Run status group 0 (all jobs):
>    READ: io=204800KB, aggrb=45673KB/s, minb=45673KB/s, maxb=45673KB/s,
> mint=4484msec, maxt=4484msec
>
> Disk stats (read/write):
>     dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
> util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
> aggrin_queue=4048, aggrutil=87.75%
>   sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%
>
> RUN 2
>
> ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> --iodepth=1 --size=200M ; rm -f $file
> /var/lib/ubuntu/fio-PipAwOU94ybHL
> bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> fio-2.2.10
> Starting 1 process
> bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> Jobs: 1 (f=1)
> bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
>   read : io=204800KB, bw=134031KB/s, iops=33507, runt=  1528msec
>     slat (usec): min=4, max=19840, avg= 5.38, stdev=87.66
>     clat (usec): min=1, max=3589, avg=23.64, stdev=89.84
>      lat (usec): min=17, max=19863, avg=29.07, stdev=125.52
>     clat percentiles (usec):
>      |  1.00th=[   14],  5.00th=[   15], 10.00th=[   15], 20.00th=[   15],
>      | 30.00th=[   15], 40.00th=[   15], 50.00th=[   16], 60.00th=[   16],
>      | 70.00th=[   16], 80.00th=[   16], 90.00th=[   17], 95.00th=[   17],
>      | 99.00th=[  286], 99.50th=[  318], 99.90th=[ 1576], 99.95th=[ 2352],
>      | 99.99th=[ 3056]
>     bw (KB  /s): min=119592, max=143792, per=99.48%, avg=133333.33,
> stdev=12429.48
>     lat (usec) : 2=0.01%, 10=0.01%, 20=97.54%, 50=0.47%, 100=0.09%
>     lat (usec) : 250=0.05%, 500=1.61%, 750=0.06%, 1000=0.02%
>     lat (msec) : 2=0.06%, 4=0.08%
>   cpu          : usr=1.83%, sys=31.17%, ctx=51196, majf=0, minf=11
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
>      latency   : target=0, window=0, percentile=100.00%, depth=1
>
> Run status group 0 (all jobs):
>    READ: io=204800KB, aggrb=134031KB/s, minb=134031KB/s,
> maxb=134031KB/s, mint=1528msec, maxt=1528msec
>
> Disk stats (read/write):
>     dm-6: ios=43347/0, merge=0/0, ticks=1080/0, in_queue=1084,
> util=75.80%, aggrios=51200/10, aggrmerge=0/6, aggrticks=1048/0,
> aggrin_queue=1048, aggrutil=62.83%
>   sda: ios=51200/10, merge=0/6, ticks=1048/0, in_queue=1048, util=62.83%

-- 
Sitsofe | http://sucs.org/~sits/



* Re: fio reporting high value of read iops at iodepth 1
  2020-05-16 15:29 ` Sitsofe Wheeler
@ 2020-05-16 16:28   ` Debraj Manna
  2020-05-16 16:33     ` Debraj Manna
  0 siblings, 1 reply; 9+ messages in thread
From: Debraj Manna @ 2020-05-16 16:28 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: fio

Thanks for replying.

I tried building fio 3.19 as per the link you shared.

I did not see any error during the build process.

ubuntu@vrni-proxy:~$ fio --version
fio-3.19

But when I try to use fio I get the error below:

ubuntu@vrni-proxy:~$ suf=`head /dev/urandom | tr -dc A-Za-z0-9 | head
-c 13 ; echo ''` && file="/var/lib/ubuntu/fio-$suf" && echo $file &&
fio --filename=$file --direct=1  --ioengine=libaio --bs=4K
--name=bw-test --rw=randread --iodepth=1 --size=200M ; rm -f $file
/var/lib/ubuntu/fio-V9wrfA2MyjvVL
fio: engine libaio not loadable
fio: engine libaio not loadable
fio: failed to load engine

On Sat, May 16, 2020 at 8:59 PM Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>
> Hi Debraj,
>
> On Sat, 16 May 2020 at 16:05, Debraj Manna <subharaj.manna@gmail.com> wrote:
> >
> > HI
> >
> > I joined the mailing list yesterday. I am reposting the below query as
> > I don't see my yesterday's post in the mailing list archive.
> >
> > I am using fio-2.2.10 on Ubuntu 16.04.
>
> Just briefly I'm afraid we (upstream) don't support older versions of
> fio. We're all the way up to fio 3.19 at the time of writing (see
> https://github.com/axboe/fio/releases ). I'll keep looking but you
> might not get much more of a response... It's worth noting that it's
> fairly easy to build new versions by hand though:
> https://github.com/axboe/fio/blob/fio-3.19/README#L132 .
>
> > In one of my setup I am observing high value of read iops at io-depth
> > = 1.  One run is reporting iops=11418, another run iops=33507 and
> > iops=44329This does not seem to be correct. Below are the run results.
> > Can someone let me know what is going wrong and if I am hitting some
> > known issue in fio.
>
> In all probability it's unlikely to be fio in this case - your jobs is
> only running for 4 seconds and the region you're doing I/O over is
> small so any number of things could perturb the iops you get (another
> process stealing CPU time, the state of your disks cache, how your
> filesystem chose to layout the file, whether the filesystem chose to
> do some writeback at that point, what the kernel I/O scheduler chose
> to do with the sequence of I/O it was sent etc). For some reason your
> underlying disk seems less busier in the first run (util=87.75%) than
> the second run (util=62.83%) which hints at outside interference (e.g.
> another process doing I/O the same filesystem or even the kernel doing
> swap on a different partition of the same disk).
>
> Given so much has changed since your version I couldn't say if you are
> also hitting some fixed bug but as previously mentioned it seems
> unlikely. Maybe do I/O on a much larger file to see if the variation
> between runs goes down? Good luck!
>
> > RUN 1
> >
> > ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> > /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> > file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> > --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> > --iodepth=1 --size=200M ; rm -f $file
> > /var/lib/ubuntu/fio-oPHRC8lfJujNk
> > bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> > fio-2.2.10
> > Starting 1 process
> > bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> > Jobs: 1 (f=1): [r(1)] [100.0% done] [49132KB/0KB/0KB /s] [12.3K/0/0
> > iops] [eta 00m:00s]
> > bw-test: (groupid=0, jobs=1): err= 0: pid=48549: Fri May 15 18:42:08 2020
> >   read : io=204800KB, bw=45674KB/s, iops=11418, runt=  4484msec
> >     slat (usec): min=4, max=258, avg= 5.53, stdev= 2.20
> >     clat (usec): min=0, max=22304, avg=81.16, stdev=230.44
> >      lat (usec): min=17, max=22309, avg=86.78, stdev=230.46
> >     clat percentiles (usec):
> >      |  1.00th=[   14],  5.00th=[   14], 10.00th=[   15], 20.00th=[   16],
> >      | 30.00th=[   19], 40.00th=[   19], 50.00th=[   20], 60.00th=[   20],
> >      | 70.00th=[   21], 80.00th=[   25], 90.00th=[  266], 95.00th=[  286],
> >      | 99.00th=[  532], 99.50th=[ 1400], 99.90th=[ 2832], 99.95th=[ 3088],
> >      | 99.99th=[ 5280]
> >     bw (KB  /s): min=33760, max=51584, per=98.18%, avg=44842.00, stdev=6783.12
> >     lat (usec) : 2=0.04%, 4=0.01%, 10=0.02%, 20=45.62%, 50=35.21%
> >     lat (usec) : 100=0.15%, 250=2.24%, 500=15.67%, 750=0.25%, 1000=0.16%
> >     lat (msec) : 2=0.28%, 4=0.34%, 10=0.01%, 50=0.01%
> >   cpu          : usr=0.98%, sys=10.53%, ctx=51195, majf=0, minf=10
> >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
> >      latency   : target=0, window=0, percentile=100.00%, depth=1
> >
> > Run status group 0 (all jobs):
> >    READ: io=204800KB, aggrb=45673KB/s, minb=45673KB/s, maxb=45673KB/s,
> > mint=4484msec, maxt=4484msec
> >
> > Disk stats (read/write):
> >     dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
> > util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
> > aggrin_queue=4048, aggrutil=87.75%
> >   sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%
> >
> > RUN 2
> >
> > ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> > /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> > file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> > --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> > --iodepth=1 --size=200M ; rm -f $file
> > /var/lib/ubuntu/fio-PipAwOU94ybHL
> > bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> > fio-2.2.10
> > Starting 1 process
> > bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> > Jobs: 1 (f=1)
> > bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
> >   read : io=204800KB, bw=134031KB/s, iops=33507, runt=  1528msec
> >     slat (usec): min=4, max=19840, avg= 5.38, stdev=87.66
> >     clat (usec): min=1, max=3589, avg=23.64, stdev=89.84
> >      lat (usec): min=17, max=19863, avg=29.07, stdev=125.52
> >     clat percentiles (usec):
> >      |  1.00th=[   14],  5.00th=[   15], 10.00th=[   15], 20.00th=[   15],
> >      | 30.00th=[   15], 40.00th=[   15], 50.00th=[   16], 60.00th=[   16],
> >      | 70.00th=[   16], 80.00th=[   16], 90.00th=[   17], 95.00th=[   17],
> >      | 99.00th=[  286], 99.50th=[  318], 99.90th=[ 1576], 99.95th=[ 2352],
> >      | 99.99th=[ 3056]
> >     bw (KB  /s): min=119592, max=143792, per=99.48%, avg=133333.33,
> > stdev=12429.48
> >     lat (usec) : 2=0.01%, 10=0.01%, 20=97.54%, 50=0.47%, 100=0.09%
> >     lat (usec) : 250=0.05%, 500=1.61%, 750=0.06%, 1000=0.02%
> >     lat (msec) : 2=0.06%, 4=0.08%
> >   cpu          : usr=1.83%, sys=31.17%, ctx=51196, majf=0, minf=11
> >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
> >      latency   : target=0, window=0, percentile=100.00%, depth=1
> >
> > Run status group 0 (all jobs):
> >    READ: io=204800KB, aggrb=134031KB/s, minb=134031KB/s,
> > maxb=134031KB/s, mint=1528msec, maxt=1528msec
> >
> > Disk stats (read/write):
> >     dm-6: ios=43347/0, merge=0/0, ticks=1080/0, in_queue=1084,
> > util=75.80%, aggrios=51200/10, aggrmerge=0/6, aggrticks=1048/0,
> > aggrin_queue=1048, aggrutil=62.83%
> >   sda: ios=51200/10, merge=0/6, ticks=1048/0, in_queue=1048, util=62.83%
>
> --
> Sitsofe | http://sucs.org/~sits/



* Re: fio reporting high value of read iops at iodepth 1
  2020-05-16 16:28   ` Debraj Manna
@ 2020-05-16 16:33     ` Debraj Manna
  2020-05-22  9:01       ` Debraj Manna
  0 siblings, 1 reply; 9+ messages in thread
From: Debraj Manna @ 2020-05-16 16:33 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: fio

I had to install libaio-dev. Now it is fine.
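
For reference, the fix was roughly the following (a sketch from
memory; libaio-dev is the Ubuntu package name, other distributions
may differ):

  sudo apt-get install libaio-dev    # libaio headers that fio's configure checks for
  # then, back in the fio source tree:
  ./configure && make && sudo make install

After rebuilding, --ioengine=libaio loads as expected.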

On Sat, May 16, 2020 at 9:58 PM Debraj Manna <subharaj.manna@gmail.com> wrote:
>
> Thanks for replying.
>
> I tried building fio 3.19 as per the link you shared.
>
> I did not see any error during the build process.
>
> ubuntu@vrni-proxy:~$ fio --version
> fio-3.19
>
> But on trying to use fio I am getting the below error
>
> ubuntu@vrni-proxy:~$ suf=`head /dev/urandom | tr -dc A-Za-z0-9 | head
> -c 13 ; echo ''` && file="/var/lib/ubuntu/fio-$suf" && echo $file &&
> fio --filename=$file --direct=1  --ioengine=libaio --bs=4K
> --name=bw-test --rw=randread --iodepth=1 --size=200M ; rm -f $file
> /var/lib/ubuntu/fio-V9wrfA2MyjvVL
> fio: engine libaio not loadable
> fio: engine libaio not loadable
> fio: failed to load engine
>
> On Sat, May 16, 2020 at 8:59 PM Sitsofe Wheeler <sitsofe@gmail.com> wrote:
> >
> > Hi Debraj,
> >
> > On Sat, 16 May 2020 at 16:05, Debraj Manna <subharaj.manna@gmail.com> wrote:
> > >
> > > HI
> > >
> > > I joined the mailing list yesterday. I am reposting the below query as
> > > I don't see my yesterday's post in the mailing list archive.
> > >
> > > I am using fio-2.2.10 on Ubuntu 16.04.
> >
> > Just briefly I'm afraid we (upstream) don't support older versions of
> > fio. We're all the way up to fio 3.19 at the time of writing (see
> > https://github.com/axboe/fio/releases ). I'll keep looking but you
> > might not get much more of a response... It's worth noting that it's
> > fairly easy to build new versions by hand though:
> > https://github.com/axboe/fio/blob/fio-3.19/README#L132 .
> >
> > > In one of my setup I am observing high value of read iops at io-depth
> > > = 1.  One run is reporting iops=11418, another run iops=33507 and
> > > iops=44329This does not seem to be correct. Below are the run results.
> > > Can someone let me know what is going wrong and if I am hitting some
> > > known issue in fio.
> >
> > In all probability it's unlikely to be fio in this case - your jobs is
> > only running for 4 seconds and the region you're doing I/O over is
> > small so any number of things could perturb the iops you get (another
> > process stealing CPU time, the state of your disks cache, how your
> > filesystem chose to layout the file, whether the filesystem chose to
> > do some writeback at that point, what the kernel I/O scheduler chose
> > to do with the sequence of I/O it was sent etc). For some reason your
> > underlying disk seems less busier in the first run (util=87.75%) than
> > the second run (util=62.83%) which hints at outside interference (e.g.
> > another process doing I/O the same filesystem or even the kernel doing
> > swap on a different partition of the same disk).
> >
> > Given so much has changed since your version I couldn't say if you are
> > also hitting some fixed bug but as previously mentioned it seems
> > unlikely. Maybe do I/O on a much larger file to see if the variation
> > between runs goes down? Good luck!
> >
> > > RUN 1
> > >
> > > ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> > > /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> > > file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> > > --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> > > --iodepth=1 --size=200M ; rm -f $file
> > > /var/lib/ubuntu/fio-oPHRC8lfJujNk
> > > bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> > > fio-2.2.10
> > > Starting 1 process
> > > bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> > > Jobs: 1 (f=1): [r(1)] [100.0% done] [49132KB/0KB/0KB /s] [12.3K/0/0
> > > iops] [eta 00m:00s]
> > > bw-test: (groupid=0, jobs=1): err= 0: pid=48549: Fri May 15 18:42:08 2020
> > >   read : io=204800KB, bw=45674KB/s, iops=11418, runt=  4484msec
> > >     slat (usec): min=4, max=258, avg= 5.53, stdev= 2.20
> > >     clat (usec): min=0, max=22304, avg=81.16, stdev=230.44
> > >      lat (usec): min=17, max=22309, avg=86.78, stdev=230.46
> > >     clat percentiles (usec):
> > >      |  1.00th=[   14],  5.00th=[   14], 10.00th=[   15], 20.00th=[   16],
> > >      | 30.00th=[   19], 40.00th=[   19], 50.00th=[   20], 60.00th=[   20],
> > >      | 70.00th=[   21], 80.00th=[   25], 90.00th=[  266], 95.00th=[  286],
> > >      | 99.00th=[  532], 99.50th=[ 1400], 99.90th=[ 2832], 99.95th=[ 3088],
> > >      | 99.99th=[ 5280]
> > >     bw (KB  /s): min=33760, max=51584, per=98.18%, avg=44842.00, stdev=6783.12
> > >     lat (usec) : 2=0.04%, 4=0.01%, 10=0.02%, 20=45.62%, 50=35.21%
> > >     lat (usec) : 100=0.15%, 250=2.24%, 500=15.67%, 750=0.25%, 1000=0.16%
> > >     lat (msec) : 2=0.28%, 4=0.34%, 10=0.01%, 50=0.01%
> > >   cpu          : usr=0.98%, sys=10.53%, ctx=51195, majf=0, minf=10
> > >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> > >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
> > >      latency   : target=0, window=0, percentile=100.00%, depth=1
> > >
> > > Run status group 0 (all jobs):
> > >    READ: io=204800KB, aggrb=45673KB/s, minb=45673KB/s, maxb=45673KB/s,
> > > mint=4484msec, maxt=4484msec
> > >
> > > Disk stats (read/write):
> > >     dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
> > > util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
> > > aggrin_queue=4048, aggrutil=87.75%
> > >   sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%
> > >
> > > RUN 2
> > >
> > > ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> > > /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> > > file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> > > --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> > > --iodepth=1 --size=200M ; rm -f $file
> > > /var/lib/ubuntu/fio-PipAwOU94ybHL
> > > bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> > > fio-2.2.10
> > > Starting 1 process
> > > bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> > > Jobs: 1 (f=1)
> > > bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
> > >   read : io=204800KB, bw=134031KB/s, iops=33507, runt=  1528msec
> > >     slat (usec): min=4, max=19840, avg= 5.38, stdev=87.66
> > >     clat (usec): min=1, max=3589, avg=23.64, stdev=89.84
> > >      lat (usec): min=17, max=19863, avg=29.07, stdev=125.52
> > >     clat percentiles (usec):
> > >      |  1.00th=[   14],  5.00th=[   15], 10.00th=[   15], 20.00th=[   15],
> > >      | 30.00th=[   15], 40.00th=[   15], 50.00th=[   16], 60.00th=[   16],
> > >      | 70.00th=[   16], 80.00th=[   16], 90.00th=[   17], 95.00th=[   17],
> > >      | 99.00th=[  286], 99.50th=[  318], 99.90th=[ 1576], 99.95th=[ 2352],
> > >      | 99.99th=[ 3056]
> > >     bw (KB  /s): min=119592, max=143792, per=99.48%, avg=133333.33,
> > > stdev=12429.48
> > >     lat (usec) : 2=0.01%, 10=0.01%, 20=97.54%, 50=0.47%, 100=0.09%
> > >     lat (usec) : 250=0.05%, 500=1.61%, 750=0.06%, 1000=0.02%
> > >     lat (msec) : 2=0.06%, 4=0.08%
> > >   cpu          : usr=1.83%, sys=31.17%, ctx=51196, majf=0, minf=11
> > >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> > >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
> > >      latency   : target=0, window=0, percentile=100.00%, depth=1
> > >
> > > Run status group 0 (all jobs):
> > >    READ: io=204800KB, aggrb=134031KB/s, minb=134031KB/s,
> > > maxb=134031KB/s, mint=1528msec, maxt=1528msec
> > >
> > > Disk stats (read/write):
> > >     dm-6: ios=43347/0, merge=0/0, ticks=1080/0, in_queue=1084,
> > > util=75.80%, aggrios=51200/10, aggrmerge=0/6, aggrticks=1048/0,
> > > aggrin_queue=1048, aggrutil=62.83%
> > >   sda: ios=51200/10, merge=0/6, ticks=1048/0, in_queue=1048, util=62.83%
> >
> > --
> > Sitsofe | http://sucs.org/~sits/



* Re: fio reporting high value of read iops at iodepth 1
  2020-05-16 16:33     ` Debraj Manna
@ 2020-05-22  9:01       ` Debraj Manna
  2020-05-22 16:04         ` Sitsofe Wheeler
  0 siblings, 1 reply; 9+ messages in thread
From: Debraj Manna @ 2020-05-22  9:01 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: fio

It seems my email below to the group bounced because it contained
HTML. Re-sending in plain text format.

Hi Sitsofe,

A few queries for my understanding:

1. Is there a way to bypass the disk/buffer cache while doing a fio
test? I know about direct=1 but that does not always bypass a cache.

 2. "Maybe do I/O on a much larger file to see if the variation
between runs goes down" - Will the file size be in some ratio of my
system configuration?. For example if I am having a RAM of 32 GB then
should I use a file of size of 64 GB or there are some other
recommended best practices?


On Sat, May 16, 2020 at 10:03 PM Debraj Manna <subharaj.manna@gmail.com> wrote:
>
> I had to install libaio-dev. Now it is fine.
>
> On Sat, May 16, 2020 at 9:58 PM Debraj Manna <subharaj.manna@gmail.com> wrote:
> >
> > Thanks for replying.
> >
> > I tried building fio 3.19 as per the link you shared.
> >
> > I did not see any error during the build process.
> >
> > ubuntu@vrni-proxy:~$ fio --version
> > fio-3.19
> >
> > But on trying to use fio I am getting the below error
> >
> > ubuntu@vrni-proxy:~$ suf=`head /dev/urandom | tr -dc A-Za-z0-9 | head
> > -c 13 ; echo ''` && file="/var/lib/ubuntu/fio-$suf" && echo $file &&
> > fio --filename=$file --direct=1  --ioengine=libaio --bs=4K
> > --name=bw-test --rw=randread --iodepth=1 --size=200M ; rm -f $file
> > /var/lib/ubuntu/fio-V9wrfA2MyjvVL
> > fio: engine libaio not loadable
> > fio: engine libaio not loadable
> > fio: failed to load engine
> >
> > On Sat, May 16, 2020 at 8:59 PM Sitsofe Wheeler <sitsofe@gmail.com> wrote:
> > >
> > > Hi Debraj,
> > >
> > > On Sat, 16 May 2020 at 16:05, Debraj Manna <subharaj.manna@gmail.com> wrote:
> > > >
> > > > HI
> > > >
> > > > I joined the mailing list yesterday. I am reposting the below query as
> > > > I don't see my yesterday's post in the mailing list archive.
> > > >
> > > > I am using fio-2.2.10 on Ubuntu 16.04.
> > >
> > > Just briefly I'm afraid we (upstream) don't support older versions of
> > > fio. We're all the way up to fio 3.19 at the time of writing (see
> > > https://github.com/axboe/fio/releases ). I'll keep looking but you
> > > might not get much more of a response... It's worth noting that it's
> > > fairly easy to build new versions by hand though:
> > > https://github.com/axboe/fio/blob/fio-3.19/README#L132 .
> > >
> > > > In one of my setup I am observing high value of read iops at io-depth
> > > > = 1.  One run is reporting iops=11418, another run iops=33507 and
> > > > iops=44329This does not seem to be correct. Below are the run results.
> > > > Can someone let me know what is going wrong and if I am hitting some
> > > > known issue in fio.
> > >
> > > In all probability it's unlikely to be fio in this case - your jobs is
> > > only running for 4 seconds and the region you're doing I/O over is
> > > small so any number of things could perturb the iops you get (another
> > > process stealing CPU time, the state of your disks cache, how your
> > > filesystem chose to layout the file, whether the filesystem chose to
> > > do some writeback at that point, what the kernel I/O scheduler chose
> > > to do with the sequence of I/O it was sent etc). For some reason your
> > > underlying disk seems less busier in the first run (util=87.75%) than
> > > the second run (util=62.83%) which hints at outside interference (e.g.
> > > another process doing I/O the same filesystem or even the kernel doing
> > > swap on a different partition of the same disk).
> > >
> > > Given so much has changed since your version I couldn't say if you are
> > > also hitting some fixed bug but as previously mentioned it seems
> > > unlikely. Maybe do I/O on a much larger file to see if the variation
> > > between runs goes down? Good luck!
> > >
> > > > RUN 1
> > > >
> > > > ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> > > > /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> > > > file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> > > > --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> > > > --iodepth=1 --size=200M ; rm -f $file
> > > > /var/lib/ubuntu/fio-oPHRC8lfJujNk
> > > > bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> > > > fio-2.2.10
> > > > Starting 1 process
> > > > bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> > > > Jobs: 1 (f=1): [r(1)] [100.0% done] [49132KB/0KB/0KB /s] [12.3K/0/0
> > > > iops] [eta 00m:00s]
> > > > bw-test: (groupid=0, jobs=1): err= 0: pid=48549: Fri May 15 18:42:08 2020
> > > >   read : io=204800KB, bw=45674KB/s, iops=11418, runt=  4484msec
> > > >     slat (usec): min=4, max=258, avg= 5.53, stdev= 2.20
> > > >     clat (usec): min=0, max=22304, avg=81.16, stdev=230.44
> > > >      lat (usec): min=17, max=22309, avg=86.78, stdev=230.46
> > > >     clat percentiles (usec):
> > > >      |  1.00th=[   14],  5.00th=[   14], 10.00th=[   15], 20.00th=[   16],
> > > >      | 30.00th=[   19], 40.00th=[   19], 50.00th=[   20], 60.00th=[   20],
> > > >      | 70.00th=[   21], 80.00th=[   25], 90.00th=[  266], 95.00th=[  286],
> > > >      | 99.00th=[  532], 99.50th=[ 1400], 99.90th=[ 2832], 99.95th=[ 3088],
> > > >      | 99.99th=[ 5280]
> > > >     bw (KB  /s): min=33760, max=51584, per=98.18%, avg=44842.00, stdev=6783.12
> > > >     lat (usec) : 2=0.04%, 4=0.01%, 10=0.02%, 20=45.62%, 50=35.21%
> > > >     lat (usec) : 100=0.15%, 250=2.24%, 500=15.67%, 750=0.25%, 1000=0.16%
> > > >     lat (msec) : 2=0.28%, 4=0.34%, 10=0.01%, 50=0.01%
> > > >   cpu          : usr=0.98%, sys=10.53%, ctx=51195, majf=0, minf=10
> > > >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> > > >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > > >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > > >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
> > > >      latency   : target=0, window=0, percentile=100.00%, depth=1
> > > >
> > > > Run status group 0 (all jobs):
> > > >    READ: io=204800KB, aggrb=45673KB/s, minb=45673KB/s, maxb=45673KB/s,
> > > > mint=4484msec, maxt=4484msec
> > > >
> > > > Disk stats (read/write):
> > > >     dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
> > > > util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
> > > > aggrin_queue=4048, aggrutil=87.75%
> > > >   sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%
> > > >
> > > > RUN 2
> > > >
> > > > ubuntu@vrni-proxy-release:/var/log/foundationdb$ suf=`head
> > > > /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''` &&
> > > > file="/var/lib/ubuntu/fio-$suf" && echo $file && fio --filename=$file
> > > > --direct=1  --ioengine=libaio --bs=4K  --name=bw-test --rw=randread
> > > > --iodepth=1 --size=200M ; rm -f $file
> > > > /var/lib/ubuntu/fio-PipAwOU94ybHL
> > > > bw-test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1
> > > > fio-2.2.10
> > > > Starting 1 process
> > > > bw-test: Laying out IO file(s) (1 file(s) / 200MB)
> > > > Jobs: 1 (f=1)
> > > > bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
> > > >   read : io=204800KB, bw=134031KB/s, iops=33507, runt=  1528msec
> > > >     slat (usec): min=4, max=19840, avg= 5.38, stdev=87.66
> > > >     clat (usec): min=1, max=3589, avg=23.64, stdev=89.84
> > > >      lat (usec): min=17, max=19863, avg=29.07, stdev=125.52
> > > >     clat percentiles (usec):
> > > >      |  1.00th=[   14],  5.00th=[   15], 10.00th=[   15], 20.00th=[   15],
> > > >      | 30.00th=[   15], 40.00th=[   15], 50.00th=[   16], 60.00th=[   16],
> > > >      | 70.00th=[   16], 80.00th=[   16], 90.00th=[   17], 95.00th=[   17],
> > > >      | 99.00th=[  286], 99.50th=[  318], 99.90th=[ 1576], 99.95th=[ 2352],
> > > >      | 99.99th=[ 3056]
> > > >     bw (KB  /s): min=119592, max=143792, per=99.48%, avg=133333.33,
> > > > stdev=12429.48
> > > >     lat (usec) : 2=0.01%, 10=0.01%, 20=97.54%, 50=0.47%, 100=0.09%
> > > >     lat (usec) : 250=0.05%, 500=1.61%, 750=0.06%, 1000=0.02%
> > > >     lat (msec) : 2=0.06%, 4=0.08%
> > > >   cpu          : usr=1.83%, sys=31.17%, ctx=51196, majf=0, minf=11
> > > >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
> > > >      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > > >      complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
> > > >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
> > > >      latency   : target=0, window=0, percentile=100.00%, depth=1
> > > >
> > > > Run status group 0 (all jobs):
> > > >    READ: io=204800KB, aggrb=134031KB/s, minb=134031KB/s,
> > > > maxb=134031KB/s, mint=1528msec, maxt=1528msec
> > > >
> > > > Disk stats (read/write):
> > > >     dm-6: ios=43347/0, merge=0/0, ticks=1080/0, in_queue=1084,
> > > > util=75.80%, aggrios=51200/10, aggrmerge=0/6, aggrticks=1048/0,
> > > > aggrin_queue=1048, aggrutil=62.83%
> > > >   sda: ios=51200/10, merge=0/6, ticks=1048/0, in_queue=1048, util=62.83%
> > >
> > > --
> > > Sitsofe | http://sucs.org/~sits/



* Re: fio reporting high value of read iops at iodepth 1
  2020-05-22  9:01       ` Debraj Manna
@ 2020-05-22 16:04         ` Sitsofe Wheeler
  2020-05-22 16:54           ` Debraj Manna
  0 siblings, 1 reply; 9+ messages in thread
From: Sitsofe Wheeler @ 2020-05-22 16:04 UTC (permalink / raw)
  To: Debraj Manna; +Cc: fio

On Fri, 22 May 2020 at 10:01, Debraj Manna <subharaj.manna@gmail.com> wrote:
>
> Few queries for my understanding
>
> 1. Is there a way to bypass the disk/buffer cache while doing a fio
> test? I know about direct=1 but that does not always bypass a cache.

Generally whether it will or won't is down to what you're doing I/O
against and how you're doing it. If you're doing it to the block
device of a real device then direct=1 usually won't touch the buffer
cache. If you're doing it to a file in a filesystem you're at the whim
of the filesystem (some do, some don't, some do unless you're using
certain options, some do so long as you get all the rules right but
quietly fall back to the buffer cache if you don't, etc.). I do notice
you're only doing reads, and after laying out a file fio tries its
best to drop the cache for that file -
https://github.com/axboe/fio/blob/f09a7773f5821d5b89428419dcef1987ced39b67/filesetup.c#L509
. On reflection, this should mean you're unlikely to be hitting the
kernel cache (maybe check how many I/Os the kernel sent to disk versus
those that fio was asked to do).
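
One way to eyeball that comparison while a job runs (a hedged example,
not the only way; iostat comes from the sysstat package) is to watch
the device in another terminal and compare its reads per second with
the iops fio reports:

  iostat -dxk 1 sda    # run alongside the fio job

If iostat shows far fewer reads reaching sda than fio says it is
issuing, the difference was almost certainly served from a cache
somewhere.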

>  2. "Maybe do I/O on a much larger file to see if the variation
> between runs goes down" - Will the file size be in some ratio of my
> system configuration?. For example if I am having a RAM of 32 GB then
> should I use a file of size of 64 GB or there are some other
> recommended best practices?

This advice is to reduce the effectiveness of caching (if caching
really is taking place - it might not be, see above). If most of your
I/O isn't going to cache then most of your I/O will reflect non-cache
speeds even if you are asking for buffered I/O. Doing too little I/O
is also prone to being disturbed by other things happening on the
system, so I'd expect longer runs to give more reliable results.
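
As a rough rule of thumb (my assumption rather than an official fio
recommendation), sizing the file at about twice the machine's RAM - as
in your 32 GB RAM / 64 GB file example - leaves the page cache unable
to hold much of it:

  mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
  size_mb=$(( mem_kb * 2 / 1024 ))     # roughly 2x RAM, in MB
  fio --filename=/var/lib/ubuntu/fio-big --direct=1 --ioengine=libaio \
      --bs=4K --name=bw-test --rw=randread --iodepth=1 \
      --size=${size_mb}M
  rm -f /var/lib/ubuntu/fio-big

Just remember the filesystem needs that much free space for the layout
phase to succeed.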

-- 
Sitsofe | http://sucs.org/~sits/



* Re: fio reporting high value of read iops at iodepth 1
  2020-05-22 16:04         ` Sitsofe Wheeler
@ 2020-05-22 16:54           ` Debraj Manna
  2020-05-24  9:45             ` Sitsofe Wheeler
  0 siblings, 1 reply; 9+ messages in thread
From: Debraj Manna @ 2020-05-22 16:54 UTC (permalink / raw)
  To: Sitsofe Wheeler; +Cc: fio

Thanks again.

One more query:

"maybe check how many I/Os the kernel sent to disk versus those that
fio was asked to do" - for this, can I rely on iostat, or do you
recommend any other tool?


On Fri, May 22, 2020 at 9:34 PM Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>
> On Fri, 22 May 2020 at 10:01, Debraj Manna <subharaj.manna@gmail.com> wrote:
> >
> > Few queries for my understanding
> >
> > 1. Is there a way to bypass the disk/buffer cache while doing a fio
> > test? I know about direct=1 but that does not always bypass a cache.
>
> Generally whether it will or won't is down to what you're doing I/O
> against and how you're doing it. If you're doing it to a block device
> of a real device then direct=1 usually won't touch the buffer cache.
> If you're doing it to a file in a filesystem you're at the whim of the
> filesystem (some do, some don't , some do unless you're using certain
> options, some do so long as you get all the rules right but if you
> don't then they quietly just use the buffer cache etc). I do notice
> you're only doing reads and after laying out a file fio tries its best
> to drop the cache for that file -
> https://github.com/axboe/fio/blob/f09a7773f5821d5b89428419dcef1987ced39b67/filesetup.c#L509
> . On reflection, this should mean you're unlikely to be hitting the
> kernel cache (maybe check how many I/Os the kernel sent to disk versus
> those that fio was asked to do).
>
> >  2. "Maybe do I/O on a much larger file to see if the variation
> > between runs goes down" - Will the file size be in some ratio of my
> > system configuration?. For example if I am having a RAM of 32 GB then
> > should I use a file of size of 64 GB or there are some other
> > recommended best practices?
>
> This advice is to reduce the effectiveness of caching (if caching
> really is taking place it might not be - see above). If most of your
> I/O isn't going to cache then most of your I/O will reflect non-cache
> speeds even if you are asking for buffered I/O. Doing too little I/O
> is prone to be disturbed by other things that may be happening on the
> system so I'd expect longer results to be more reliable.
>
> --
> Sitsofe | http://sucs.org/~sits/



* Re: fio reporting high value of read iops at iodepth 1
  2020-05-22 16:54           ` Debraj Manna
@ 2020-05-24  9:45             ` Sitsofe Wheeler
       [not found]               ` <CAF6DVKOZvOYw75v-b-JpqG3vPB6O04GnG765zv1ymNisxOuoiw@mail.gmail.com>
  0 siblings, 1 reply; 9+ messages in thread
From: Sitsofe Wheeler @ 2020-05-24  9:45 UTC (permalink / raw)
  To: Debraj Manna; +Cc: fio

On Fri, 22 May 2020 at 17:54, Debraj Manna <subharaj.manna@gmail.com> wrote:
>
> One more query
>
> "maybe check how many I/Os the kernel sent to disk versus those that
> fio was asked to do" - For this can I rely on iostat or do you
> recommend any other tool?

You could do it by manually reading the disk stats collected in
/sys/block/<device>/stat just before the I/O starts, but then you'd
have to somehow know when the real part of the job starts rather than
when the file layout is being done, etc. Further, I believe those are
the underlying files that iostat may use for monitoring anyway (it can
also use /proc/diskstats, but you can see what iostat does by tracing
through both branches around
https://github.com/sysstat/sysstat/blob/b5fd09323bdbd36e34472ee448563f8a7fd34464/iostat.c#L2006
).
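
A minimal before/after sketch of that manual approach (assuming the
device is sda and, per the caveat above, that nothing else does I/O to
it in the same window):

  reads_before=$(awk '{print $1}' /sys/block/sda/stat)   # field 1: reads completed so far
  fio --filename=/var/lib/ubuntu/fio-test --direct=1 --ioengine=libaio \
      --bs=4K --name=bw-test --rw=randread --iodepth=1 --size=200M
  reads_after=$(awk '{print $1}' /sys/block/sda/stat)
  echo "reads that reached sda during the run: $((reads_after - reads_before))"

The delta also covers the layout phase and any other process touching
sda, so treat it as an upper bound rather than an exact count.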

However, you already have a processed version of these statistics in
your fio output. For example, in your RUN 1 you had this:
[...]
> bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
[...]
>   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
[...]
>      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
[...]
> Disk stats (read/write):
>     dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
> util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
> aggrin_queue=4048, aggrutil=87.75%
>   sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%

The "Disk stats (read/write)" show what went out of relevant block
layer devices while your job was running and is mentioned in the fio
documentation (e.g.
https://fio.readthedocs.io/en/latest/fio_doc.html#interpreting-the-output
). Be aware that the stats may also cover I/O not generated by your
job (e.g. I/O done by other processes that was sent to disk,
filesystem metadata I/O etc.) but it's as close as you're going to get
without reaching deeper into the kernel. This is why it's important to
use as quiet a system as possible to avoid interference/contamination.

Looking closer, something interesting in your results is that fio says
it issued 51200 read I/Os, but the device mapper device (dm-6, which
your filesystem is presumably on) apparently only saw 48310 reads AND
saw 2 writes (which suggests something else was happening on that
device, as your fio job wouldn't have been issuing writes at that
point). Curiously, the block device underneath your device mapper
device (sda) saw 51200 reads and 40 writes. That might be because the
underlying block device's stats were read ever so slightly later than
the device mapper's stats, or because some other device mapper device
was also sending I/O to sda (also note that some merging of writes
took place but no merging of reads did).

-- 
Sitsofe | http://sucs.org/~sits/



* Re: fio reporting high value of read iops at iodepth 1
       [not found]               ` <CAF6DVKOZvOYw75v-b-JpqG3vPB6O04GnG765zv1ymNisxOuoiw@mail.gmail.com>
@ 2020-05-25  6:21                 ` Debraj Manna
  0 siblings, 0 replies; 9+ messages in thread
From: Debraj Manna @ 2020-05-25  6:21 UTC (permalink / raw)
  To: fio

Thanks again for the detailed explanation. It is really helpful.


On Mon, May 25, 2020 at 11:48 AM Debraj Manna <subharaj.manna@gmail.com> wrote:
>
> Thanks again for the detailed explanation. It is really helpful.
>
> On Sun, May 24, 2020 at 3:15 PM Sitsofe Wheeler <sitsofe@gmail.com> wrote:
>>
>> On Fri, 22 May 2020 at 17:54, Debraj Manna <subharaj.manna@gmail.com> wrote:
>> >
>> > One more query
>> >
>> > "maybe check how many I/Os the kernel sent to disk versus those that
>> > fio was asked to do" - For this can I rely on iostat or do you
>> > recommend any other tool?
>>
>> You could do it by manually reading the disk stats collected in
>> /sys/block/<device>/stat just before the I/O starts but then you'd
>> have to somehow know when the actual part of the real job starts
>> rather than when you were doing layout etc. Further, I believe those
>> are the underlying files that iostat may use for monitoring anyway (it
>> can also use /proc/diskstats but you can see what iostat does by
>> tracing through both branches around
>> https://github.com/sysstat/sysstat/blob/b5fd09323bdbd36e34472ee448563f8a7fd34464/iostat.c#L2006
>> )...
>>
>> ...but you already have processed versions statistics in your fio
>> output. For example in your RUN 1 you had this:
>> [...]
>> > bw-test: (groupid=0, jobs=1): err= 0: pid=48575: Fri May 15 18:42:15 2020
>> [...]
>> >   IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
>> [...]
>> >      issued    : total=r=51200/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
>> [...]
>> > Disk stats (read/write):
>> >     dm-6: ios=48310/2, merge=0/0, ticks=3944/0, in_queue=3972,
>> > util=90.95%, aggrios=51200/40, aggrmerge=0/205, aggrticks=4048/0,
>> > aggrin_queue=4048, aggrutil=87.75%
>> >   sda: ios=51200/40, merge=0/205, ticks=4048/0, in_queue=4048, util=87.75%
>>
>> The "Disk stats (read/write)" show what went out of relevant block
>> layer devices while your job was running and is mentioned in the fio
>> documentation (e.g.
>> https://fio.readthedocs.io/en/latest/fio_doc.html#interpreting-the-output
>> ). Be aware that the stats may also cover I/O not generated by your
>> job (e.g. I/O done by other processes that was sent to disk,
>> filesystem metadata I/O etc.) but it's as close as you're going to get
>> without reaching deeper into the kernel. This is why it's important to
>> use as quiet a system as possible to avoid interference/contamination.
>>
>> Looking closer, something interesting in your results is that fio says
>> it issued 51200 read I/Os but the device mapper device (dm-6, which
>> your filesystem is presumably on) apparently only saw 48310 reads AND
>> saw 2 writes (which suggests something else was happening on that
>> device as your fio job wouldn't have been issuing writes at that
>> point). Curiously the underlying block device to your device mapper
>> device (sda) saw 51200 reads and 40 writes. That might be because the
>> time the underlying block stats were read was ever so slightly later
>> than that of when the device mapper stats were read or because some
>> other device mapper device was also sending I/O to sda (also note some
>> merging of writes also took place but no merging of reads did).
>>
>> --
>> Sitsofe | http://sucs.org/~sits/

