* fio replay
@ 2014-02-26 13:44 Lamkin, Eric
2014-02-26 14:13 ` David Nellans
0 siblings, 1 reply; 19+ messages in thread
From: Lamkin, Eric @ 2014-02-26 13:44 UTC (permalink / raw)
To: fio
Is there a version of fio where replay works with a Linux blktrace?
Thanks,
Eric
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-02-26 13:44 fio replay Lamkin, Eric
@ 2014-02-26 14:13 ` David Nellans
2014-02-26 14:23 ` Lamkin, Eric
0 siblings, 1 reply; 19+ messages in thread
From: David Nellans @ 2014-02-26 14:13 UTC (permalink / raw)
To: Lamkin, Eric; +Cc: fio
> On Feb 26, 2014, at 7:44 AM, "Lamkin, Eric" <Eric.Lamkin@lsi.com> wrote:
>
> Is there a version of fio where replay works with a Linux blktrace?
>
Anything within the last few years should play them back just fine. Is it broken in a specific version you're trying?
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-02-26 14:13 ` David Nellans
@ 2014-02-26 14:23 ` Lamkin, Eric
2014-02-26 15:30 ` Sitsofe Wheeler
2014-02-26 17:49 ` Jens Axboe
0 siblings, 2 replies; 19+ messages in thread
From: Lamkin, Eric @ 2014-02-26 14:23 UTC (permalink / raw)
To: David Nellans; +Cc: fio
David,
Thanks for responding.
I sent the message below out last week but didn't get a reply, so I thought I'd try a simpler 'what works' tactic this morning.
As seen below, I'm using 2.0.10.
Eric
--------------
Deleted formatting.
From: Lamkin, Eric
Sent: Thursday, February 20, 2014 9:36 AM
To: 'fio@vger.kernel.org'
Subject: fio replay
I'm evaluating fio for replay of Linux blktrace workloads.
So far the whole workload is not being replayed.
I'm wondering if I can get some pointers on what I may be doing wrong.
Here are some details:
Methodology:
. Capture the IO workload with blktrace
  - Mount the debug file system on the host with the IO workload to capture
      mount -t debugfs debugfs /sys/kernel/debug
  - Start blktrace for all devices of interest
      blktrace -d /dev/sdb /dev/sdca
  - Stop blktrace once the IO workload of interest is complete (Ctrl-C)
  - Convert the blktrace file set to a single binary with blkparse
      blkparse sdb sdca -O -d blkparseall.bin
  - Create a blkparse .bin per device (may be required for device mapping)
      blkparse sdb -O -d blkparsesdb.bin
      blkparse sdca -O -d blkparsesdca.bin
      ...
  - Capture the partition file from the same host
      cat /proc/partitions > partitions.txt
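The capture steps above can be sketched as a single script; the device names and output filenames follow the example and are not prescriptive:

```shell
#!/bin/sh
# Sketch of the blktrace capture methodology above; run as root.
# /dev/sdb and /dev/sdca are the example's devices - substitute your own.

# Mount debugfs so blktrace can reach the kernel trace buffers.
mount -t debugfs debugfs /sys/kernel/debug

# Trace all devices of interest; stop with Ctrl-C when the workload ends.
blktrace -d /dev/sdb /dev/sdca

# Merge the per-CPU trace files into a single binary fio can replay.
blkparse sdb sdca -O -d blkparseall.bin

# Record the partition layout for comparison with the replay host.
cat /proc/partitions > partitions.txt
```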
Observation:
. Something is preventing the entire blktrace from replaying.
. During replay a time-remaining counter begins decrementing from 2+ minutes.
. At some point the replay concludes with time remaining on the clock.
. I do not know if the following message from fio is a clue to why the full replay does not occur:
    fio: file hash not empty on exit
The blktrace was captured on the source host for /dev/sdc and /dev/sdd.
Replay occurs on the replay host to /dev/sdc and /dev/sdd.
Partitions on source host.
[root@x3650M4 Profiler]# cat /home/Documents/TraceFiles/MR15couchbase/partitions1MR15.txt
major minor #blocks name
8 32 1999699968 sdc
8 48 1142685696 sdd
8 16 488386584 sdb
8 0 488386584 sda
8 1 512000 sda1
8 2 487873536 sda2
253 0 52428800 dm-0
253 1 16482304 dm-1
253 2 418959360 dm-2
Blktrace on the source host.
[root@Megaraid15 tracefiles]# blktrace -d /dev/sdc /dev/sdd
^C=== sdc ===
CPU 0: 4755036 events, 222893 KiB data
CPU 1: 4597591 events, 215513 KiB data
CPU 2: 742267 events, 34794 KiB data
CPU 3: 821213 events, 38495 KiB data
CPU 4: 398054 events, 18659 KiB data
CPU 5: 468293 events, 21952 KiB data
CPU 6: 211068 events, 9894 KiB data
CPU 7: 259636 events, 12171 KiB data
CPU 8: 118142 events, 5538 KiB data
CPU 9: 152712 events, 7159 KiB data
CPU 10: 43662 events, 2047 KiB data
CPU 11: 60687 events, 2845 KiB data
CPU 12: 34881 events, 1636 KiB data
CPU 13: 46452 events, 2178 KiB data
CPU 14: 23514 events, 1103 KiB data
CPU 15: 26782 events, 1256 KiB data
CPU 16: 2978005 events, 139594 KiB data
CPU 17: 3138535 events, 147119 KiB data
CPU 18: 70275 events, 3295 KiB data
CPU 19: 105291 events, 4936 KiB data
CPU 20: 54002 events, 2532 KiB data
CPU 21: 50163 events, 2352 KiB data
CPU 22: 38882 events, 1823 KiB data
CPU 23: 31927 events, 1497 KiB data
CPU 24: 19594 events, 919 KiB data
CPU 25: 23753 events, 1114 KiB data
CPU 26: 10676 events, 501 KiB data
CPU 27: 12503 events, 587 KiB data
CPU 28: 1632 events, 77 KiB data
CPU 29: 9099 events, 427 KiB data
CPU 30: 5472 events, 257 KiB data
CPU 31: 3983 events, 187 KiB data
Total: 19313782 events (dropped 30207), 905335 KiB data
=== sdd ===
CPU 0: 1 events, 1 KiB data
CPU 1: 0 events, 0 KiB data
CPU 2: 0 events, 0 KiB data
CPU 3: 0 events, 0 KiB data
CPU 4: 0 events, 0 KiB data
CPU 5: 0 events, 0 KiB data
CPU 6: 0 events, 0 KiB data
CPU 7: 0 events, 0 KiB data
CPU 8: 0 events, 0 KiB data
CPU 9: 0 events, 0 KiB data
CPU 10: 0 events, 0 KiB data
CPU 11: 0 events, 0 KiB data
CPU 12: 0 events, 0 KiB data
CPU 13: 0 events, 0 KiB data
CPU 14: 0 events, 0 KiB data
CPU 15: 0 events, 0 KiB data
CPU 16: 0 events, 0 KiB data
CPU 17: 0 events, 0 KiB data
CPU 18: 0 events, 0 KiB data
CPU 19: 0 events, 0 KiB data
CPU 20: 0 events, 0 KiB data
CPU 21: 0 events, 0 KiB data
CPU 22: 0 events, 0 KiB data
CPU 23: 0 events, 0 KiB data
CPU 24: 0 events, 0 KiB data
CPU 25: 0 events, 0 KiB data
CPU 26: 0 events, 0 KiB data
CPU 27: 0 events, 0 KiB data
CPU 28: 0 events, 0 KiB data
CPU 29: 0 events, 0 KiB data
CPU 30: 0 events, 0 KiB data
CPU 31: 0 events, 0 KiB data
Total: 1 events (dropped 0), 1 KiB data
You have 30207 ( 0.2%) dropped events
Consider using a larger buffer size (-b) and/or more buffers (-n)
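Following that suggestion, the per-CPU buffer size and count can be raised on the capture command line; a sketch (the values are illustrative, not tuned):

```shell
# -b sets the per-CPU buffer size in KiB, -n the number of buffers;
# larger values reduce the chance of dropped events under heavy IO.
blktrace -b 16384 -n 8 -d /dev/sdc /dev/sdd
```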
Partitions on replay host.
[root@x3650M4 Profiler]# cat /proc/partitions
major minor #blocks name
8 0 291991552 sda
8 1 204800 sda1
8 2 512000 sda2
8 3 291272704 sda3
253 0 52428800 dm-0
253 1 8175616 dm-1
253 2 230666240 dm-2
8 16 47185920 sdb
8 32 1999699968 sdc
8 48 1142685696 sdd
Since the partitions match for the devices traced, no replay_redirect is used.
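Had the device names differed between hosts, replay_redirect could remap the trace onto another device; a sketch (the target device /dev/sdx is hypothetical):

```shell
# Replay the merged trace, sending all IO to /dev/sdx regardless of
# which device each request was originally traced on.
fio --name=rp1 --ioengine=libaio --iodepth=16 \
    --read_iolog=bprun1all.bin --replay_redirect=/dev/sdx
```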
Results w/o replay_no_stall.
[root@x3650M4 MR15couchbase]# fio --name=rp1 --ioengine=libaio --iodepth=16 --read_iolog=bprun1all.bin
rp1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.0.10
Starting 1 process
got timestamp notify: 4000001, 0
got timestamp notify: 4000001, 0
Jobs: 1 (f=1): [W] [27.1% done] [0K/70636K/0K /s] [0 /17.7K/0 iops] [eta 01m:02s]
rp1: (groupid=0, jobs=1): err= 0: pid=14710: Fri Jan 31 15:36:29 2014
write: io=1535.6MB, bw=73954KB/s, iops=18488 , runt= 21262msec
slat (usec): min=4 , max=1156 , avg=13.89, stdev= 6.02
clat (usec): min=4 , max=387196 , avg=520.86, stdev=2425.91
lat (usec): min=49 , max=387211 , avg=535.31, stdev=2425.96
clat percentiles (usec):
| 1.00th=[ 43], 5.00th=[ 60], 10.00th=[ 92], 20.00th=[ 143],
| 30.00th=[ 183], 40.00th=[ 278], 50.00th=[ 322], 60.00th=[ 588],
| 70.00th=[ 740], 80.00th=[ 812], 90.00th=[ 948], 95.00th=[ 1272],
| 99.00th=[ 1800], 99.50th=[ 2096], 99.90th=[ 2800], 99.95th=[ 3664],
| 99.99th=[21632]
lat (usec) : 10=0.01%, 20=0.01%, 50=3.43%, 100=7.59%, 250=25.49%
lat (usec) : 500=16.89%, 750=17.87%, 1000=19.45%
lat (msec) : 2=8.63%, 4=0.60%, 10=0.03%, 20=0.01%, 50=0.01%
lat (msec) : 500=0.01%
cpu : usr=6.21%, sys=27.22%, ctx=63452, majf=0, minf=18
IO depths : 1=6.1%, 2=12.0%, 4=24.0%, 8=19.2%, 16=38.6%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=86.6%, 8=2.6%, 16=10.8%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=393102/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=1535.6MB, aggrb=73953KB/s, minb=73953KB/s, maxb=73953KB/s, mint=21262msec, maxt=21262msec
fio: file hash not empty on exit
Blktrace of replay.
[root@x3650M4 cbreplay1]# blktrace /dev/sdc /dev/sdd
^C=== sdc ===
CPU 0: 5952393 events, 279019 KiB data
CPU 1: 11609 events, 545 KiB data
CPU 2: 10868 events, 510 KiB data
CPU 3: 4514 events, 212 KiB data
CPU 4: 10245 events, 481 KiB data
CPU 5: 220 events, 11 KiB data
CPU 6: 129295 events, 6061 KiB data
CPU 7: 5353 events, 251 KiB data
CPU 8: 6251 events, 294 KiB data
CPU 9: 680 events, 32 KiB data
CPU 10: 22 events, 2 KiB data
CPU 11: 0 events, 0 KiB data
Total: 6131450 events (dropped 0), 287412 KiB data
=== sdd ===
CPU 0: 0 events, 0 KiB data
CPU 1: 0 events, 0 KiB data
CPU 2: 1 events, 1 KiB data
CPU 3: 0 events, 0 KiB data
CPU 4: 0 events, 0 KiB data
CPU 5: 0 events, 0 KiB data
CPU 6: 0 events, 0 KiB data
CPU 7: 0 events, 0 KiB data
CPU 8: 0 events, 0 KiB data
CPU 9: 0 events, 0 KiB data
CPU 10: 0 events, 0 KiB data
CPU 11: 0 events, 0 KiB data
Total: 1 events (dropped 0), 1 KiB data
Note: the replay host has fewer processors than the original host.
sdc 19,313,782 original events, 6,131,450 replay events
sdc 905,335 KiB data, 287,412 KiB replay data
Using replay_no_stall, the replay ran more quickly but still did not complete.
[root@x3650M4 MR15couchbase]# fio --name=rp1 --ioengine=libaio --iodepth=16 --replay_no_stall=1 --read_iolog=bprun1all.bin
rp1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.0.10
Starting 1 process
got timestamp notify: 4000001, 0
got timestamp notify: 4000001, 0
Jobs: 1 (f=1): [W] [41.2% done] [0K/277.2M/0K /s] [0 /70.1K/0 iops] [eta 00m:20s]
rp1: (groupid=0, jobs=1): err= 0: pid=5680: Wed Feb 5 09:23:14 2014
write: io=1535.6MB, bw=245459KB/s, iops=61364 , runt= 6406msec
slat (usec): min=4 , max=4028 , avg= 9.47, stdev=11.76
clat (usec): min=2 , max=4234 , avg=156.59, stdev=73.09
lat (usec): min=41 , max=4338 , avg=166.32, stdev=74.21
clat percentiles (usec):
| 1.00th=[ 59], 5.00th=[ 79], 10.00th=[ 93], 20.00th=[ 112],
| 30.00th=[ 129], 40.00th=[ 141], 50.00th=[ 155], 60.00th=[ 167],
| 70.00th=[ 175], 80.00th=[ 187], 90.00th=[ 219], 95.00th=[ 249],
| 99.00th=[ 302], 99.50th=[ 338], 99.90th=[ 836], 99.95th=[ 1688],
| 99.99th=[ 1992]
lat (usec) : 4=0.01%, 10=0.01%, 50=0.21%, 100=12.80%, 250=82.10%
lat (usec) : 500=4.69%, 750=0.08%, 1000=0.02%
lat (msec) : 2=0.08%, 4=0.01%, 10=0.01%
cpu : usr=11.25%, sys=63.68%, ctx=87243, majf=1, minf=19
IO depths : 1=6.1%, 2=12.0%, 4=24.0%, 8=19.2%, 16=38.6%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=86.6%, 8=2.6%, 16=10.8%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=393102/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=1535.6MB, aggrb=245458KB/s, minb=245458KB/s, maxb=245458KB/s, mint=6406msec, maxt=6406msec
fio: file hash not empty on exit
Blktrace of replay with replay_no_stall.
[root@x3650M4 cbreplay2]# blktrace -d /dev/sdc /dev/sdd
^C=== sdc ===
CPU 0: 5639178 events, 264337 KiB data
CPU 1: 131297 events, 6155 KiB data
CPU 2: 37969 events, 1780 KiB data
CPU 3: 17794 events, 835 KiB data
CPU 4: 0 events, 0 KiB data
CPU 5: 3364 events, 158 KiB data
CPU 6: 151797 events, 7116 KiB data
CPU 7: 9839 events, 462 KiB data
CPU 8: 8902 events, 418 KiB data
CPU 9: 11832 events, 555 KiB data
CPU 10: 9135 events, 429 KiB data
CPU 11: 0 events, 0 KiB data
Total: 6021107 events (dropped 0), 282240 KiB data
=== sdd ===
CPU 0: 0 events, 0 KiB data
CPU 1: 0 events, 0 KiB data
CPU 2: 0 events, 0 KiB data
CPU 3: 0 events, 0 KiB data
CPU 4: 0 events, 0 KiB data
CPU 5: 0 events, 0 KiB data
CPU 6: 0 events, 0 KiB data
CPU 7: 0 events, 0 KiB data
CPU 8: 0 events, 0 KiB data
CPU 9: 1 events, 1 KiB data
CPU 10: 0 events, 0 KiB data
CPU 11: 0 events, 0 KiB data
Total: 1 events (dropped 0), 1 KiB data
Original to replay blktrace analysis comparison (based on completion operations)
Original                   Replay w/o no_stall        Replay w/ no_stall
132.5 seconds              23.2 seconds               6.66 seconds
2 CPUs (0,1)               1 CPU (0)                  1 CPU (0)
100% write                 99.98% write               99.98% write
3,842,340,968 blk range    3,999,399,928 blk range    3,999,399,928 blk range
817,103 IOs                393,185 IOs                393,185 IOs
11,863,200 blocks          3,145,480 blocks           3,145,480 blocks
248,142 seq runs           78,502 seq runs            78,511 seq runs
10,661,656 seq blocks      2,972,672 seq blocks       2,972,312 seq blocks
83.2% seq IOs              94.5% seq IOs              94.5% seq IOs
89.9% seq bytes            94.5% seq bytes            94.5% seq bytes
Note: fio replay uses blktrace queue operations rather than completion operations.
The blktrace file filtered for queue operations had to be split to be analyzed in Excel.
Original to replay blktrace analysis comparison (based on queue operations)
Original                   Replay w/o no_stall        Replay w/ no_stall
132.5 seconds              23.2 seconds               6.66 seconds
32 CPUs (0-31)             11 CPUs (0-10)             10 CPUs (0-3,5-10)
100% write                 99.98% write               99.98% write
3,842,340,968 blk range    3,999,399,928 blk range    3,999,399,928 blk range
1,482,813 operations       393,185 operations         393,185 operations
11,862,504 blocks          3,145,480 blocks           3,145,480 blocks
254,458 seq runs           78,508 seq runs            78,508 seq runs
10,818,640 seq blocks      2,972,816 seq blocks       2,972,816 seq blocks
91.2% seq IOs              94.5% seq IOs              94.5% seq IOs
91.2% seq bytes            94.5% seq bytes            94.5% seq bytes
Changing iodepth to 12 to match the number of cores on the replay host did not resolve the issue.
[root@x3650M4 MR15couchbase]# fio --name=rp1 --ioengine=libaio --iodepth=12 --read_iolog=bprun1all.bin
rp1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=12
fio-2.0.10
Starting 1 process
got timestamp notify: 4000001, 0
got timestamp notify: 4000001, 0
Jobs: 1 (f=1): [W] [31.0% done] [0K/85676K/0K /s] [0 /21.5K/0 iops] [eta 01m:09s]
rp1: (groupid=0, jobs=1): err= 0: pid=27491: Tue Feb 11 09:19:56 2014
write: io=1535.6MB, bw=67315KB/s, iops=16828 , runt= 23359msec
slat (usec): min=5 , max=4580 , avg=19.69, stdev=18.13
clat (usec): min=4 , max=18051 , avg=477.41, stdev=429.40
lat (usec): min=43 , max=18063 , avg=497.81, stdev=430.40
clat percentiles (usec):
| 1.00th=[ 45], 5.00th=[ 50], 10.00th=[ 92], 20.00th=[ 151],
| 30.00th=[ 195], 40.00th=[ 278], 50.00th=[ 294], 60.00th=[ 510],
| 70.00th=[ 668], 80.00th=[ 796], 90.00th=[ 932], 95.00th=[ 1288],
| 99.00th=[ 1768], 99.50th=[ 2096], 99.90th=[ 2832], 99.95th=[ 3472],
| 99.99th=[ 7456]
lat (usec) : 10=0.01%, 20=0.01%, 50=4.72%, 100=7.40%, 250=25.11%
lat (usec) : 500=22.40%, 750=15.50%, 1000=15.72%
lat (msec) : 2=8.53%, 4=0.57%, 10=0.03%, 20=0.01%
cpu : usr=5.68%, sys=35.42%, ctx=60638, majf=1, minf=19
IO depths : 1=6.1%, 2=12.0%, 4=24.0%, 8=57.9%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=88.1%, 8=2.3%, 16=9.6%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=393102/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=1535.6MB, aggrb=67314KB/s, minb=67314KB/s, maxb=67314KB/s, mint=23359msec, maxt=23359msec
fio: file hash not empty on exit
Eric
--
To unsubscribe from this list: send the line "unsubscribe fio" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
-----Original Message-----
From: David Nellans [mailto:david@nellans.org]
Sent: Wednesday, February 26, 2014 8:13 AM
To: Lamkin, Eric
Cc: fio@vger.kernel.org
Subject: Re: fio replay
> On Feb 26, 2014, at 7:44 AM, "Lamkin, Eric" <Eric.Lamkin@lsi.com> wrote:
>
> Is there a version of fio where replay works with a Linux blktrace?
>
Anything within the last few years should play them back just fine. Is it broken in a specific version you're trying?
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-02-26 14:23 ` Lamkin, Eric
@ 2014-02-26 15:30 ` Sitsofe Wheeler
2014-02-26 15:48 ` Lamkin, Eric
2014-02-26 17:49 ` Jens Axboe
1 sibling, 1 reply; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-02-26 15:30 UTC (permalink / raw)
To: Lamkin, Eric; +Cc: David Nellans, fio
Hi,
I've been using blktrace with fio by doing the following on Fedora 20
with a git version of fio (fio-2.1.5-16-g81fa):
blktrace -d /dev/sdd
blkparse sdd -O -d sdd.dump
fio --read_iolog=sdd.dump --ioengine=libaio --name=replay
Using iowatcher I can see that fio is hitting the same blocks that the
blktrace recorded.
It is also worth noting that certain options like rate, rate_iops and bs
do not seem to affect replay jobs. Additionally, fio does not seem able
to replay a job with the same timings that it was recorded with (it
also seems to do so faster) whereas btreplay has no issue doing this
(even reproducing similar depths to the original job).
The one minor issue that I see is this:
Run status group 0 (all jobs):
READ: io=40549MB, aggrb=30919KB/s, minb=30919KB/s, maxb=30919KB/s, mint=1342918msec, maxt=1342918msec
WRITE: io=40562MB, aggrb=30929KB/s, minb=30929KB/s, maxb=30929KB/s, mint=1342918msec, maxt=1342918msec
fio: file hash not empty on exit
Why is the file hash not empty on exit? This happens no matter what I
do.
On Wed, Feb 26, 2014 at 02:23:27PM +0000, Lamkin, Eric wrote:
>
> Thanks for responding.
> I sent the below out last week but didn't get a reply so thought I'd
> try a simpler 'what works' tactic this morning. As seen in the below
> I'm using 2.0.10.
>
> > On Feb 26, 2014, at 7:44 AM, "Lamkin, Eric" <Eric.Lamkin@lsi.com> wrote:
> >
> > Is there a version of fio where replay works with a Linux blktrace?
> >
> anything within the last few years should play them back just fine, is
> it broken in a specific version you're trying?
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-02-26 15:30 ` Sitsofe Wheeler
@ 2014-02-26 15:48 ` Lamkin, Eric
2014-02-26 16:15 ` Sitsofe Wheeler
0 siblings, 1 reply; 19+ messages in thread
From: Lamkin, Eric @ 2014-02-26 15:48 UTC (permalink / raw)
To: Sitsofe Wheeler; +Cc: David Nellans, fio
Sitsofe,
Good to know someone is getting replay to work.
Were you replaying on the same host where the blktrace was recorded? I am not.
I'll try with no --iodepth option. That is the only difference I see between your methodology and mine.
I would also like to know what the message "fio: file hash not empty on exit" means.
What is btreplay? Does it allow device mapping if the traced devices have different names on the replay host?
Will btreplay run unconstrained by the original timings?
Thanks,
Eric
-----Original Message-----
From: Sitsofe Wheeler [mailto:sitsofe@yahoo.com]
Sent: Wednesday, February 26, 2014 9:30 AM
To: Lamkin, Eric
Cc: David Nellans; fio@vger.kernel.org
Subject: Re: fio replay
Hi,
I've been using blktrace with fio by doing the following on Fedora 20 with a git version of fio (fio-2.1.5-16-g81fa):
blktrace -d /dev/sdd
blkparse sdd -O -d sdd.dump
fio --read_iolog=sdd.dump --ioengine=libaio --name=replay
Using iowatcher I can see that fio is hitting the same blocks that the blktrace recorded.
It is also worth noting that certain options like rate, rate_iops and bs do not seem to affect replay jobs. Additionally, fio does not seem able to replay a job with the same timings that it was recorded with (it also seems to do so faster) whereas btreplay has no issue doing this (even reproducing similar depths to the original job).
The one minor issue that I see is this:
Run status group 0 (all jobs):
READ: io=40549MB, aggrb=30919KB/s, minb=30919KB/s, maxb=30919KB/s, mint=1342918msec, maxt=1342918msec
WRITE: io=40562MB, aggrb=30929KB/s, minb=30929KB/s, maxb=30929KB/s, mint=1342918msec, maxt=1342918msec
fio: file hash not empty on exit
Why is the file hash not empty on exit? This happens no matter what I do.
On Wed, Feb 26, 2014 at 02:23:27PM +0000, Lamkin, Eric wrote:
>
> Thanks for responding.
> I sent the below out last week but didn't get a reply so thought I'd
> try a simpler 'what works' tactic this morning. As seen in the below
> I'm using 2.0.10.
>
> > On Feb 26, 2014, at 7:44 AM, "Lamkin, Eric" <Eric.Lamkin@lsi.com> wrote:
> >
> > Is there a version of fio where replay works with a Linux blktrace?
> >
> anything within the last few years should play them back just fine, is
> it broken in a specific version you're trying?
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-02-26 15:48 ` Lamkin, Eric
@ 2014-02-26 16:15 ` Sitsofe Wheeler
0 siblings, 0 replies; 19+ messages in thread
From: Sitsofe Wheeler @ 2014-02-26 16:15 UTC (permalink / raw)
To: Lamkin, Eric; +Cc: David Nellans, fio
On Wed, Feb 26, 2014 at 03:48:54PM +0000, Lamkin, Eric wrote:
>
> Were you replaying on the same host where the blktrace was recorded?
Yes I was.
> I am not.
So long as the target block device is big enough I would hope it would
not make a difference, provided you were using replay_redirect. It
definitely works for me using different devices.
> I'll try with no --iodepth command. That is all I see different
> between your methodology and mine.
That should not make a difference other than making things slower. The
real command I'm using is
fio --read_iolog=sdd.dump --ioengine=libaio --iodepth=10 --direct=1 --thinktime=15000 --thinktime_blocks=16 --replay_no_stall=1 --replay_redirect=/dev/sdf --name=replay
> What is btreplay? Does it allow device mapping if the traced devices
> have different names on the replay host? Will btreplay run
> unconstrained by the original timings?
This is a bit off topic but I'll take a stab at answering anyway.
btreplay "recreate[s] IO loads recorded by blktrace"
(http://linux.die.net/man/8/btreplay). Usage is as follows:
blktrace -d /dev/sdd
btrecord sdd
btreplay sdd
Device mappings are supported using the -M/--map-devs option. It can be
made to go as fast as possible by using the -N/--no-stalls option.
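Putting those options together, a device-mapped full-speed replay might look like the sketch below; the map-file format (one traced-device/replay-device pair per line) follows the btreplay man page, and the device names are illustrative:

```shell
# Record replayable data from the existing blktrace files for sdd.
btrecord sdd

# Each line of the map file: <traced device> <replay device>
cat > devmap.txt <<EOF
/dev/sdd /dev/sdf
EOF

# -M applies the device map; -N drops inter-IO stalls (as fast as possible).
btreplay -M devmap.txt -N sdd
```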
You may also want to look at blkreplay by Thomas Schoebel-Theuer of 1&1.
--
Sitsofe | http://sucs.org/~sits/
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-02-26 14:23 ` Lamkin, Eric
2014-02-26 15:30 ` Sitsofe Wheeler
@ 2014-02-26 17:49 ` Jens Axboe
2014-02-26 21:44 ` Jens Axboe
1 sibling, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2014-02-26 17:49 UTC (permalink / raw)
To: Lamkin, Eric, David Nellans; +Cc: fio
On 2014-02-26 06:23, Lamkin, Eric wrote:
> David,
>
> Thanks for responding.
> I sent the below out last week but didn't get a reply so thought I'd try a simpler 'what works' tactic this morning.
> As seen in the below I'm using 2.0.10.
I just ran a quick test here, current -git seems to work fine for me. It
might be a bit finicky in that the error handling isn't stellar, which
I'm thinking could be your issue since you have dropped events in your
blktraces. You need to eliminate any dropped events - either log to
tmpfs or send the traces over the network, that would likely do it.
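One way to follow that advice, assuming the host has enough RAM to hold the trace (the mount point and size here are illustrative), is to point blktrace's output directory at a tmpfs mount:

```shell
# Keep trace output off the traced storage path by writing it to RAM.
mkdir -p /mnt/traces
mount -t tmpfs -o size=2g tmpfs /mnt/traces

# -D sets the directory blktrace writes its per-CPU trace files into.
blktrace -D /mnt/traces -d /dev/sdc
```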
If that doesn't solve it for you, please boil this down to a smaller
test case. Start with one device. Does that work? Up the complexity
until you see an issue, then I can likely help you get this fixed if I
get the problematic trace.
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-02-26 17:49 ` Jens Axboe
@ 2014-02-26 21:44 ` Jens Axboe
2014-02-26 21:55 ` Lamkin, Eric
0 siblings, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2014-02-26 21:44 UTC (permalink / raw)
To: Lamkin, Eric, David Nellans; +Cc: fio
On 2014-02-26 09:49, Jens Axboe wrote:
> On 2014-02-26 06:23, Lamkin, Eric wrote:
>> David,
>>
>> Thanks for responding.
>> I sent the below out last week but didn't get a reply so thought I'd
>> try a simpler 'what works' tactic this morning.
>> As seen in the below I'm using 2.0.10.
>
> I just ran a quick test here, current -git seems to work fine for me. It
> might be a bit finicky in that the error handling isn't stellar, which
> I'm thinking could be your issue since you have dropped events in your
> blktraces. You need to eliminate any dropped events - either log to
> tmpfs or send the traces over the network, that would likely do it.
>
> If that doesn't solve it for you, please boil this down to a smaller
> test case. Start with one device. Does that work? Up the complexity
> until you see an issue, then I can likely help you get this fixed if I
> get the problematic trace.
Can you try current -git? I made some improvements/fixes to the multiple
device handling.
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-02-26 21:44 ` Jens Axboe
@ 2014-02-26 21:55 ` Lamkin, Eric
2014-02-27 18:48 ` Jens Axboe
2014-03-03 13:25 ` Lamkin, Eric
0 siblings, 2 replies; 19+ messages in thread
From: Lamkin, Eric @ 2014-02-26 21:55 UTC (permalink / raw)
To: Jens Axboe, David Nellans; +Cc: fio
Jens,
Sorry I don't know how to compile.
Hope that doesn't mean no one will ever replay to me again.
I'm very appreciative of the suggestions you've made and will try capturing a source trace where I don't get any dropped event messages. I'll also capture just one device.
My blktrace log files go to the OS storage device whereas the devices traced are on a different storage path.
In this first experiment I didn't need any device mapping, as I was able to arrange storage of sufficient capacity on the replay host with matching device names.
Eric
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Wednesday, February 26, 2014 3:45 PM
To: Lamkin, Eric; David Nellans
Cc: fio@vger.kernel.org
Subject: Re: fio replay
On 2014-02-26 09:49, Jens Axboe wrote:
> On 2014-02-26 06:23, Lamkin, Eric wrote:
>> David,
>>
>> Thanks for responding.
>> I sent the below out last week but didn't get a reply so thought I'd
>> try a simpler 'what works' tactic this morning.
>> As seen in the below I'm using 2.0.10.
>
> I just ran a quick test here, current -git seems to work fine for me.
> It might be a bit finicky in that the error handling isn't stellar,
> which I'm thinking could be your issue since you have dropped events
> in your blktraces. You need to eliminate any dropped events - either
> log to tmpfs or send the traces over the network, that would likely do it.
>
> If that doesn't solve it for you, please boil this down to a smaller
> test case. Start with one device. Does that work? Up the complexity
> until you see an issue, then I can likely help you get this fixed if I
> get the problematic trace.
Can you try current -git? I made some improvements/fixes to the multiple device handling.
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-02-26 21:55 ` Lamkin, Eric
@ 2014-02-27 18:48 ` Jens Axboe
2014-02-27 20:12 ` Lamkin, Eric
2014-03-03 13:25 ` Lamkin, Eric
1 sibling, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2014-02-27 18:48 UTC (permalink / raw)
To: Lamkin, Eric, David Nellans; +Cc: fio
On 2014-02-26 13:55, Lamkin, Eric wrote:
> Jens,
>
> Sorry I don't know how to compile.
> Hope that doesn't mean no one will ever replay to me again.
It's pretty easy, basically boils down to doing:
$ git clone git://git.kernel.dk/fio
$ cd fio
$ make
and then run ./fio. What OS are you using?
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-02-27 18:48 ` Jens Axboe
@ 2014-02-27 20:12 ` Lamkin, Eric
0 siblings, 0 replies; 19+ messages in thread
From: Lamkin, Eric @ 2014-02-27 20:12 UTC (permalink / raw)
To: Jens Axboe, David Nellans; +Cc: fio
Jens,
I'm very appreciative that you and others are giving this time and consideration.
For the replay host the OS is RHEL6.3. (The originating host is RHEL6.4.)
[root@x3650M4 cache_stats]# uname -r
2.6.32-279.el6.x86_64
I've completed a new trace capture and replay using a single device.
The originating block trace had no dropped events this time, and I used fio 2.1.4.
The replay once again stopped prematurely.
Here is the synopsis based on analysis of the block trace captured on the originating host to the block trace captured on the replay host.
I have another email to this thread with all the details that I will send upon request.
I would postulate since others have had success with replay that the issue is with something I am doing vs a fio issue.
I just need help identifying what I need to do differently. That is to say, running a different version of fio will likely produce the same issue unless there are replay fixes that map to what has been observed here. I imagine that discovering my issue will involve someone well versed in replay taking time to review my procedure and results (in the detailed email not yet sent).
Comparative analysis, replay relative to original:
Capture time                     0.88%
% reads                         +1.16%
% writes                        -1.16%
Block range                      104%
Queue operations                 0.78%
Blocks queued                    0.78%
% sequential queued              108%
Max blocks per sequential run    31.4%
Completion operations            1.42%
Blocks completed                 0.78%
% sequential completions         110%
Eric
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Thursday, February 27, 2014 12:49 PM
To: Lamkin, Eric; David Nellans
Cc: fio@vger.kernel.org
Subject: Re: fio replay
On 2014-02-26 13:55, Lamkin, Eric wrote:
> Jens,
>
> Sorry I don't know how to compile.
> Hope that doesn't mean no one will ever replay to me again.
It's pretty easy, basically boils down to doing:
$ git clone git://git.kernel.dk/fio
$ cd fio
$ make
and then run ./fio. What OS are you using?
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-02-26 21:55 ` Lamkin, Eric
2014-02-27 18:48 ` Jens Axboe
@ 2014-03-03 13:25 ` Lamkin, Eric
2014-03-04 3:12 ` Alireza Haghdoost
2014-03-06 15:55 ` Jens Axboe
1 sibling, 2 replies; 19+ messages in thread
From: Lamkin, Eric @ 2014-03-03 13:25 UTC (permalink / raw)
To: Jens Axboe, David Nellans; +Cc: fio
Here's my update with a blktrace captured on a single device with no dropped events.
Also switched to fio 2.1.4.
Capture a single device trace with no dropped events.
[root@Megaraid15 trace2]# blktrace -b 16384 -d /dev/sdc
^C^C
=== sdc ===
CPU 0: 3271662 events, 153360 KiB data
CPU 1: 2301711 events, 107893 KiB data
CPU 2: 476799 events, 22350 KiB data
CPU 3: 388404 events, 18207 KiB data
CPU 4: 347622 events, 16295 KiB data
CPU 5: 240275 events, 11263 KiB data
CPU 6: 177413 events, 8317 KiB data
CPU 7: 139892 events, 6558 KiB data
CPU 8: 76175 events, 3571 KiB data
CPU 9: 77181 events, 3618 KiB data
CPU 10: 63682 events, 2986 KiB data
CPU 11: 45967 events, 2155 KiB data
CPU 12: 35658 events, 1672 KiB data
CPU 13: 20237 events, 949 KiB data
CPU 14: 17849 events, 837 KiB data
CPU 15: 18200 events, 854 KiB data
CPU 16: 2510411 events, 117676 KiB data
CPU 17: 1426072 events, 66848 KiB data
CPU 18: 57914 events, 2715 KiB data
CPU 19: 37197 events, 1744 KiB data
CPU 20: 44023 events, 2064 KiB data
CPU 21: 27153 events, 1273 KiB data
CPU 22: 29866 events, 1400 KiB data
CPU 23: 20500 events, 961 KiB data
CPU 24: 21954 events, 1030 KiB data
CPU 25: 7327 events, 344 KiB data
CPU 26: 12458 events, 584 KiB data
CPU 27: 10061 events, 472 KiB data
CPU 28: 10061 events, 472 KiB data
CPU 29: 5627 events, 264 KiB data
CPU 30: 7835 events, 368 KiB data
CPU 31: 3345 events, 157 KiB data
Total: 11930531 events (dropped 0), 559245 KiB data
Create blkparse .bin file for replay.
[root@Megaraid15 trace2]# blkparse -i sdc -O -d bprun2all.bin
Input file sdc.blktrace.0 added
Input file sdc.blktrace.1 added
Input file sdc.blktrace.2 added
Input file sdc.blktrace.3 added
Input file sdc.blktrace.4 added
Input file sdc.blktrace.5 added
Input file sdc.blktrace.6 added
Input file sdc.blktrace.7 added
Input file sdc.blktrace.8 added
Input file sdc.blktrace.9 added
Input file sdc.blktrace.10 added
Input file sdc.blktrace.11 added
Input file sdc.blktrace.12 added
Input file sdc.blktrace.13 added
Input file sdc.blktrace.14 added
Input file sdc.blktrace.15 added
Input file sdc.blktrace.16 added
Input file sdc.blktrace.17 added
Input file sdc.blktrace.18 added
Input file sdc.blktrace.19 added
Input file sdc.blktrace.20 added
Input file sdc.blktrace.21 added
Input file sdc.blktrace.22 added
Input file sdc.blktrace.23 added
Input file sdc.blktrace.24 added
Input file sdc.blktrace.25 added
Input file sdc.blktrace.26 added
Input file sdc.blktrace.27 added
Input file sdc.blktrace.28 added
Input file sdc.blktrace.29 added
Input file sdc.blktrace.30 added
Input file sdc.blktrace.31 added
Create blkparse .txt file to analyze.
[root@Megaraid15 trace2]# blkparse -t sdc > bprun2all.txt
[root@Megaraid15 trace2]# ls -l
total 1118648
-rw-r--r--. 1 root root 572666191 Feb 27 08:11 bprun2all.bin
-rw-r--r--. 1 root root 705142381 Feb 27 08:44 bprun2all.txt
-rw-r--r--. 1 root root 157039820 Feb 27 07:59 sdc.blktrace.0
-rw-r--r--. 1 root root 110482130 Feb 27 07:59 sdc.blktrace.1
-rw-r--r--. 1 root root 3056749 Feb 27 07:59 sdc.blktrace.10
-rw-r--r--. 1 root root 2206453 Feb 27 07:59 sdc.blktrace.11
-rw-r--r--. 1 root root 1711631 Feb 27 07:59 sdc.blktrace.12
-rw-r--r--. 1 root root 971383 Feb 27 07:59 sdc.blktrace.13
-rw-r--r--. 1 root root 856781 Feb 27 07:59 sdc.blktrace.14
-rw-r--r--. 1 root root 873640 Feb 27 07:59 sdc.blktrace.15
-rw-r--r--. 1 root root 120499732 Feb 27 07:59 sdc.blktrace.16
-rw-r--r--. 1 root root 68451459 Feb 27 07:59 sdc.blktrace.17
-rw-r--r--. 1 root root 2779909 Feb 27 07:59 sdc.blktrace.18
-rw-r--r--. 1 root root 1785457 Feb 27 07:59 sdc.blktrace.19
-rw-r--r--. 1 root root 22886358 Feb 27 07:59 sdc.blktrace.2
-rw-r--r--. 1 root root 2113146 Feb 27 07:59 sdc.blktrace.20
-rw-r--r--. 1 root root 1303379 Feb 27 07:59 sdc.blktrace.21
-rw-r--r--. 1 root root 1433588 Feb 27 07:59 sdc.blktrace.22
-rw-r--r--. 1 root root 984035 Feb 27 07:59 sdc.blktrace.23
-rw-r--r--. 1 root root 1053802 Feb 27 07:59 sdc.blktrace.24
-rw-r--r--. 1 root root 351710 Feb 27 07:59 sdc.blktrace.25
-rw-r--r--. 1 root root 597990 Feb 27 07:59 sdc.blktrace.26
-rw-r--r--. 1 root root 482965 Feb 27 07:59 sdc.blktrace.27
-rw-r--r--. 1 root root 482930 Feb 27 07:59 sdc.blktrace.28
-rw-r--r--. 1 root root 270127 Feb 27 07:59 sdc.blktrace.29
-rw-r--r--. 1 root root 18643407 Feb 27 07:59 sdc.blktrace.3
-rw-r--r--. 1 root root 376114 Feb 27 07:59 sdc.blktrace.30
-rw-r--r--. 1 root root 160568 Feb 27 07:59 sdc.blktrace.31
-rw-r--r--. 1 root root 16685885 Feb 27 07:59 sdc.blktrace.4
-rw-r--r--. 1 root root 11533215 Feb 27 07:59 sdc.blktrace.5
-rw-r--r--. 1 root root 8515854 Feb 27 07:59 sdc.blktrace.6
-rw-r--r--. 1 root root 6714832 Feb 27 07:59 sdc.blktrace.7
-rw-r--r--. 1 root root 3656409 Feb 27 07:59 sdc.blktrace.8
-rw-r--r--. 1 root root 3704733 Feb 27 07:59 sdc.blktrace.9
Trace analysis.
Total capture time 92.93 seconds
% reads 0
% writes 100
Block range 3,842,507,008
Queue operations 910,789
CPUs used for queue operations 32 (all)
Blocks queued 7,286,312
Average blocks per queue operation 8
Sequential queue operations 144,034 88.4%
Sequential blocks queued 6,439,168 88.4%
Max blocks per sequential run 816
Completion operations 503,585
CPUs used for completion 2 (0,1)
Blocks completed 7,286,328
Average blocks per completion operation 14
Sequential completions 139,985 78.1%
Sequential block completions 6,327,912 86.8%
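The stats above were presumably produced by post-processing the blkparse text output; a rough awk sketch of the queue-operation counts, assuming blkparse's default text field layout (maj,min cpu seq time pid action rwbs sector + blocks proc) and using made-up sample lines, could look like:

```shell
# Sketch only: derive queue-operation stats like those above from a
# blkparse text file. Field 6 is the action (Q = queued), field 8 the
# start sector, field 10 the block count. The sample lines are invented.
cat > sample.txt <<'EOF'
8,32 0 1 0.000000000 100 Q W 1000 + 8 [fio]
8,32 0 2 0.000001000 100 Q W 1008 + 8 [fio]
8,32 0 3 0.000002000 100 Q W 5000 + 16 [fio]
EOF
awk '$6 == "Q" {
    q++                       # queue (Q) operations
    blk += $10                # blocks queued (the number after "+")
    if ($8 == prev_end) seq++ # sequential if it starts where the last ended
    prev_end = $8 + $10
} END {
    printf "Queue operations %d\n", q
    printf "Blocks queued %d\n", blk
    printf "Sequential queue operations %d\n", seq
}' sample.txt
```

Run against the real bprun2all.txt, the same pattern extended to C (complete) events would give the completion-side numbers.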
On a different host, run the fio replay.
fio replay completed in under 5 seconds. Now using version 2.1.4.
[root@x3650M4 Run2]# fio --name=rp2 --ioengine=libaio --iodepth=12 --read_iolog=bprun2all.bin
rp2: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=12
fio-2.1.4
Starting 1 process
got timestamp notify: 4000001, 0
Jobs: 1 (f=1): [W] [0.6% done] [0KB/2468KB/0KB /s] [0/617/0 iops] [eta 08m:11s]
rp2: (groupid=0, jobs=1): err= 0: pid=28318: Thu Feb 27 12:12:43 2014
write: io=28176KB, bw=41558KB/s, iops=10389, runt= 678msec
slat (usec): min=7, max=206, avg=22.02, stdev= 7.94
clat (usec): min=28, max=168903, avg=928.91, stdev=7207.25
lat (usec): min=47, max=168926, avg=951.59, stdev=7207.26
clat percentiles (usec):
| 1.00th=[ 44], 5.00th=[ 77], 10.00th=[ 149], 20.00th=[ 266],
| 30.00th=[ 282], 40.00th=[ 286], 50.00th=[ 294], 60.00th=[ 302],
| 70.00th=[ 338], 80.00th=[ 1208], 90.00th=[ 1352], 95.00th=[ 1496],
| 99.00th=[ 2800], 99.50th=[ 3728], 99.90th=[168960], 99.95th=[168960],
| 99.99th=[168960]
lat (usec) : 50=2.46%, 100=5.08%, 250=8.97%, 500=56.93%, 750=0.68%
lat (usec) : 1000=0.65%
lat (msec) : 2=22.66%, 4=2.13%, 10=0.13%, 100=0.16%, 250=0.16%
cpu : usr=3.10%, sys=23.93%, ctx=648, majf=0, minf=15
IO depths : 1=3.2%, 2=6.2%, 4=12.4%, 8=78.3%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=95.6%, 8=0.3%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=7044/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=28176KB, aggrb=41557KB/s, minb=41557KB/s, maxb=41557KB/s, mint=678msec, maxt=678msec
fio: file hash not empty on exit
blktrace of replay
[root@x3650M4 cbRun2replay1]# blktrace -d /dev/sdc
^C=== sdc ===
CPU 0: 100619 events, 4717 KiB data
CPU 1: 1731 events, 82 KiB data
CPU 2: 113 events, 6 KiB data
CPU 3: 0 events, 0 KiB data
CPU 4: 0 events, 0 KiB data
CPU 5: 0 events, 0 KiB data
CPU 6: 7047 events, 331 KiB data
CPU 7: 881 events, 42 KiB data
CPU 8: 0 events, 0 KiB data
CPU 9: 0 events, 0 KiB data
CPU 10: 0 events, 0 KiB data
CPU 11: 0 events, 0 KiB data
Total: 110391 events (dropped 0), 5175 KiB data
Create blkparse .txt file to analyze.
[root@x3650M4 cbRun2replay1]# blkparse -t sdc > bp-run2rp1.txt
[root@x3650M4 cbRun2replay1]# ls -l
total 11476
-rw-r--r--. 1 root root 6436652 Feb 27 12:34 bp-run2rp1.txt
-rw-r--r--. 1 root root 4829736 Feb 27 12:12 sdc.blktrace.0
-rw-r--r--. 1 root root 83097 Feb 27 12:12 sdc.blktrace.1
-rw-r--r--. 1 root root 0 Feb 27 12:12 sdc.blktrace.10
-rw-r--r--. 1 root root 0 Feb 27 12:12 sdc.blktrace.11
-rw-r--r--. 1 root root 5427 Feb 27 12:12 sdc.blktrace.2
-rw-r--r--. 1 root root 0 Feb 27 12:12 sdc.blktrace.3
-rw-r--r--. 1 root root 0 Feb 27 12:12 sdc.blktrace.4
-rw-r--r--. 1 root root 0 Feb 27 12:12 sdc.blktrace.5
-rw-r--r--. 1 root root 338288 Feb 27 12:12 sdc.blktrace.6
-rw-r--r--. 1 root root 42333 Feb 27 12:12 sdc.blktrace.7
-rw-r--r--. 1 root root 0 Feb 27 12:12 sdc.blktrace.8
-rw-r--r--. 1 root root 0 Feb 27 12:12 sdc.blktrace.9
Trace analysis.
Total capture time 0.816 seconds
% reads 1.16
% writes 98.8
Block range 3,999,399,928
Queue operations 7,127
CPUs used for queue operations 5 (0,1,2,6,7)
Blocks queued 57,016
Average blocks per queue operation 8
Sequential queue operations 599 95.8%
Sequential blocks queued 54,624 95.8%
Max blocks per sequential run 256
Completion operations 7,127
CPUs used for completion 1 (0)
Blocks completed 57,016
Average blocks per completion operation 8
Sequential completions 600 95.8%
Sequential block completions 54,608 95.8%
Comparative analysis replay to original
Capture time .88%
% reads +1.16%
% writes -1.16%
Block range 104%
Queue operations .78%
Blocks queued .78%
% sequential queued 108%
Max blocks per sequential run 31.4%
Completion operations 1.42%
Blocks completed .78%
% sequential completions 110%
Salient kernel parameters for reference
Originating host
[root@Megaraid15 Scripts]# ./gkparams.sh sdc
Get Scheduler noop anticipatory deadline [cfq]
Get nr_requests 128
Get queue_depth 256
Get max_sectors_kb 280
Get minimum_io_size 4096
Get Linux IO Read-Ahead 256
Get nomerges - IO merging and coalescing 0
Get rq_affinity 1
Get rotational 1
Replay host
[root@x3650M4 Scripts]# ./gkparams.sh sdc
Get Scheduler noop anticipatory deadline [cfq]
Get nr_requests 128
Get queue_depth 256
Get max_sectors_kb 280
Get minimum_io_size 4096
Get Linux IO Read-Ahead 256
Get nomerges - IO merging and coalescing 0
Get rq_affinity 1
Get rotational 1
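The gkparams.sh script itself is not shown in the thread; a hypothetical reconstruction that reads the same tunables from sysfs might look like the following (the function name and output labels are assumptions; queue_depth lives under device/ rather than queue/, and SYSROOT is overridable so the sketch can be pointed at a test tree instead of the live /sys):

```shell
# Hypothetical gkparams.sh-style helper: dump the queue tunables listed
# above for one block device. SYSROOT defaults to /sys but can be
# overridden for testing against a fake tree.
gkparams() {
    dev=$1
    q=${SYSROOT:-/sys}/block/$dev/queue
    for p in scheduler nr_requests max_sectors_kb minimum_io_size \
             read_ahead_kb nomerges rq_affinity rotational; do
        printf 'Get %-16s %s\n' "$p" "$(cat "$q/$p")"
    done
    # queue_depth is a SCSI device attribute, not a queue attribute
    printf 'Get %-16s %s\n' queue_depth \
        "$(cat "${SYSROOT:-/sys}/block/$dev/device/queue_depth")"
}
```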
Eric
-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf Of Lamkin, Eric
Sent: Wednesday, February 26, 2014 3:55 PM
To: Jens Axboe; David Nellans
Cc: fio@vger.kernel.org
Subject: RE: fio replay
Jens,
Sorry I don't know how to compile.
Hope that doesn't mean no one will ever reply to me again.
I'm very appreciative of the suggestions you've made and will try capturing a source trace where I don't get any dropped event messages. I'll also capture just one device.
My blktrace log files go to the OS storage device whereas the devices traced are on a different storage path.
In this first experiment I didn't have to do any device mapping as I was able to arrange storage on the replay host of sufficient capacity with matching device references.
Eric
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Wednesday, February 26, 2014 3:45 PM
To: Lamkin, Eric; David Nellans
Cc: fio@vger.kernel.org
Subject: Re: fio replay
On 2014-02-26 09:49, Jens Axboe wrote:
> On 2014-02-26 06:23, Lamkin, Eric wrote:
>> David,
>>
>> Thanks for responding.
>> I sent the below out last week but didn't get a reply so thought I'd
>> try a simpler 'what works' tactic this morning.
>> As seen in the below I'm using 2.0.10.
>
> I just ran a quick test here, current -git seems to work fine for me.
> It might be a bit finicky in that the error handling isn't stellar,
> which I'm thinking could be your issue since you have dropped events
> in your blktraces. You need to eliminate any dropped events - either
> log to tmpfs or send the traces over the network, that would likely do it.
>
> If that doesn't solve it for you, please boil this down to a smaller
> test case. Start with one device. Does that work? Up the complexity
> until you see an issue, then I can likely help you get this fixed if I
> get the problematic trace.
Can you try current -git? I made some improvements/fixes to the multiple device handling.
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe fio" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-03-03 13:25 ` Lamkin, Eric
@ 2014-03-04 3:12 ` Alireza Haghdoost
2014-03-04 13:37 ` Lamkin, Eric
2014-03-06 15:55 ` Jens Axboe
1 sibling, 1 reply; 19+ messages in thread
From: Alireza Haghdoost @ 2014-03-04 3:12 UTC (permalink / raw)
To: Lamkin, Eric; +Cc: Jens Axboe, David Nellans, fio
On Mon, Mar 3, 2014 at 7:25 AM, Lamkin, Eric <Eric.Lamkin@lsi.com> wrote:
>
> Comparative analysis replay to original
> Capture time .88%
> % reads +1.16%
> % writes -1.16%
> Block range 104%
> Queue operations .78%
> Blocks queued .78%
> % sequential queued 108%
> Max blocks per sequential run 31.4%
> Completion operations 1.42%
> Blocks completed .78%
> % sequential completions 110%
Eric,
Thanks for sharing the results with us. I am interested in hearing
your interpretation of this comparison...
--
Alireza
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-03-04 3:12 ` Alireza Haghdoost
@ 2014-03-04 13:37 ` Lamkin, Eric
0 siblings, 0 replies; 19+ messages in thread
From: Lamkin, Eric @ 2014-03-04 13:37 UTC (permalink / raw)
To: Alireza Haghdoost; +Cc: Jens Axboe, David Nellans, fio
Alireza,
I provided the information to show how I ran a replay, along with the differences between the original trace and the replay trace, in the hope of receiving advice on what I was doing wrong to cause the replay to stop short.
So my interpretation is that something went wrong. Please help determine what I need to do to use replay correctly.
Eric
-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf Of Alireza Haghdoost
Sent: Monday, March 03, 2014 9:13 PM
To: Lamkin, Eric
Cc: Jens Axboe; David Nellans; fio@vger.kernel.org
Subject: Re: fio replay
On Mon, Mar 3, 2014 at 7:25 AM, Lamkin, Eric <Eric.Lamkin@lsi.com> wrote:
>
> Comparative analysis replay to original
> Capture time .88%
> % reads +1.16%
> % writes -1.16%
> Block range 104%
> Queue operations .78%
> Blocks queued .78%
> % sequential queued 108%
> Max blocks per sequential run 31.4%
> Completion operations 1.42%
> Blocks completed .78%
> % sequential completions 110%
Eric,
Thanks for sharing the results with us. I am interested in hearing your interpretation of this comparison...
--
Alireza
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: fio replay
2014-03-03 13:25 ` Lamkin, Eric
2014-03-04 3:12 ` Alireza Haghdoost
@ 2014-03-06 15:55 ` Jens Axboe
2014-03-06 16:12 ` Lamkin, Eric
1 sibling, 1 reply; 19+ messages in thread
From: Jens Axboe @ 2014-03-06 15:55 UTC (permalink / raw)
To: Lamkin, Eric, David Nellans; +Cc: fio
On 03/03/2014 06:25 AM, Lamkin, Eric wrote:
> Here's my update with a blktrace captured on a single device with no dropped events.
> Also switched to fio 2.1.4.
>
> Capture a single device trace with no dropped events.
[snip]
Thanks for the detailed report. But why are you replaying with
fio-2.1.4? If there's a bug here (and it looks like it), then please
replay the blktrace with fio-2.1.6 and I can take a look at it.
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-03-06 15:55 ` Jens Axboe
@ 2014-03-06 16:12 ` Lamkin, Eric
2014-03-28 13:52 ` Lamkin, Eric
0 siblings, 1 reply; 19+ messages in thread
From: Lamkin, Eric @ 2014-03-06 16:12 UTC (permalink / raw)
To: Jens Axboe; +Cc: fio, David Nellans
OK, will grab 2.1.6 and reply with observations.
I've only recently subscribed to this mailing list, and it amazes me that you can give attention to such a barrage of requests.
Eric
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Thursday, March 06, 2014 9:55 AM
To: Lamkin, Eric; David Nellans
Cc: fio@vger.kernel.org
Subject: Re: fio replay
On 03/03/2014 06:25 AM, Lamkin, Eric wrote:
> Here's my update with a blktrace captured on a single device with no dropped events.
> Also switched to fio 2.1.4.
>
> Capture a single device trace with no dropped events.
[snip]
Thanks for the detailed report. But why are you replaying with fio-2.1.4? If there's a bug here (and it looks like it), then please replay the blktrace with fio-2.1.6 and I can take a look at it.
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-03-06 16:12 ` Lamkin, Eric
@ 2014-03-28 13:52 ` Lamkin, Eric
2014-04-10 15:37 ` Lamkin, Eric
0 siblings, 1 reply; 19+ messages in thread
From: Lamkin, Eric @ 2014-03-28 13:52 UTC (permalink / raw)
To: Jens Axboe; +Cc: fio
Jens,
Finally got back to this.
I do get different results with 2.1.6.1.
Not sure what is wrong.
[root@x3650M4 Run2]# ls
bprun2all.bin
[root@x3650M4 Run2]# fio --name=rp2 --ioengine=libaio --iodepth=12 --read_iolog=bprun2all.bin
rp2: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=12
fio-2.1.6.1
Starting 1 process
blktrace: got timestamp notify: 4000001, 0
fio: filesetup.c:1264: get_file: Assertion `fio_file_open(f)' failed.
fio: pid=14664, got signal=6  [0KB/37KB/0KB /s] [0/9/0 iops] [eta 09h:02m:07s]
rp2: (groupid=0, jobs=1): err= 0: pid=14664: Fri Mar 28 08:47:57 2014
lat (usec) : 100=10.71%, 250=89.29%
cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0
IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=75.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=28/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=12
Run status group 0 (all jobs):
WRITE: io=0KB, aggrb=0KB/s, minb=0KB/s, maxb=0KB/s, mint=595msec, maxt=595msec
[root@x3650M4 Run2]#
Eric
-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf Of Lamkin, Eric
Sent: Thursday, March 06, 2014 10:12 AM
To: Jens Axboe
Cc: fio@vger.kernel.org; David Nellans
Subject: RE: fio replay
OK, will grab 2.1.6 and reply with observations.
I've only recently subscribed to this mailing list, and it amazes me that you can give attention to such a barrage of requests.
Eric
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Thursday, March 06, 2014 9:55 AM
To: Lamkin, Eric; David Nellans
Cc: fio@vger.kernel.org
Subject: Re: fio replay
On 03/03/2014 06:25 AM, Lamkin, Eric wrote:
> Here's my update with a blktrace captured on a single device with no dropped events.
> Also switched to fio 2.1.4.
>
> Capture a single device trace with no dropped events.
[snip]
Thanks for the detailed report. But why are you replaying with fio-2.1.4? If there's a bug here (and it looks like it), then please replay the blktrace with fio-2.1.6 and I can take a look at it.
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
2014-03-28 13:52 ` Lamkin, Eric
@ 2014-04-10 15:37 ` Lamkin, Eric
0 siblings, 0 replies; 19+ messages in thread
From: Lamkin, Eric @ 2014-04-10 15:37 UTC (permalink / raw)
To: Jens Axboe; +Cc: fio
Still looking for some feedback on my fio replay issue.
-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf Of Lamkin, Eric
Sent: Friday, March 28, 2014 8:52 AM
To: Jens Axboe
Cc: fio@vger.kernel.org
Subject: RE: fio replay
Jens,
Finally got back to this.
I do get different results with 2.1.6.1.
Not sure what is wrong.
[root@x3650M4 Run2]# ls
bprun2all.bin
[root@x3650M4 Run2]# fio --name=rp2 --ioengine=libaio --iodepth=12 --read_iolog=bprun2all.bin
rp2: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=12
fio-2.1.6.1
Starting 1 process
blktrace: got timestamp notify: 4000001, 0
fio: filesetup.c:1264: get_file: Assertion `fio_file_open(f)' failed.
fio: pid=14664, got signal=6  [0KB/37KB/0KB /s] [0/9/0 iops] [eta 09h:02m:07s]
rp2: (groupid=0, jobs=1): err= 0: pid=14664: Fri Mar 28 08:47:57 2014
lat (usec) : 100=10.71%, 250=89.29%
cpu : usr=0.00%, sys=0.00%, ctx=0, majf=0, minf=0
IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=75.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=28/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=12
Run status group 0 (all jobs):
WRITE: io=0KB, aggrb=0KB/s, minb=0KB/s, maxb=0KB/s, mint=595msec, maxt=595msec
[root@x3650M4 Run2]#
Eric
-----Original Message-----
From: fio-owner@vger.kernel.org [mailto:fio-owner@vger.kernel.org] On Behalf Of Lamkin, Eric
Sent: Thursday, March 06, 2014 10:12 AM
To: Jens Axboe
Cc: fio@vger.kernel.org; David Nellans
Subject: RE: fio replay
OK, will grab 2.1.6 and reply with observations.
I've only recently subscribed to this mailing list, and it amazes me that you can give attention to such a barrage of requests.
Eric
-----Original Message-----
From: Jens Axboe [mailto:axboe@kernel.dk]
Sent: Thursday, March 06, 2014 9:55 AM
To: Lamkin, Eric; David Nellans
Cc: fio@vger.kernel.org
Subject: Re: fio replay
On 03/03/2014 06:25 AM, Lamkin, Eric wrote:
> Here's my update with a blktrace captured on a single device with no dropped events.
> Also switched to fio 2.1.4.
>
> Capture a single device trace with no dropped events.
[snip]
Thanks for the detailed report. But why are you replaying with fio-2.1.4? If there's a bug here (and it looks like it), then please replay the blktrace with fio-2.1.6 and I can take a look at it.
--
Jens Axboe
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: fio replay
@ 2014-02-20 16:39 Lamkin, Eric
0 siblings, 0 replies; 19+ messages in thread
From: Lamkin, Eric @ 2014-02-20 16:39 UTC (permalink / raw)
To: fio
Deleted formatting.
From: Lamkin, Eric
Sent: Thursday, February 20, 2014 9:36 AM
To: 'fio@vger.kernel.org'
Subject: fio replay
I'm evaluating fio for replay of Linux blktrace workloads.
So far the whole workload is not being replayed.
I'm wondering if I can get some pointers on what I may be doing wrong.
Here are some details:
Methodology:
. Capture the IO workload with blktrace
- Mount the debug file system on the host with the IO workload to capture
. mount -t debugfs debugfs /sys/kernel/debug
- Start blktrace for all devices of interest
. blktrace -d /dev/sdb /dev/sdca
- Stop blktrace once the IO workload of interest is complete
. Ctrl-C
- Convert the blktrace file set to a single binary with blkparse
. blkparse sdb sdca -O -d blkparseall.bin
- Create a blkparse .bin per device (may be required for device mapping)
. blkparse /dev/sdb -O -d blkparsesdb.bin
. blkparse /dev/sdca -O -d blkparsesdca.bin
. . . .
- Capture the partition file from the same host
. cat /proc/partitions > partitions.txt
Observation:
. Something is preventing the entire blktrace from replaying.
. During replay a time remaining counter begins decrementing from 2+ minutes.
. At some point the replay comes to a conclusion with time remaining on the clock.
. I don't know whether the following message from fio is a clue to why the full replay does not occur: "fio: file hash not empty on exit"
The blktrace was captured on the source host for /dev/sdc and /dev/sdd.
Replay occurs on the replay host to /dev/sdc and /dev/sdd.
Partitions on source host.
[root@x3650M4 Profiler]# cat /home/Documents/TraceFiles/MR15couchbase/partitions1MR15.txt
major minor  #blocks  name
   8       32 1999699968 sdc
   8       48 1142685696 sdd
   8       16  488386584 sdb
   8        0  488386584 sda
   8        1     512000 sda1
   8        2  487873536 sda2
 253        0   52428800 dm-0
 253        1   16482304 dm-1
 253        2  418959360 dm-2
Blktrace on the source host.
[root@Megaraid15 tracefiles]# blktrace -d /dev/sdc /dev/sdd
^C=== sdc ===
  CPU  0:  4755036 events,  222893 KiB data
  CPU  1:  4597591 events,  215513 KiB data
  CPU  2:   742267 events,   34794 KiB data
  CPU  3:   821213 events,   38495 KiB data
  CPU  4:   398054 events,   18659 KiB data
  CPU  5:   468293 events,   21952 KiB data
  CPU  6:   211068 events,    9894 KiB data
  CPU  7:   259636 events,   12171 KiB data
  CPU  8:   118142 events,    5538 KiB data
  CPU  9:   152712 events,    7159 KiB data
  CPU 10:    43662 events,    2047 KiB data
  CPU 11:    60687 events,    2845 KiB data
  CPU 12:    34881 events,    1636 KiB data
  CPU 13:    46452 events,    2178 KiB data
  CPU 14:    23514 events,    1103 KiB data
  CPU 15:    26782 events,    1256 KiB data
  CPU 16:  2978005 events,  139594 KiB data
  CPU 17:  3138535 events,  147119 KiB data
  CPU 18:    70275 events,    3295 KiB data
  CPU 19:   105291 events,    4936 KiB data
  CPU 20:    54002 events,    2532 KiB data
  CPU 21:    50163 events,    2352 KiB data
  CPU 22:    38882 events,    1823 KiB data
  CPU 23:    31927 events,    1497 KiB data
  CPU 24:    19594 events,     919 KiB data
  CPU 25:    23753 events,    1114 KiB data
  CPU 26:    10676 events,     501 KiB data
  CPU 27:    12503 events,     587 KiB data
  CPU 28:     1632 events,      77 KiB data
  CPU 29:     9099 events,     427 KiB data
  CPU 30:     5472 events,     257 KiB data
  CPU 31:     3983 events,     187 KiB data
  Total:  19313782 events (dropped 30207),  905335 KiB data
=== sdd ===
  CPU  0:        1 events,       1 KiB data
  CPU  1:        0 events,       0 KiB data
  CPU  2:        0 events,       0 KiB data
  CPU  3:        0 events,       0 KiB data
  CPU  4:        0 events,       0 KiB data
  CPU  5:        0 events,       0 KiB data
  CPU  6:        0 events,       0 KiB data
  CPU  7:        0 events,       0 KiB data
  CPU  8:        0 events,       0 KiB data
  CPU  9:        0 events,       0 KiB data
  CPU 10:        0 events,       0 KiB data
  CPU 11:        0 events,       0 KiB data
  CPU 12:        0 events,       0 KiB data
  CPU 13:        0 events,       0 KiB data
  CPU 14:        0 events,       0 KiB data
  CPU 15:        0 events,       0 KiB data
  CPU 16:        0 events,       0 KiB data
  CPU 17:        0 events,       0 KiB data
  CPU 18:        0 events,       0 KiB data
  CPU 19:        0 events,       0 KiB data
  CPU 20:        0 events,       0 KiB data
  CPU 21:        0 events,       0 KiB data
  CPU 22:        0 events,       0 KiB data
  CPU 23:        0 events,       0 KiB data
  CPU 24:        0 events,       0 KiB data
  CPU 25:        0 events,       0 KiB data
  CPU 26:        0 events,       0 KiB data
  CPU 27:        0 events,       0 KiB data
  CPU 28:        0 events,       0 KiB data
  CPU 29:        0 events,       0 KiB data
  CPU 30:        0 events,       0 KiB data
  CPU 31:        0 events,       0 KiB data
  Total:         1 events (dropped 0),       1 KiB data
You have 30207 (0.2%) dropped events
Consider using a larger buffer size (-b) and/or more buffers (-n)
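Following that advice (and Jens's suggestion later in the thread to keep the trace log off the traced path), a capture less likely to drop events could be sketched as below; these are standard blktrace options, but the tmpfs mount point is only an example, and this must run as root on the capture host:

```shell
# Log blktrace data to RAM rather than disk, with bigger/more buffers.
mount -t tmpfs tmpfs /mnt/traces     # example mount point
cd /mnt/traces
blktrace -d /dev/sdc -b 1024 -n 8    # 1024 KiB buffers, 8 per CPU
```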
Partitions on replay host.
[root@x3650M4 Profiler]# cat /proc/partitions
major minor  #blocks  name
   8        0  291991552 sda
   8        1     204800 sda1
   8        2     512000 sda2
   8        3  291272704 sda3
 253        0   52428800 dm-0
 253        1    8175616 dm-1
 253        2  230666240 dm-2
   8       16   47185920 sdb
   8       32 1999699968 sdc
   8       48 1142685696 sdd
Since the partitions match for the devices traced, no replay_redirect is used.
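For reference, if the device names had not lined up between hosts, fio's replay_redirect option can force all replayed I/O onto a single device. A sketch of such a job file (the target device path is an example only):

```ini
; Replay the blkparse binary log, redirecting every I/O to one device.
[replay]
ioengine=libaio
iodepth=16
read_iolog=bprun1all.bin
replay_redirect=/dev/sdz
```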
Results w/o replay_no_stall.
[root@x3650M4 MR15couchbase]# fio --name=rp1 --ioengine=libaio --iodepth=16 --read_iolog=bprun1all.bin
rp1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.0.10
Starting 1 process
got timestamp notify: 4000001, 0
got timestamp notify: 4000001, 0
Jobs: 1 (f=1): [W] [27.1% done] [0K/70636K/0K /s] [0/17.7K/0 iops] [eta 01m:02s]
rp1: (groupid=0, jobs=1): err= 0: pid=14710: Fri Jan 31 15:36:29 2014
  write: io=1535.6MB, bw=73954KB/s, iops=18488, runt= 21262msec
    slat (usec): min=4, max=1156, avg=13.89, stdev= 6.02
    clat (usec): min=4, max=387196, avg=520.86, stdev=2425.91
     lat (usec): min=49, max=387211, avg=535.31, stdev=2425.96
    clat percentiles (usec):
     |  1.00th=[   43],  5.00th=[   60], 10.00th=[   92], 20.00th=[  143],
     | 30.00th=[  183], 40.00th=[  278], 50.00th=[  322], 60.00th=[  588],
     | 70.00th=[  740], 80.00th=[  812], 90.00th=[  948], 95.00th=[ 1272],
     | 99.00th=[ 1800], 99.50th=[ 2096], 99.90th=[ 2800], 99.95th=[ 3664],
     | 99.99th=[21632]
    lat (usec) : 10=0.01%, 20=0.01%, 50=3.43%, 100=7.59%, 250=25.49%
    lat (usec) : 500=16.89%, 750=17.87%, 1000=19.45%
    lat (msec) : 2=8.63%, 4=0.60%, 10=0.03%, 20=0.01%, 50=0.01%
    lat (msec) : 500=0.01%
  cpu          : usr=6.21%, sys=27.22%, ctx=63452, majf=0, minf=18
  IO depths    : 1=6.1%, 2=12.0%, 4=24.0%, 8=19.2%, 16=38.6%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=86.6%, 8=2.6%, 16=10.8%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=393102/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=1535.6MB, aggrb=73953KB/s, minb=73953KB/s, maxb=73953KB/s, mint=21262msec, maxt=21262msec
fio: file hash not empty on exit
Blktrace of replay.
[root@x3650M4 cbreplay1]# blktrace /dev/sdc /dev/sdd
^C=== sdc ===
  CPU  0:  5952393 events,  279019 KiB data
  CPU  1:    11609 events,     545 KiB data
  CPU  2:    10868 events,     510 KiB data
  CPU  3:     4514 events,     212 KiB data
  CPU  4:    10245 events,     481 KiB data
  CPU  5:      220 events,      11 KiB data
  CPU  6:   129295 events,    6061 KiB data
  CPU  7:     5353 events,     251 KiB data
  CPU  8:     6251 events,     294 KiB data
  CPU  9:      680 events,      32 KiB data
  CPU 10:       22 events,       2 KiB data
  CPU 11:        0 events,       0 KiB data
  Total:   6131450 events (dropped 0),  287412 KiB data
=== sdd ===
  CPU  0:        0 events,       0 KiB data
  CPU  1:        0 events,       0 KiB data
  CPU  2:        1 events,       1 KiB data
  CPU  3:        0 events,       0 KiB data
  CPU  4:        0 events,       0 KiB data
  CPU  5:        0 events,       0 KiB data
  CPU  6:        0 events,       0 KiB data
  CPU  7:        0 events,       0 KiB data
  CPU  8:        0 events,       0 KiB data
  CPU  9:        0 events,       0 KiB data
  CPU 10:        0 events,       0 KiB data
  CPU 11:        0 events,       0 KiB data
  Total:         1 events (dropped 0),       1 KiB data
Note: the replay host has fewer processors than the original host.
sdc: 19,313,782 original events vs 6,131,450 replay events
sdc: 905,335 KiB original data vs 287,412 KiB replay data
Using replay_no_stall ran more quickly but still not completely.
[root@x3650M4 MR15couchbase]# fio --name=rp1 --ioengine=libaio --iodepth=16 --replay_no_stall=1 --read_iolog=bprun1all.bin
rp1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.0.10
Starting 1 process
got timestamp notify: 4000001, 0
got timestamp notify: 4000001, 0
Jobs: 1 (f=1): [W] [41.2% done] [0K/277.2M/0K /s] [0/70.1K/0 iops] [eta 00m:20s]
rp1: (groupid=0, jobs=1): err= 0: pid=5680: Wed Feb  5 09:23:14 2014
  write: io=1535.6MB, bw=245459KB/s, iops=61364, runt=  6406msec
    slat (usec): min=4, max=4028, avg= 9.47, stdev=11.76
    clat (usec): min=2, max=4234, avg=156.59, stdev=73.09
     lat (usec): min=41, max=4338, avg=166.32, stdev=74.21
    clat percentiles (usec):
     |  1.00th=[   59],  5.00th=[   79], 10.00th=[   93], 20.00th=[  112],
     | 30.00th=[  129], 40.00th=[  141], 50.00th=[  155], 60.00th=[  167],
     | 70.00th=[  175], 80.00th=[  187], 90.00th=[  219], 95.00th=[  249],
     | 99.00th=[  302], 99.50th=[  338], 99.90th=[  836], 99.95th=[ 1688],
     | 99.99th=[ 1992]
    lat (usec) : 4=0.01%, 10=0.01%, 50=0.21%, 100=12.80%, 250=82.10%
    lat (usec) : 500=4.69%, 750=0.08%, 1000=0.02%
    lat (msec) : 2=0.08%, 4=0.01%, 10=0.01%
  cpu          : usr=11.25%, sys=63.68%, ctx=87243, majf=1, minf=19
  IO depths    : 1=6.1%, 2=12.0%, 4=24.0%, 8=19.2%, 16=38.6%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=86.6%, 8=2.6%, 16=10.8%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=393102/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=1535.6MB, aggrb=245458KB/s, minb=245458KB/s, maxb=245458KB/s, mint=6406msec, maxt=6406msec
fio: file hash not empty on exit
Blktrace of replay with replay_no_stall.
[root@x3650M4 cbreplay2]# blktrace -d /dev/sdc /dev/sdd
^C=== sdc ===
  CPU  0:  5639178 events,  264337 KiB data
  CPU  1:   131297 events,    6155 KiB data
  CPU  2:    37969 events,    1780 KiB data
  CPU  3:    17794 events,     835 KiB data
  CPU  4:        0 events,       0 KiB data
  CPU  5:     3364 events,     158 KiB data
  CPU  6:   151797 events,    7116 KiB data
  CPU  7:     9839 events,     462 KiB data
  CPU  8:     8902 events,     418 KiB data
  CPU  9:    11832 events,     555 KiB data
  CPU 10:     9135 events,     429 KiB data
  CPU 11:        0 events,       0 KiB data
  Total:   6021107 events (dropped 0),  282240 KiB data
=== sdd ===
  CPU  0:        0 events,       0 KiB data
  CPU  1:        0 events,       0 KiB data
  CPU  2:        0 events,       0 KiB data
  CPU  3:        0 events,       0 KiB data
  CPU  4:        0 events,       0 KiB data
  CPU  5:        0 events,       0 KiB data
  CPU  6:        0 events,       0 KiB data
  CPU  7:        0 events,       0 KiB data
  CPU  8:        0 events,       0 KiB data
  CPU  9:        1 events,       1 KiB data
  CPU 10:        0 events,       0 KiB data
  CPU 11:        0 events,       0 KiB data
  Total:         1 events (dropped 0),       1 KiB data
Original to replay blktrace analysis comparison (based on completion operations)

Original                   Replay w/o no_stall        Replay w/ no_stall
132.5 seconds              23.2 seconds               6.66 seconds
2 CPUs (0,1)               1 CPU (0)                  1 CPU (0)
100% write                 99.98% write               99.98% write
3,842,340,968 blk range    3,999,399,928 blk range    3,999,399,928 blk range
817,103 IOs                393,185 IOs                393,185 IOs
11,863,200 blocks          3,145,480 blocks           3,145,480 blocks
248,142 seq runs           78,502 seq runs            78,511 seq runs
10,661,656 seq blocks      2,972,672 seq blocks       2,972,312 seq blocks
83.2% seq IOs              94.5% seq IOs              94.5% seq IOs
89.9% seq bytes            94.5% seq bytes            94.5% seq bytes
Note: fio replay uses blktrace queue operations rather than completion operations.
The blktrace file, filtered down to queue operations, had to be split to analyze in Excel.
Original to replay blktrace analysis comparison (based on queue operations)

Original                   Replay w/o no_stall        Replay w/ no_stall
132.5 seconds              23.2 seconds               6.66 seconds
32 CPUs (0-31)             11 CPUs (0-10)             10 CPUs (0-3,5-10)
100% write                 99.98% write               99.98% write
3,842,340,968 blk range    3,999,399,928 blk range    3,999,399,928 blk range
1,482,813 operations       393,185 operations         393,185 operations
11,862,504 blocks          3,145,480 blocks           3,145,480 blocks
254,458 seq runs           78,508 seq runs            78,508 seq runs
10,818,640 seq blocks      2,972,816 seq blocks       2,972,816 seq blocks
91.2% seq IOs              94.5% seq IOs              94.5% seq IOs
91.2% seq bytes            94.5% seq bytes            94.5% seq bytes
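The queue-operation filtering described above can be sketched with blkparse and awk. This is only an illustrative sketch: the sample lines below are fabricated, and the field positions assume blkparse's default output format, where field 6 is the action character (Q = queued, C = completed).

```shell
# Hypothetical sketch: isolate queue (Q) events from blkparse text output,
# so the trace can be compared against what fio actually replays.
# The sample lines are made up for illustration; on a real trace you would
# first convert the binary capture, e.g.:  blkparse -i sdc -o sdc.txt
cat > sample_parse.txt <<'EOF'
  8,32   0        1     0.000000000  5680  Q   W 1000 + 8 [fio]
  8,32   0        2     0.000100000  5680  C   W 1000 + 8 [0]
  8,32   0        3     0.000200000  5680  Q   W 1008 + 8 [fio]
EOF

# Field 6 of blkparse's default line format is the trace action.
awk '$6 == "Q"' sample_parse.txt > queue_ops.txt

wc -l < queue_ops.txt   # prints 2: two of the three sample events are queue ops
```

Splitting the filtered file for Excel could then be done with `split -l` on queue_ops.txt.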
Changed iodepth to 12 to match the number of cores on the replay host; this did not resolve the issue.
[root@x3650M4 MR15couchbase]# fio --name=rp1 --ioengine=libaio --iodepth=12 --read_iolog=bprun1all.bin
rp1: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=12
fio-2.0.10
Starting 1 process
got timestamp notify: 4000001, 0
got timestamp notify: 4000001, 0
Jobs: 1 (f=1): [W] [31.0% done] [0K/85676K/0K /s] [0 /21.5K/0 iops] [eta 01m:09s]
rp1: (groupid=0, jobs=1): err= 0: pid=27491: Tue Feb 11 09:19:56 2014
  write: io=1535.6MB, bw=67315KB/s, iops=16828 , runt= 23359msec
    slat (usec): min=5 , max=4580 , avg=19.69, stdev=18.13
    clat (usec): min=4 , max=18051 , avg=477.41, stdev=429.40
     lat (usec): min=43 , max=18063 , avg=497.81, stdev=430.40
    clat percentiles (usec):
     |  1.00th=[   45],  5.00th=[   50], 10.00th=[   92], 20.00th=[  151],
     | 30.00th=[  195], 40.00th=[  278], 50.00th=[  294], 60.00th=[  510],
     | 70.00th=[  668], 80.00th=[  796], 90.00th=[  932], 95.00th=[ 1288],
     | 99.00th=[ 1768], 99.50th=[ 2096], 99.90th=[ 2832], 99.95th=[ 3472],
     | 99.99th=[ 7456]
    lat (usec) : 10=0.01%, 20=0.01%, 50=4.72%, 100=7.40%, 250=25.11%
    lat (usec) : 500=22.40%, 750=15.50%, 1000=15.72%
    lat (msec) : 2=8.53%, 4=0.57%, 10=0.03%, 20=0.01%
  cpu          : usr=5.68%, sys=35.42%, ctx=60638, majf=1, minf=19
  IO depths    : 1=6.1%, 2=12.0%, 4=24.0%, 8=57.9%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=88.1%, 8=2.3%, 16=9.6%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=0/w=393102/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
  WRITE: io=1535.6MB, aggrb=67314KB/s, minb=67314KB/s, maxb=67314KB/s, mint=23359msec, maxt=23359msec
fio: file hash not empty on exit
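For reference, the command-line invocation above can also be written as a job file. The sketch below is hypothetical (option names per the fio 2.0.x HOWTO); the commented-out lines show the replay knobs discussed in this thread.

```ini
; Hypothetical replay job sketch, not the exact job used above.
[replay]
ioengine=libaio
direct=1
iodepth=12
; binary blktrace capture to replay
read_iolog=bprun1all.bin
; replay_no_stall=1 ignores inter-I/O timestamps and replays as fast as
; possible; leave it unset to preserve the original trace timing.
;replay_no_stall=1
; replay_redirect retargets all trace I/O at a single device.
;replay_redirect=/dev/sdc
```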
Eric
Thread overview: 19+ messages
2014-02-26 13:44 fio replay Lamkin, Eric
2014-02-26 14:13 ` David Nellans
2014-02-26 14:23 ` Lamkin, Eric
2014-02-26 15:30 ` Sitsofe Wheeler
2014-02-26 15:48 ` Lamkin, Eric
2014-02-26 16:15 ` Sitsofe Wheeler
2014-02-26 17:49 ` Jens Axboe
2014-02-26 21:44 ` Jens Axboe
2014-02-26 21:55 ` Lamkin, Eric
2014-02-27 18:48 ` Jens Axboe
2014-02-27 20:12 ` Lamkin, Eric
2014-03-03 13:25 ` Lamkin, Eric
2014-03-04 3:12 ` Alireza Haghdoost
2014-03-04 13:37 ` Lamkin, Eric
2014-03-06 15:55 ` Jens Axboe
2014-03-06 16:12 ` Lamkin, Eric
2014-03-28 13:52 ` Lamkin, Eric
2014-04-10 15:37 ` Lamkin, Eric
-- strict thread matches above, loose matches on Subject: below --
2014-02-20 16:39 Lamkin, Eric