* Fio version 3.1 on CentOS - Possible bug reporting
       [not found] <1551233986158.86408@skhms.com>
@ 2019-02-28 23:35 ` Kailash Mallikarjunaswamy
From: Kailash Mallikarjunaswamy @ 2019-02-28 23:35 UTC (permalink / raw)
  To: fio, majordomo, axboe; +Cc: Yi Tong, Sean (Seong Won) Shin


Hi,

I am writing to report a possible bug in:

fio version - 3.1

Operating System - CentOS 7.6.1810
kernel: 3.10.0-957.1.3.el7.x86_64

I use an HP ProLiant enterprise server to run these measurements.

Bug: In some of the runs, the maximum completion latency (clat max) is less than the 9-nines percentile value. How is this possible? The numbers in question are the clat max (41005 usec) and the 99.9999999th percentile (41157 usec) in the output below. Kindly take a look.

____________________________________________________________________________

Workload:



sudo chrt -f 1 fio --filename=/dev/nvme0n1 --numjobs=1 --iodepth=512 \
  --offset=0M --bs=7681501126656 --size=7681501126656 --io_size=%s \
  --rw=read --buffered=0 --thread --time_based --runtime=60 \
  --name=seqtest0 --group_reporting --disable_lat=1 --disable_slat=1 \
  --disable_bw_measurement=1 --zero_buffers --ioengine=libaio \
  --percentile=99:99.9:99.99:99.999:99.9999:99.99999:99.999999:99.9999999 \
  --output-format=json,normal --eta=never --invalidate=1 --end_fsync=0 \
  --refill_buffers



____________________________________________________________________________

Result:



seqtest0: (groupid=0, jobs=1): err= 0: pid=3908: Tue Feb 26 18:02:25 2019
   read: IOPS=25.1k, BW=3135MiB/s (3287MB/s)(184GiB/60021msec)
    clat (usec): min=16426, max=41005, avg=20384.67, stdev=337.20
    clat percentiles (usec):
     | 99.0000000th=[20579], 99.9000000th=[21103], 99.9900000th=[38011],
     | 99.9990000th=[40633], 99.9999000th=[41157], 99.9999900th=[41157],
     | 99.9999990th=[41157], 99.9999999th=[41157]
   bw (  MiB/s): min= 2981, max= 3144, per=100.00%, avg=3135.58, stdev=14.46, samples=120
   iops        : min=23848, max=25154, avg=25084.63, stdev=115.65, samples=120
  lat (msec)   : 20=0.31%, 50=99.69%
  cpu          : usr=4.10%, sys=64.00%, ctx=742962, majf=0, minf=44
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
     issued rwt: total=1505214,0,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=512

Run status group 0 (all jobs):
   READ: bw=3135MiB/s (3287MB/s), 3135MiB/s-3135MiB/s (3287MB/s-3287MB/s), io=184GiB (197GB), run=60021-60021msec

____________________________________________________________________________
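One way the numbers above can come about (an assumption about fio's internals, not verified here): min/max latency are tracked exactly, but the samples used for percentile reporting are kept in a coarse log-linear histogram, so a tail percentile is reported as a representative value inside its bucket and can land above the exact observed max. A minimal sketch of that rounding effect (the `bits=6` group size and the midpoint convention are assumptions for illustration, not fio's confirmed constants):

```python
# Sketch (not fio's actual code) of log-linear latency bucketing:
# values below 2**bits are stored exactly; above that, the bucket
# width doubles with each power of two, so tail precision is lost.

def bucket_value(us: int, bits: int = 6) -> float:
    """Map a latency (usec) to its bucket's representative (midpoint) value."""
    if us < (1 << bits):
        return float(us)                 # small values: stored exactly
    shift = us.bit_length() - 1 - bits   # log2 of the bucket width
    lo = (us >> shift) << shift          # bucket lower bound
    return lo + (1 << shift) / 2.0       # report the bucket midpoint

exact_max = 41005                        # the exact clat max from the report
approx = bucket_value(exact_max)
print(exact_max, approx)                 # -> 41005 41216.0
assert approx > exact_max                # bucketed tail value exceeds true max
```

If this is what happens, the 41157 usec percentile is not a real observed latency but a bucket representative near the 41005 usec max, so max < percentile would be expected rounding rather than a measurement error.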



Any inputs/help will be greatly appreciated.

Thanks in advance!

