* fio seqread benchmark, 100% cpu usage
@ 2021-02-07 19:18 Christian Braun
From: Christian Braun @ 2021-02-07 19:18 UTC
To: linux-btrfs
[-- Attachment #1: Type: text/plain, Size: 1503 bytes --]
Hi,

I am benchmarking a new setup with fio for use with btrfs. During a
seqread test, system CPU usage is at 100% and read performance is
degraded. There are 8 "kworker/u64:*+btrfs-endio" threads (the machine
has 8 cores) using all the CPU resources. That does not seem right.
With seqwrite this does not happen: CPU usage is at 20% and write
performance is great, equal to ext4.
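The per-thread numbers in the attached log were collected by running
dstat and top alongside fio; a minimal sketch of that monitoring setup
(the exact flags are illustrative, not necessarily the ones used):

```shell
# Run in parallel with the fio job.
dstat -tcd --disk-util 1 &                   # overall CPU + per-disk throughput
top -b -H -d 1 -o %CPU | grep btrfs-endio    # per-thread view of the endio workers
```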
btrfs:
READ: bw=3840MiB/s (4026MB/s), 3840MiB/s-3840MiB/s
(4026MB/s-4026MB/s), io=160GiB (172GB), run=42671-42671msec
WRITE: bw=4716MiB/s (4945MB/s), 4716MiB/s-4716MiB/s
(4945MB/s-4945MB/s), io=640GiB (687GB), run=138963-138963msec
ext4:
READ: bw=6442MiB/s (6754MB/s), 6442MiB/s-6442MiB/s
(6754MB/s-6754MB/s), io=160GiB (172GB), run=25435-25435msec
WRITE: bw=5025MiB/s (5269MB/s), 5025MiB/s-5025MiB/s
(5269MB/s-5269MB/s), io=640GiB (687GB), run=130419-130419msec
The benchmarking is done on an md raid0 device of 4 NVMe drives, using
default settings:
mdadm --create --verbose --level=0 --raid-devices=4 /dev/md/pex
/dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
mkfs.btrfs -s 4096 /dev/md/pex
mount -o discard,noatime /dev/md/pex /media/test
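For completeness, the fio invocations (copied from the attached log)
were:

```shell
fio --name=seqread --rw=read --direct=1 --ioengine=libaio \
    --bs=32M --numjobs=16 --size=10G --runtime=600 --group_reporting
fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio \
    --bs=32M --numjobs=16 --size=40G --runtime=600 --group_reporting
```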
Linux x 5.10.13-arch1-2 #1 SMP PREEMPT Sun, 07 Feb 2021 10:19:03 +0000
x86_64 GNU/Linux
btrfs-progs v5.10
More details are in the attached log.
I expected btrfs seqread to perform on par with the ext4 seqread,
especially since seqwrite does. Are there settings I missed or got
wrong?
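One thing worth ruling out (a guess, not confirmed anywhere in this
thread) is data-checksum verification in the btrfs-endio workers. A
sketch of how to test that, assuming fio was run from /media/test so
the test files live there; note that extents written earlier keep
their existing checksums, so the files must be re-created after
remounting:

```shell
# Assumption: crc32c verification on read is the CPU sink.
umount /media/test
mount -o nodatasum,noatime /dev/md/pex /media/test
rm /media/test/seqread.*          # force fio to lay the files out again
fio --name=seqread --rw=read --direct=1 --ioengine=libaio \
    --bs=32M --numjobs=16 --size=10G --runtime=600 --group_reporting

# Alternatively, profile while the read test runs to see where the
# kworker time actually goes:
perf top -g
```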
Thank you for reading,
Christian
[-- Attachment #2: btrfs-20210207.txt --]
[-- Type: text/plain, Size: 18657 bytes --]
=== INFO ===
# uname -a
Linux x 5.10.13-arch1-2 #1 SMP PREEMPT Sun, 07 Feb 2021 10:19:03 +0000 x86_64 GNU/Linux
# btrfs --version
btrfs-progs v5.10
# btrfs fi show
Label: none uuid: x
Total devices 1 FS bytes used 608.48GiB
devid 1 size 744.82GiB used 737.80GiB path x
Label: none uuid: x
Total devices 1 FS bytes used 802.87GiB
devid 1 size 3.64TiB used 822.02GiB path /dev/md127
# btrfs fi df /media/test
Data, single: total=814.01GiB, used=802.04GiB
System, single: total=4.00MiB, used=128.00KiB
Metadata, single: total=1.01GiB, used=851.75MiB
GlobalReserve, single: total=512.00MiB, used=0.00B
=== SETUP ===
mdadm --create --verbose --level=0 --raid-devices=4 /dev/md/pex /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1 /dev/nvme4n1p1
mdadm -Q --detail /dev/md/pex
/dev/md/pex:
Version : 1.2
Creation Time : Sun Jan 31 09:04:03 2021
Raid Level : raid0
Array Size : 3906514944 (3725.54 GiB 4000.27 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Sun Jan 31 09:04:03 2021
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Consistency Policy : none
Name : x
UUID : x
Events : 0
Number Major Minor RaidDevice State
0 259 8 0 active sync /dev/nvme1n1p1
1 259 9 1 active sync /dev/nvme2n1p1
2 259 13 2 active sync /dev/nvme3n1p1
3 259 12 3 active sync /dev/nvme4n1p1
=== BTRFS ===
=> dstat was run in parallel with fio to measure CPU usage
mkfs.btrfs -s 4096 /dev/md/pex
btrfs-progs v5.10
See http://btrfs.wiki.kernel.org for more information.
Detected a SSD, turning off metadata duplication. Mkfs with -m dup if you want to force metadata duplication.
Label: (null)
UUID: x
Node size: 16384
Sector size: 4096
Filesystem size: 3.64TiB
Block group profiles:
Data: single 8.00MiB
Metadata: single 8.00MiB
System: single 4.00MiB
SSD detected: yes
Incompat features: extref, skinny-metadata
Runtime features:
Checksum: crc32c
Number of devices: 1
Devices:
ID SIZE PATH
1 3.64TiB /dev/md/pex
mount -o discard,noatime /dev/md/pex /media/test
fstrim -v /media/test
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=32M --numjobs=16 --size=10G --runtime=600 --group_reporting
seqread: (g=0): rw=read, bs=(R) 32.0MiB-32.0MiB, (W) 32.0MiB-32.0MiB, (T) 32.0MiB-32.0MiB, ioengine=libaio, iodepth=1
...
fio-3.25
Starting 16 processes
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
Jobs: 5 (f=5): [_(6),R(1),_(3),R(3),_(2),R(1)][97.7%][r=3808MiB/s][r=119 IOPS][eta 00m:01s]
seqread: (groupid=0, jobs=16): err= 0: pid=7274: Sun Feb 7 12:36:19 2021
read: IOPS=119, BW=3840MiB/s (4026MB/s)(160GiB/42671msec)
slat (msec): min=26, max=447, avg=115.62, stdev=46.64
clat (usec): min=6, max=103711, avg=14844.11, stdev=10291.09
lat (msec): min=26, max=457, avg=130.48, stdev=48.26
clat percentiles (usec):
| 1.00th=[ 668], 5.00th=[ 2212], 10.00th=[ 3851], 20.00th=[ 6390],
| 30.00th=[ 8455], 40.00th=[ 10290], 50.00th=[ 12518], 60.00th=[ 15270],
| 70.00th=[ 18744], 80.00th=[ 22938], 90.00th=[ 27395], 95.00th=[ 33817],
| 99.00th=[ 47973], 99.50th=[ 54264], 99.90th=[ 69731], 99.95th=[ 73925],
| 99.99th=[103285]
bw ( MiB/s): min= 1700, max= 7030, per=100.00%, avg=3902.26, stdev=59.61, samples=1318
iops : min= 40, max= 218, avg=113.93, stdev= 1.94, samples=1318
lat (usec) : 10=0.02%, 50=0.02%, 250=0.02%, 500=0.27%, 750=0.94%
lat (usec) : 1000=0.82%
lat (msec) : 2=2.50%, 4=5.90%, 10=27.87%, 20=35.18%, 50=25.66%
lat (msec) : 100=0.78%, 250=0.02%
cpu : usr=0.01%, sys=28.35%, ctx=4006026, majf=0, minf=131275
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=3840MiB/s (4026MB/s), 3840MiB/s-3840MiB/s (4026MB/s-4026MB/s), io=160GiB (172GB), run=42671-42671msec
--total-cpu-usage-- dsk/nvme0n1-dsk/nvme1n1-dsk/nvme2n1-dsk/nvme3n1-dsk/nvme4n1 net/enp34s0 ---paging-- ---system-- ------memory-usage-----
usr sys idl wai stl| read writ: read writ: read writ: read writ: read writ| recv send| in out | int csw | used free buff cach
3 0 97 0 0| 0 0 : 0 0 : 0 0 : 0 0 : 0 0 | 397B 273B| 0 0 |2244 3949 |3732M 54.9G 5236k 4796M
25 69 7 0 0| 0 1638B: 893M 0 : 893M 0 : 893M 0 : 893M 0 | 588B 405B| 0 0 | 120k 217k|4254M 54.4G 5236k 4801M
26 74 0 0 0| 0 1106k: 963M 0 : 963M 0 : 963M 0 : 963M 0 | 702B 393B| 0 0 | 130k 230k|4257M 54.4G 5236k 4802M
26 74 0 0 0| 0 0 : 960M 0 : 960M 0 : 960M 0 : 960M 0 | 540B 354B| 0 0 | 130k 226k|4255M 54.4G 5236k 4802M
26 74 0 0 0| 0 5325B: 966M 0 : 966M 0 : 966M 0 : 966M 0 |5534B 4568B| 0 0 | 130k 226k|4260M 54.4G 5236k 4802M
10 24 66 0 0| 0 1494k: 314M 0 : 314M 0 : 313M 0 : 313M 0 |1448B 936B| 0 0 | 45k 85k|3741M 54.9G 5236k 4797M
4 0 96 0 0| 0 0 : 0 0 : 0 0 : 0 0 : 0 0 | 950B 927B| 0 0 |2682 4666 |3738M 54.9G 5236k 4796M
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
230 root 20 0 0,0m 0,0m 82,4 0,0 1:13.89 R kworker/u64:17+btrfs-endio
8452 root 20 0 0,0m 0,0m 82,4 0,0 0:12.04 R kworker/u64:1+btrfs-endio
225 root 20 0 0,0m 0,0m 81,5 0,0 1:16.71 R kworker/u64:12+btrfs-endio
8455 root 20 0 0,0m 0,0m 79,6 0,0 0:11.20 R kworker/u64:13+btrfs-endio
8451 root 20 0 0,0m 0,0m 78,7 0,0 0:11.45 R kworker/u64:0+btrfs-endio
6623 root 20 0 0,0m 0,0m 77,8 0,0 1:08.41 R kworker/u64:7+btrfs-endio
229 root 20 0 0,0m 0,0m 69,4 0,0 1:07.60 R kworker/u64:16+btrfs-endio
8453 root 20 0 0,0m 0,0m 64,8 0,0 0:11.53 R kworker/u64:4-btrfs-endio
200 root 20 0 0,0m 0,0m 60,2 0,0 1:15.99 R kworker/u64:3+btrfs-endio
fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32M --numjobs=16 --size=40G --runtime=600 --group_reporting
seqwrite: (g=0): rw=write, bs=(R) 32.0MiB-32.0MiB, (W) 32.0MiB-32.0MiB, (T) 32.0MiB-32.0MiB, ioengine=libaio, iodepth=1
...
fio-3.25
Starting 16 processes
Jobs: 1 (f=1): [_(4),W(1),_(11)][99.3%][w=4228MiB/s][w=132 IOPS][eta 00m:01s]
seqwrite: (groupid=0, jobs=16): err= 0: pid=8008: Sun Feb 7 12:41:13 2021
write: IOPS=147, BW=4716MiB/s (4945MB/s)(640GiB/138963msec); 0 zone resets
slat (usec): min=7456, max=43211, avg=15057.58, stdev=3315.32
clat (usec): min=1812, max=305361, avg=89982.19, stdev=45150.61
lat (msec): min=10, max=317, avg=105.04, stdev=45.01
clat percentiles (msec):
| 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 51], 20.00th=[ 57],
| 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 77], 60.00th=[ 88],
| 70.00th=[ 100], 80.00th=[ 121], 90.00th=[ 157], 95.00th=[ 186],
| 99.00th=[ 232], 99.50th=[ 247], 99.90th=[ 275], 99.95th=[ 284],
| 99.99th=[ 300]
bw ( MiB/s): min= 1663, max=13760, per=100.00%, avg=4856.21, stdev=122.70, samples=4289
iops : min= 50, max= 430, avg=151.63, stdev= 3.84, samples=4289
lat (msec) : 2=0.21%, 4=0.37%, 10=0.28%, 20=0.57%, 50=8.66%
lat (msec) : 100=60.27%, 250=29.25%, 500=0.40%
cpu : usr=1.05%, sys=13.02%, ctx=26614, majf=0, minf=188
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,20480,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=4716MiB/s (4945MB/s), 4716MiB/s-4716MiB/s (4945MB/s-4945MB/s), io=640GiB (687GB), run=138963-138963msec
--total-cpu-usage-- dsk/nvme0n1-dsk/nvme1n1-dsk/nvme2n1-dsk/nvme3n1-dsk/nvme4n1 net/enp34s0 ---paging-- ---system-- ------memory-usage-----
usr sys idl wai stl| read writ: read writ: read writ: read writ: read writ| recv send| in out | int csw | used free buff cach
3 0 97 0 0|6554B 49k: 0 0 : 0 0 : 0 0 : 0 0 |2031B 2851B| 0 0 |2278 4025 |3734M 54.9G 5236k 4797M
5 10 85 0 0| 0 0 : 0 794M: 0 793M: 0 799M: 0 797M| 967B 615B| 0 0 | 15k 4751 |4254M 54.4G 5236k 4841M
6 14 80 0 0| 0 2438k: 0 1214M: 0 1209M: 0 1211M: 0 1213M| 858k 20k| 0 0 | 23k 6884 |4279M 54.3G 5236k 4908M
6 13 81 0 0| 0 0 : 0 1207M: 0 1206M: 0 1206M: 0 1206M| 430B 518B| 0 0 | 21k 5138 |4275M 54.3G 5236k 4963M
6 13 81 0 0| 0 0 : 0 1208M: 0 1208M: 0 1207M: 0 1207M| 944B 575B| 0 0 | 21k 5227 |4274M 54.0G 5236k 5212M
5 13 76 5 0| 0 6554B: 0 1130M: 0 1139M: 0 1137M: 0 1131M| 263B 278B| 0 0 | 20k 5078 |4277M 54.0G 5236k 5258M
5 12 77 6 0| 0 1328k: 0 1044M: 0 1049M: 0 1045M: 0 1049M| 409B 391B| 0 0 | 19k 5013 |4274M 53.9G 5236k 5300M
5 13 82 0 0| 0 0 : 0 1196M: 0 1194M: 0 1194M: 0 1187M|1317B 1305B| 0 0 | 21k 5053 |4279M 53.9G 5236k 5349M
5 13 81 0 0| 0 0 : 0 1194M: 0 1189M: 0 1189M: 0 1190M| 817B 925B| 0 0 | 21k 5015 |4283M 53.8G 5236k 5396M
5 13 81 0 0| 0 0 : 0 1194M: 0 1194M: 0 1194M: 0 1195M| 911B 776B| 0 0 | 21k 5134 |4287M 53.8G 5236k 5443M
5 12 77 6 0| 13k 7602k: 0 1101M: 0 1110M: 0 1110M: 0 1110M|1524B 1808B| 0 0 | 20k 6273 |4287M 53.7G 5236k 5503M
5 12 81 2 0| 0 4096B: 0 1155M: 0 1146M: 0 1146M: 0 1146M| 815B 1051B| 0 0 | 20k 5018 |4287M 53.7G 5236k 5496M
6 13 81 0 0| 0 152k: 0 1195M: 0 1195M: 0 1195M: 0 1195M| 881B 883B| 0 0 | 22k 5606 |4139M 53.9G 5236k 5496M
6 13 81 0 0| 0 0 : 0 1193M: 0 1194M: 0 1194M: 0 1194M|1979B 1353B| 0 0 | 21k 5978 |4112M 53.9G 5236k 5496M
6 14 78 2 0| 146k 10M: 0 1161M: 0 1162M: 0 1163M: 0 1168M| 478B 550B| 0 0 | 21k 6177 |3957M 54.0G 5236k 5518M
4 4 91 0 0| 0 0 : 0 417M: 0 414M: 0 414M: 0 415M| 565B 536B| 0 0 |9007 4581 |3692M 54.3G 5236k 5513M
4 0 95 0 0| 0 2670k: 0 0 : 0 0 : 0 0 : 0 0 | 603B 411B| 0 0 |2982 5036 |3691M 54.3G 5236k 5513M
=== EXT4 ===
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -b 4096 /dev/md/pex
mount -o discard,noatime /dev/md/pex /media/test
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=32M --numjobs=16 --size=10G --runtime=600 --group_reporting
seqread: (g=0): rw=read, bs=(R) 32.0MiB-32.0MiB, (W) 32.0MiB-32.0MiB, (T) 32.0MiB-32.0MiB, ioengine=libaio, iodepth=1
...
fio-3.25
Starting 16 processes
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
seqread: Laying out IO file (1 file / 10240MiB)
Jobs: 4 (f=3): [R(1),f(1),_(8),R(1),_(2),R(1),_(2)][83.9%][r=6464MiB/s][r=202 IOPS][eta 00m:05s]
seqread: (groupid=0, jobs=16): err= 0: pid=4396: Sun Feb 7 12:23:06 2021
read: IOPS=201, BW=6442MiB/s (6754MB/s)(160GiB/25435msec)
slat (usec): min=819, max=56410, avg=2058.16, stdev=2696.00
clat (msec): min=3, max=195, avg=71.61, stdev=33.99
lat (msec): min=5, max=196, avg=73.66, stdev=34.00
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 28], 10.00th=[ 40], 20.00th=[ 47],
| 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 64], 60.00th=[ 74],
| 70.00th=[ 83], 80.00th=[ 96], 90.00th=[ 122], 95.00th=[ 142],
| 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 190], 99.95th=[ 192],
| 99.99th=[ 197]
bw ( MiB/s): min= 3199, max=15693, per=100.00%, avg=6851.21, stdev=188.82, samples=745
iops : min= 99, max= 490, avg=213.52, stdev= 5.89, samples=745
lat (msec) : 4=0.76%, 10=0.53%, 20=3.07%, 50=25.04%, 100=51.82%
lat (msec) : 250=18.79%
cpu : usr=0.03%, sys=2.69%, ctx=5456, majf=0, minf=131292
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
READ: bw=6442MiB/s (6754MB/s), 6442MiB/s-6442MiB/s (6754MB/s-6754MB/s), io=160GiB (172GB), run=25435-25435msec
Disk stats (read/write):
md127: ios=352238/0, merge=0/0, ticks=15656279/0, in_queue=15656279, util=99.73%, aggrios=144962/0, aggrmerge=20102/0, aggrticks=5061421/0, aggrin_queue=5061421, aggrutil=99.36%
nvme3n1: ios=143604/0, merge=24345/0, ticks=7298292/0, in_queue=7298292, util=99.36%
nvme4n1: ios=147106/0, merge=17993/0, ticks=7328713/0, in_queue=7328713, util=99.36%
nvme1n1: ios=143931/0, merge=19185/0, ticks=2776649/0, in_queue=2776649, util=99.32%
nvme2n1: ios=145208/0, merge=18885/0, ticks=2842032/0, in_queue=2842032, util=99.32%
fio --name=seqwrite --rw=write --direct=1 --ioengine=libaio --bs=32M --numjobs=16 --size=40G --runtime=600 --group_reporting
seqwrite: (g=0): rw=write, bs=(R) 32.0MiB-32.0MiB, (W) 32.0MiB-32.0MiB, (T) 32.0MiB-32.0MiB, ioengine=libaio, iodepth=1
...
fio-3.25
Starting 16 processes
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
seqwrite: Laying out IO file (1 file / 40960MiB)
Jobs: 2 (f=2): [_(7),W(1),_(7),W(1)][97.0%][w=4590MiB/s][w=143 IOPS][eta 00m:04s]
seqwrite: (groupid=0, jobs=16): err= 0: pid=5269: Sun Feb 7 12:28:44 2021
write: IOPS=157, BW=5025MiB/s (5269MB/s)(640GiB/130419msec); 0 zone resets
slat (usec): min=1212, max=52598, avg=2839.02, stdev=982.73
clat (msec): min=4, max=492, avg=91.23, stdev=51.72
lat (msec): min=5, max=496, avg=94.07, stdev=51.65
clat percentiles (msec):
| 1.00th=[ 12], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 55],
| 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 74], 60.00th=[ 87],
| 70.00th=[ 102], 80.00th=[ 125], 90.00th=[ 167], 95.00th=[ 203],
| 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 305], 99.95th=[ 317],
| 99.99th=[ 347]
bw ( MiB/s): min= 1600, max=16385, per=100.00%, avg=5449.54, stdev=187.86, samples=3840
iops : min= 50, max= 512, avg=170.16, stdev= 5.87, samples=3840
lat (msec) : 10=0.20%, 20=1.87%, 50=12.36%, 100=54.93%, 250=29.24%
lat (msec) : 500=1.40%
cpu : usr=1.12%, sys=1.84%, ctx=22030, majf=0, minf=207
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued rwts: total=0,20480,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: bw=5025MiB/s (5269MB/s), 5025MiB/s-5025MiB/s (5269MB/s-5269MB/s), io=640GiB (687GB), run=130419-130419msec
Disk stats (read/write):
md127: ios=0/1382241, merge=0/0, ticks=0/62804771, in_queue=62804771, util=100.00%, aggrios=0/448385, aggrmerge=0/39049, aggrticks=0/15539613, aggrin_queue=15540245, aggrutil=99.85%
nvme3n1: ios=0/453748, merge=0/31108, ticks=0/7916113, in_queue=7916448, util=96.61%
nvme4n1: ios=0/447268, merge=0/40096, ticks=0/6830916, in_queue=6831265, util=96.00%
nvme1n1: ios=0/442615, merge=0/47555, ticks=0/26496899, in_queue=26497931, util=99.85%
nvme2n1: ios=0/449911, merge=0/37438, ticks=0/20914527, in_queue=20915336, util=99.11%