* Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: Lukáš Doktor @ 2022-05-03  7:43 UTC
  To: longpeng2, Paolo Bonzini, qemu-devel

Hello Mike, Paolo, others,

in my perf pipeline I noticed a regression bisected to commit f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94 ("thread-posix: remove the posix semaphore support"), and I'd like to ask you to verify that it is the cause and possibly consider fixing it. The regression is visible, reproducible, and clearly bisectable to this commit in the following 2 scenarios:

1. fio write 4KiB using the nbd ioengine on localhost
2. fio read 4KiB using #cpu jobs and iodepth=8 on a rotational disk, using a qcow2 image and the default virt-install disk config:

    <disk type="file" device="disk">
      <driver name="qemu" type="qcow2"/>
      <source file="/var/lib/libvirt/images/RHEL-8.4.0-20210503.1-virtlab506.DefaultLibvirt0.qcow2"/>
      <target dev="vda" bus="virtio"/>
    </disk>

but smaller regressions can be seen in other scenarios as well since this commit. You can find the reports from the bisections here:

https://ldoktor.github.io/tmp/RedHat-virtlab506/v7.0.0/RedHat-virtlab506-f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94-RHEL-8.4.0-20210503.1-1.html
https://ldoktor.github.io/tmp/RedHat-virtlab506/v7.0.0/RedHat-virtlab506-f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94-RHEL-8.4.0-20210503.1-2.html

Regards,
Lukáš

* Re: Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: Stefan Hajnoczi @ 2022-05-05 10:09 UTC
  To: longpeng2, Paolo Bonzini; +Cc: qemu-devel, Lukáš Doktor

On Tue, May 03, 2022 at 09:43:15AM +0200, Lukáš Doktor wrote:
> Hello Mike, Paolo, others,
> 
> in my perf pipeline I noticed a regression bisected to commit f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94 ("thread-posix: remove the posix semaphore support"), and I'd like to ask you to verify that it is the cause and possibly consider fixing it. The regression is visible, reproducible, and clearly bisectable to this commit in the following 2 scenarios:

I can't parse the commit message for
f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94, so it's not 100% clear to me
why it was necessary to remove sem_*() calls.

util/thread-pool.c uses qemu_sem_*() to notify worker threads when work
becomes available. It makes sense that this operation is
performance-critical and that's why the benchmark regressed.

Maybe thread-pool.c can use qemu_cond_*() instead of qemu_sem_*(). That
avoids the extra mutex (we already have pool->lock) and counter (we
already have pool->request_list)?
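
To make the idea concrete, here is a rough standalone sketch of that
approach in plain pthreads (not the actual thread-pool.c code; the field
names are only loosely borrowed from it):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct work {
    struct work *next;
    void (*fn)(struct work *w);
};

struct pool {
    pthread_mutex_t lock;        /* the single mutex, like pool->lock */
    pthread_cond_t work_cond;    /* replaces the QemuSemaphore */
    struct work *request_list;
    bool stopping;
};

static void *worker(void *opaque)
{
    struct pool *p = opaque;

    pthread_mutex_lock(&p->lock);
    while (!p->stopping) {
        /* The queue itself acts as the counter, so no separate
         * semaphore count is needed. */
        while (p->request_list == NULL && !p->stopping) {
            pthread_cond_wait(&p->work_cond, &p->lock);
        }
        if (p->request_list != NULL) {
            struct work *req = p->request_list;
            p->request_list = req->next;
            pthread_mutex_unlock(&p->lock);
            req->fn(req);    /* run the request outside the lock */
            pthread_mutex_lock(&p->lock);
        }
    }
    pthread_mutex_unlock(&p->lock);
    return NULL;
}

static void submit(struct pool *p, struct work *req)
{
    pthread_mutex_lock(&p->lock);
    req->next = p->request_list;
    p->request_list = req;
    pthread_cond_signal(&p->work_cond);    /* wake one idle worker */
    pthread_mutex_unlock(&p->lock);
}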

> 
> 1. fio write 4KiB using the nbd ioengine on localhost
> 2. fio read 4KiB using #cpu jobs and iodepth=8 on a rotational disk, using a qcow2 image and the default virt-install disk config:
> 
>     <disk type="file" device="disk">
>       <driver name="qemu" type="qcow2"/>
>       <source file="/var/lib/libvirt/images/RHEL-8.4.0-20210503.1-virtlab506.DefaultLibvirt0.qcow2"/>
>       <target dev="vda" bus="virtio"/>
>     </disk>
> 
> but smaller regressions can be seen under other scenarios as well since this commit. You can find the report from bisections here:
> 
> https://ldoktor.github.io/tmp/RedHat-virtlab506/v7.0.0/RedHat-virtlab506-f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94-RHEL-8.4.0-20210503.1-1.html
> https://ldoktor.github.io/tmp/RedHat-virtlab506/v7.0.0/RedHat-virtlab506-f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94-RHEL-8.4.0-20210503.1-2.html
> 
> Regards,
> Lukáš

* Re: Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: longpeng2 @ 2022-05-05 12:34 UTC
  To: Stefan Hajnoczi; +Cc: qemu-devel, Lukáš Doktor, Paolo Bonzini

Hi Stefan,

On 2022/5/5 18:09, Stefan Hajnoczi wrote:
> On Tue, May 03, 2022 at 09:43:15AM +0200, Lukáš Doktor wrote:
>> Hello Mike, Paolo, others,
>>
>> in my perf pipeline I noticed a regression bisected to commit f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94 ("thread-posix: remove the posix semaphore support"), and I'd like to ask you to verify that it is the cause and possibly consider fixing it. The regression is visible, reproducible, and clearly bisectable to this commit in the following 2 scenarios:
> I can't parse the commit message for
> f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94, so it's not 100% clear to me
> why it was necessary to remove sem_*() calls.

We can find the previous discussion here:

[1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg870174.html

[2] https://www.mail-archive.com/qemu-devel@nongnu.org/msg870409.html


Because sem_timedwait() only supports absolute time, it would be affected
if the system time changes. Another reason to remove sem_*() was to make
the code much neater.
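
For illustration (standalone C, not QEMU code), the difference looks
like this:

#include <pthread.h>
#include <semaphore.h>
#include <time.h>

/* sem_timedwait() takes an absolute CLOCK_REALTIME deadline... */
static int sem_wait_one_second(sem_t *sem)
{
    struct timespec deadline;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += 1;
    /* ...so if the wall clock is stepped backwards meanwhile, this can
     * block far longer than one second (or time out early if it is
     * stepped forwards). */
    return sem_timedwait(sem, &deadline);
}

/* A condition variable can be bound to CLOCK_MONOTONIC instead, which
 * is immune to wall-clock changes; POSIX semaphores have no such knob. */
static void cond_init_monotonic(pthread_cond_t *cond)
{
    pthread_condattr_t attr;

    pthread_condattr_init(&attr);
    pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    pthread_cond_init(cond, &attr);
    pthread_condattr_destroy(&attr);
}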


> util/thread-pool.c uses qemu_sem_*() to notify worker threads when work
> becomes available. It makes sense that this operation is
> performance-critical and that's why the benchmark regressed.
>
> Maybe thread-pool.c can use qemu_cond_*() instead of qemu_sem_*(). That
> avoids the extra mutex (we already have pool->lock) and counter (we
> already have pool->request_list)?

* Re: Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: Daniel P. Berrangé @ 2022-05-05 12:44 UTC
  To: Stefan Hajnoczi
  Cc: longpeng2, Paolo Bonzini, qemu-devel, Lukáš Doktor

On Thu, May 05, 2022 at 11:09:08AM +0100, Stefan Hajnoczi wrote:
> On Tue, May 03, 2022 at 09:43:15AM +0200, Lukáš Doktor wrote:
> > Hello Mike, Paolo, others,
> > 
> > in my perf pipeline I noticed a regression bisected to commit f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94 ("thread-posix: remove the posix semaphore support"), and I'd like to ask you to verify that it is the cause and possibly consider fixing it. The regression is visible, reproducible, and clearly bisectable to this commit in the following 2 scenarios:
> 
> I can't parse the commit message for
> f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94, so it's not 100% clear to me
> why it was necessary to remove sem_*() calls.
> 
> util/thread-pool.c uses qemu_sem_*() to notify worker threads when work
> becomes available. It makes sense that this operation is
> performance-critical and that's why the benchmark regressed.

Doh, I questioned whether the change would have a performance impact,
and it wasn't thought to be used in perf-critical places:

   https://www.mail-archive.com/qemu-devel@nongnu.org/msg870737.html

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

* Re: Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: Paolo Bonzini @ 2022-05-05 13:27 UTC
  To: Daniel P. Berrangé, Stefan Hajnoczi
  Cc: longpeng2, qemu-devel, Lukáš Doktor

On 5/5/22 14:44, Daniel P. Berrangé wrote:
>> util/thread-pool.c uses qemu_sem_*() to notify worker threads when work
>> becomes available. It makes sense that this operation is
>> performance-critical and that's why the benchmark regressed.
>
> Doh, I questioned whether the change would have a performance impact,
> and it wasn't thought to be used in perf-critical places.

The expectation was that there would be no contention, and thus no
overhead on top of the pool->lock that exists anyway, but that was
optimistic.

Lukáš, can you run a benchmark with this condvar implementation that was 
suggested by Stefan:

https://lore.kernel.org/qemu-devel/20220505131346.823941-1-pbonzini@redhat.com/raw

?

If it still regresses, we can either revert the patch or look at a 
different implementation (even getting rid of the global queue is an 
option).

Thanks,

Paolo

* Re: Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: Lukáš Doktor @ 2022-05-06  4:30 UTC
  To: Paolo Bonzini, Daniel P. Berrangé, Stefan Hajnoczi
  Cc: longpeng2, qemu-devel

Hello all,

thank you for the responses. I ran 3 runs per commit, each with 5 iterations of fio-nbd, using:

f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94 + Stefan's commit
d7482ffe9756919531307330fd1c6dbec66e8c32

Using the regressed f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94 as the baseline, the relative percentage results were:

       | run 1 | run 2 | run 3
f9f    |   0.0 |  -2.8 |   0.6
stefan |  -3.1 |  -1.2 |  -2.2
d74    |   7.2 |   9.1 |   8.2

Not sure whether Stefan's commit was supposed to be applied on top of the f9fc8932 commit, but at least for fio-nbd 4k writes it slightly worsened the situation.

Do you want me to try fio inside the guest as well, or is this fio-nbd check sufficient for now?

Also let me briefly share the details about the execution:

---

mkdir -p /var/lib/runperf/runperf-nbd/
truncate -s 256M /var/lib/runperf/runperf-nbd//disk.img
nohup qemu-nbd -t -k /var/lib/runperf/runperf-nbd//socket -f raw /var/lib/runperf/runperf-nbd//disk.img &> $(mktemp /var/lib/runperf/runperf-nbd//qemu_nbd_XXXX.log) & echo $! >> /var/lib/runperf/runperf-nbd//kill_pids
for PID in $(cat /var/lib/runperf/runperf-nbd//kill_pids); do disown -h $PID; done
export TERM=xterm-256color
true
mkdir -p /var/lib/runperf/runperf-nbd/
cat > /var/lib/runperf/runperf-nbd/nbd.fio << \Gr1UaS
# To use fio to test nbdkit:
#
# nbdkit -U - memory size=256M --run 'export unixsocket; fio examples/nbd.fio'
#
# To use fio to test qemu-nbd:
#
# rm -f /tmp/disk.img /tmp/socket
# truncate -s 256M /tmp/disk.img
# export target=/tmp/socket
# qemu-nbd -t -k $target -f raw /tmp/disk.img &
# fio examples/nbd.fio
# killall qemu-nbd

[global]
bs = $@
runtime = 30
ioengine = nbd
iodepth = 32
direct = 1
sync = 0
time_based = 1
clocksource = gettimeofday
ramp_time = 5
write_bw_log = fio
write_iops_log = fio
write_lat_log = fio
log_avg_msec = 1000
write_hist_log = fio
log_hist_msec = 10000
# log_hist_coarseness = 4 # 76 bins

rw = $@
uri=nbd+unix:///?socket=/var/lib/runperf/runperf-nbd/socket
# Starting from nbdkit 1.14 the following will work:
#uri=${uri}

[job0]
offset=0

[job1]
offset=64m

[job2]
offset=128m

[job3]
offset=192m

Gr1UaS

benchmark_bin=/usr/local/bin/fio pbench-fio  --block-sizes=4 --job-file=/var/lib/runperf/runperf-nbd/nbd.fio --numjobs=4 --runtime=60 --samples=5 --test-types=write --clients=$WORKER_IP

---

I am using pbench to drive the execution, but you can simply replace the "$@" placeholders (block size and rw type) in the produced "/var/lib/runperf/runperf-nbd/nbd.fio" and run it directly using fio.

Regards,
Lukáš

* Re: Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: Paolo Bonzini @ 2022-05-06  8:42 UTC
  To: Lukáš Doktor, Daniel P. Berrangé, Stefan Hajnoczi
  Cc: longpeng2, qemu-devel

On 5/6/22 06:30, Lukáš Doktor wrote:
> Also let me briefly share the details about the execution:

Thanks, this is super useful!

I got very similar results to yours:

QEMU 6.2			bw=1132MiB/s
QEMU 7.0			bw=1046MiB/s
QEMU 7.0 + patch		bw=1012MiB/s
QEMU 7.0 + tweaked patch	bw=1077MiB/s

"tweaked patch" is moving qemu_cond_signal after qemu_mutex_unlock.
It's better than QemuSemaphore in QEMU 7.0 but still not as good as
the original.  /me thinks
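
Roughly, the two orderings look like this (a plain-pthreads sketch, not
the actual patch):

#include <pthread.h>

struct queue {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int pending;
};

/* Signalling while holding the lock: the woken worker may immediately
 * block again on the mutex that the signaller still owns. */
static void submit_signal_inside(struct queue *q)
{
    pthread_mutex_lock(&q->lock);
    q->pending++;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

/* Signalling after unlocking: the woken worker can grab the lock at
 * once.  This is safe because the predicate (q->pending) was already
 * updated under the lock. */
static void submit_signal_outside(struct queue *q)
{
    pthread_mutex_lock(&q->lock);
    q->pending++;
    pthread_mutex_unlock(&q->lock);
    pthread_cond_signal(&q->cond);
}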

Paolo


* Re: Fio regression caused by f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
From: Paolo Bonzini @ 2022-05-06 11:30 UTC
  To: Lukáš Doktor, Daniel P. Berrangé, Stefan Hajnoczi
  Cc: longpeng2, qemu-devel

On 5/6/22 06:30, Lukáš Doktor wrote:
> Hello all,
> 
> thank you for the responses. I ran 3 runs per commit, each with 5 iterations of fio-nbd, using:
> 
> f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94
> f9fc8932b11f3bcf2a2626f567cb6fdd36a33a94 + Stefan's commit
> d7482ffe9756919531307330fd1c6dbec66e8c32

Ok, there's another simple change that can be made to bring performance
back to 6.2 levels, actually a bit better.  I'll post patches soon.
Here are 4 fio runs:

6.2:
    iops        : min=58051, max=62260, avg=60282.57, stdev=1081.18, samples=30
     clat percentiles (usec):   1.00th=[  490],   99.99th=[  775]
    iops        : min=59401, max=61290, avg=60651.27, stdev=468.24, samples=30
     clat percentiles (usec):   1.00th=[  490],   99.99th=[  717]
    iops        : min=59583, max=60816, avg=60353.43, stdev=282.69, samples=30
     clat percentiles (usec):   1.00th=[  490],   99.99th=[  701]
    iops        : min=58099, max=60713, avg=59739.53, stdev=755.49, samples=30
     clat percentiles (usec):   1.00th=[  494],   99.99th=[  717]


patched:
    iops        : min=60616, max=62522, avg=61654.37, stdev=555.67, samples=30
     clat percentiles (usec):   1.00th=[  474],   99.99th=[ 1303]
    iops        : min=61841, max=63600, avg=62878.47, stdev=442.40, samples=30
     clat percentiles (usec):   1.00th=[  465],   99.99th=[  685]
    iops        : min=62976, max=63910, avg=63531.60, stdev=261.05, samples=30
     clat percentiles (usec):   1.00th=[  461],   99.99th=[  693]
    iops        : min=60803, max=63623, avg=62653.37, stdev=808.76, samples=30
     clat percentiles (usec):   1.00th=[  465],   99.99th=[  685]


I also played a bit with direct wakeup of threads using a QemuEvent per thread.
Peak performance is higher (the low percentiles are better), but the problem is
that it doesn't necessarily pick the most effective thread for wakeup, resulting
in oscillations:

    iops        : min=60971, max=65726, avg=63771.93, stdev=1381.06, samples=30
     clat percentiles (usec): 1.00th=[  457],  99.99th=[  685]
    iops        : min=57537, max=64914, avg=63694.37, stdev=1809.40, samples=30
     clat percentiles (usec): 1.00th=[  461],  99.99th=[  693]
    iops        : min=58175, max=64711, avg=61277.80, stdev=2216.05, samples=30
     clat percentiles (usec): 1.00th=[  465],  99.99th=[  685]
    iops        : min=56349, max=63938, avg=58442.33, stdev=2012.54, samples=30
     clat percentiles (usec): 1.00th=[  469],  99.99th=[  668]
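
For reference, the shape of that experiment (a plain-pthreads sketch
rather than the actual QemuEvent code; the round-robin pick below just
stands in for the real wakeup policy):

#include <pthread.h>
#include <stdbool.h>

#define NWORKERS 4

/* One event per worker, emulating a QemuEvent with mutex + condvar. */
struct wevent {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    bool set;
};

static struct wevent events[NWORKERS];
static unsigned next_worker;    /* round-robin cursor, single submitter */

static void wevent_init(struct wevent *ev)
{
    pthread_mutex_init(&ev->lock, NULL);
    pthread_cond_init(&ev->cond, NULL);
    ev->set = false;
}

static void wevent_set(struct wevent *ev)
{
    pthread_mutex_lock(&ev->lock);
    ev->set = true;
    pthread_mutex_unlock(&ev->lock);
    pthread_cond_signal(&ev->cond);
}

static void wevent_wait(struct wevent *ev)
{
    pthread_mutex_lock(&ev->lock);
    while (!ev->set) {
        pthread_cond_wait(&ev->cond, &ev->lock);
    }
    ev->set = false;    /* consume the event before returning */
    pthread_mutex_unlock(&ev->lock);
}

/* The submitter must pick a worker without knowing which one is the
 * cheapest to wake (cache-hot, about to go idle, etc.), hence the
 * oscillations above. */
static void wake_one_worker(void)
{
    wevent_set(&events[next_worker]);
    next_worker = (next_worker + 1) % NWORKERS;
}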

I'll go for the simple one.

Paolo