* [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test
@ 2015-11-02  5:24 hzwulibin
  2015-11-02  6:11 ` Chen, Xiaoxi
  0 siblings, 1 reply; 5+ messages in thread
From: hzwulibin @ 2015-11-02  5:24 UTC (permalink / raw)
  To: ceph-devel, ceph-users

Hi, 
In the same environment, after running a test script, the IO latency (obtained from sudo ceph --admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump)
increases from about 4ms to 7.3ms.
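
For reference, here is a minimal sketch of how that number can be read out of the admin socket (assuming the librbd client exports a wr_latency / aio_wr_latency perf counter; the exact counter name may differ by version, and sum/avgcount give the average latency in seconds):

# pretty-print the client's perf counters and show the write-latency entries
sudo ceph --admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump \
    | python -m json.tool | grep -A 3 wr_latency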

qemu version: 2.1.2 (Debian package)
kernel: 3.10.45-openstack-amd64
system: Debian 7.8
ceph: 0.94.5
VM CPU number: 4 (cpu MHz: 2599.998)
VM memory size: 16GB
9 OSD storage servers, each with 4 SSD OSDs, 36 OSDs in total.

Test scripts in VM:
# cat reproduce.sh
#!/bin/bash

times=20
for((i=1;i<=$times;i++))
do
    tmpdate=`date "+%F-%T"`
    echo "=======================$tmpdate($i/$times)======================="
    tmp=$((i%2))
    if [[ $tmp -eq 0 ]];then
        echo "############### fio /root/vdb.cfg ###############"
        fio /root/vdb.cfg
    else
        echo "############### fio /root/vdc.cfg ###############"
        fio /root/vdc.cfg
    fi
done


tmpdate=`date "+%F-%T"`
echo "############### [$tmpdate] fio /root/vde.cfg ###############"
fio /root/vde.cfg


# cat vdb.cfg 
[global]
rw=randwrite
direct=1
numjobs=64
ioengine=sync
bsrange=4k-4k
runtime=180
group_reporting

[disk01]
filename=/dev/vdb


# cat vdc.cfg 
[global]
rw=randwrite
direct=1
numjobs=64
ioengine=sync
bsrange=4k-4k
runtime=180
group_reporting

[disk01]
filename=/dev/vdc

# cat vdd.cfg 
[global]
rw=randwrite
direct=1
numjobs=64
ioengine=sync
bsrange=4k-4k
runtime=180
group_reporting

[disk01]
filename=/dev/vdd

# cat vde.cfg 
[global]
rw=randwrite
direct=1
numjobs=64
ioengine=sync
bsrange=4k-4k
runtime=180
group_reporting

[disk01]
filename=/dev/vde

After running reproduce.sh, the disks' IOPS in the VM drop from 12k to 5k and the latency increases from 4ms to 7.3ms.

run steps:
1. create a VM
2. create four volumes and attach them to the VM
3. sh reproduce.sh
4. while reproduce.sh is running, run "fio vdd.cfg" or "fio vde.cfg" to check the performance

After reproduce.sh finishes, the performance stays down.


Has anyone seen the same problem, or does anyone have any ideas about it?

Thanks!
--------------
hzwulibin
2015-11-02

^ permalink raw reply	[flat|nested] 5+ messages in thread

* RE: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test
  2015-11-02  5:24 [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test hzwulibin
@ 2015-11-02  6:11 ` Chen, Xiaoxi
  2015-11-03  0:43   ` hzwulibin-Re5JQEeQqe8AvxtiuMwx3w
                     ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Chen, Xiaoxi @ 2015-11-02  6:11 UTC (permalink / raw)
  To: hzwulibin, ceph-devel, ceph-users

Pre-allocate the volume by running "dd" across the entire RBD before you do any performance test :).

In this case, you may want to re-create the RBD, pre-allocate it, and try again.
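
A minimal sketch of that pre-allocation from inside the VM, assuming the volume is attached as /dev/vdb (adjust the device name and block size to your setup; this overwrites the whole device):

# fill the entire block device once before benchmarking; destroys any existing data
dd if=/dev/zero of=/dev/vdb bs=4M oflag=direct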

> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> owner@vger.kernel.org] On Behalf Of hzwulibin
> Sent: Monday, November 2, 2015 1:24 PM
> To: ceph-devel; ceph-users
> Subject: [performance] why rbd_aio_write latency increase from 4ms to
> 7.3ms after the same test
> 
> Hi,
> In the same environment, after running a test script, the IO latency (obtained from sudo ceph --
> admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump) increases
> from about 4ms to 7.3ms.
> 
> qemu version: debian 2.1.2
> kernel:3.10.45-openstack-amd64
> system: debian 7.8
> ceph: 0.94.5
> VM CPU number: 4  (cpu MHz : 2599.998)
> VM memory size: 16GB
> 9 OSD storage servers, with 4 SSD OSD on each, total 36 OSDs.
> 
> Test scripts in VM:
> # cat reproduce.sh
> #!/bin/bash
> 
> times=20
> for((i=1;i<=$times;i++))
> do
>     tmpdate=`date "+%F-%T"`
>     echo "=======================$tmpdate($i/$times)======================="
>     tmp=$((i%2))
>     if [[ $tmp -eq 0 ]];then
>         echo "############### fio /root/vdb.cfg ###############"
>         fio /root/vdb.cfg
>     else
>         echo "############### fio /root/vdc.cfg ###############"
>         fio /root/vdc.cfg
>     fi
> done
> 
> 
> tmpdate=`date "+%F-%T"`
> echo "############### [$tmpdate] fio /root/vde.cfg ###############"
> fio /root/vde.cfg
> 
> 
> # cat vdb.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdb
> 
> 
> # cat vdc.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdc
> 
> # cat vdd.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdd
> 
> # cat vde.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vde
> 
> After running reproduce.sh, the disks' IOPS in the VM drop from
> 12k to 5k and the latency increases from 4ms to 7.3ms.
> 
> run steps:
> 1. create a VM
> 2. create four volumes and attach them to the VM
> 3. sh reproduce.sh
> 4. while reproduce.sh is running, run "fio vdd.cfg" or "fio vde.cfg" to check the
> performance
> 
> After reproduce.sh finishes, the performance stays down.
> 
> 
> Has anyone seen the same problem, or does anyone have any ideas about it?
> 
> Thanks!
> --------------
> hzwulibin
> 2015-11-02

^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test
  2015-11-02  6:11 ` Chen, Xiaoxi
@ 2015-11-03  0:43   ` hzwulibin-Re5JQEeQqe8AvxtiuMwx3w
  2015-11-03  0:46   ` hzwulibin
  2015-11-03  0:47   ` hzwulibin
  2 siblings, 0 replies; 5+ messages in thread
From: hzwulibin-Re5JQEeQqe8AvxtiuMwx3w @ 2015-11-03  0:43 UTC (permalink / raw)
  To: Chen, Xiaoxi, ceph-devel-u79uwXL29TY76Z2rM5mHXA,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw



Hi,

Thank you, that makes sense for testing, but I'm afraid it doesn't apply in my case.
Even when I test on a volume that has already been tested many times, the IOPS does not go back up.
In other words, the VM seems broken: its IOPS never recovers.

Thanks!



hzwulibin@gmail.com
 
From: Chen, Xiaoxi
Date: 2015-11-02 14:11
To: hzwulibin; ceph-devel; ceph-users
Subject: RE: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test
Pre-allocate the volume by running "dd" across the entire RBD before you do any performance test :).

In this case, you may want to re-create the RBD, pre-allocate it, and try again.
 
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> owner@vger.kernel.org] On Behalf Of hzwulibin
> Sent: Monday, November 2, 2015 1:24 PM
> To: ceph-devel; ceph-users
> Subject: [performance] why rbd_aio_write latency increase from 4ms to
> 7.3ms after the same test
> 
> Hi,
> In the same environment, after running a test script, the IO latency (obtained from sudo ceph --
> admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump) increases
> from about 4ms to 7.3ms.
> 
> qemu version: debian 2.1.2
> kernel:3.10.45-openstack-amd64
> system: debian 7.8
> ceph: 0.94.5
> VM CPU number: 4  (cpu MHz : 2599.998)
> VM memory size: 16GB
> 9 OSD storage servers, with 4 SSD OSD on each, total 36 OSDs.
> 
> Test scripts in VM:
> # cat reproduce.sh
> #!/bin/bash
> 
> times=20
> for((i=1;i<=$times;i++))
> do
>     tmpdate=`date "+%F-%T"`
>     echo "=======================$tmpdate($i/$times)======================="
>     tmp=$((i%2))
>     if [[ $tmp -eq 0 ]];then
>         echo "############### fio /root/vdb.cfg ###############"
>         fio /root/vdb.cfg
>     else
>         echo "############### fio /root/vdc.cfg ###############"
>         fio /root/vdc.cfg
>     fi
> done
> 
> 
> tmpdate=`date "+%F-%T"`
> echo "############### [$tmpdate] fio /root/vde.cfg ###############"
> fio /root/vde.cfg
> 
> 
> # cat vdb.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdb
> 
> 
> # cat vdc.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdc
> 
> # cat vdd.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdd
> 
> # cat vde.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vde
> 
> After running reproduce.sh, the disks' IOPS in the VM drop from
> 12k to 5k and the latency increases from 4ms to 7.3ms.
> 
> run steps:
> 1. create a VM
> 2. create four volumes and attach them to the VM
> 3. sh reproduce.sh
> 4. while reproduce.sh is running, run "fio vdd.cfg" or "fio vde.cfg" to check the
> performance
> 
> After reproduce.sh finishes, the performance stays down.
> 
> 
> Has anyone seen the same problem, or does anyone have any ideas about it?
> 
> Thanks!
> --------------
> hzwulibin
> 2015-11-02


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re:  RE: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test
  2015-11-02  6:11 ` Chen, Xiaoxi
  2015-11-03  0:43   ` hzwulibin-Re5JQEeQqe8AvxtiuMwx3w
@ 2015-11-03  0:46   ` hzwulibin
  2015-11-03  0:47   ` hzwulibin
  2 siblings, 0 replies; 5+ messages in thread
From: hzwulibin @ 2015-11-03  0:46 UTC (permalink / raw)
  To: ceph-devel

Hi,

Thank you, that makes sense for testing, but I'm afraid it doesn't apply in my case.
Even when I test on a volume that has already been tested many times, the IOPS does not go back up.
In other words, the VM seems broken: its IOPS never recovers.

Thanks!

------------------				 
hzwulibin
2015-11-03

-------------------------------------------------------------
From: "Chen, Xiaoxi" <xiaoxi.chen@intel.com>
Date: 2015-11-02 14:11
To: hzwulibin, ceph-devel, ceph-users
Cc:
Subject: RE: [performance] why rbd_aio_write latency increase from 4ms to
 7.3ms after the same test

Pre-allocate the volume by running "dd" across the entire RBD before you do any performance test :).

In this case, you may want to re-create the RBD, pre-allocate it, and try again.

> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> owner@vger.kernel.org] On Behalf Of hzwulibin
> Sent: Monday, November 2, 2015 1:24 PM
> To: ceph-devel; ceph-users
> Subject: [performance] why rbd_aio_write latency increase from 4ms to
> 7.3ms after the same test
> 
> Hi,
> In the same environment, after running a test script, the IO latency (obtained from sudo ceph --
> admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump) increases
> from about 4ms to 7.3ms.
> 
> qemu version: debian 2.1.2
> kernel:3.10.45-openstack-amd64
> system: debian 7.8
> ceph: 0.94.5
> VM CPU number: 4  (cpu MHz : 2599.998)
> VM memory size: 16GB
> 9 OSD storage servers, with 4 SSD OSD on each, total 36 OSDs.
> 
> Test scripts in VM:
> # cat reproduce.sh
> #!/bin/bash
> 
> times=20
> for((i=1;i<=$times;i++))
> do
>     tmpdate=`date "+%F-%T"`
>     echo "=======================$tmpdate($i/$times)======================="
>     tmp=$((i%2))
>     if [[ $tmp -eq 0 ]];then
>         echo "############### fio /root/vdb.cfg ###############"
>         fio /root/vdb.cfg
>     else
>         echo "############### fio /root/vdc.cfg ###############"
>         fio /root/vdc.cfg
>     fi
> done
> 
> 
> tmpdate=`date "+%F-%T"`
> echo "############### [$tmpdate] fio /root/vde.cfg ###############"
> fio /root/vde.cfg
> 
> 
> # cat vdb.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdb
> 
> 
> # cat vdc.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdc
> 
> # cat vdd.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdd
> 
> # cat vde.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vde
> 
> After running reproduce.sh, the disks' IOPS in the VM drop from
> 12k to 5k and the latency increases from 4ms to 7.3ms.
> 
> run steps:
> 1. create a VM
> 2. create four volumes and attach them to the VM
> 3. sh reproduce.sh
> 4. while reproduce.sh is running, run "fio vdd.cfg" or "fio vde.cfg" to check the
> performance
> 
> After reproduce.sh finishes, the performance stays down.
> 
> 
> Has anyone seen the same problem, or does anyone have any ideas about it?
> 
> Thanks!
> --------------
> hzwulibin
> 2015-11-02


^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re:  RE: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test
  2015-11-02  6:11 ` Chen, Xiaoxi
  2015-11-03  0:43   ` hzwulibin-Re5JQEeQqe8AvxtiuMwx3w
  2015-11-03  0:46   ` hzwulibin
@ 2015-11-03  0:47   ` hzwulibin
  2 siblings, 0 replies; 5+ messages in thread
From: hzwulibin @ 2015-11-03  0:47 UTC (permalink / raw)
  To: ceph-devel

Hi,

Thank you, that makes sense for testing, but I'm afraid it doesn't apply in my case.
Even when I test on a volume that has already been tested many times, the IOPS does not go back up.
In other words, the VM seems broken: its IOPS never recovers.

Thanks!

------------------				 
hzwulibin
2015-11-03

-------------------------------------------------------------
From: "Chen, Xiaoxi" <xiaoxi.chen@intel.com>
Date: 2015-11-02 14:11
To: hzwulibin, ceph-devel, ceph-users
Cc:
Subject: RE: [performance] why rbd_aio_write latency increase from 4ms to
 7.3ms after the same test

Pre-allocate the volume by running "dd" across the entire RBD before you do any performance test :).

In this case, you may want to re-create the RBD, pre-allocate it, and try again.

> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-
> owner@vger.kernel.org] On Behalf Of hzwulibin
> Sent: Monday, November 2, 2015 1:24 PM
> To: ceph-devel; ceph-users
> Subject: [performance] why rbd_aio_write latency increase from 4ms to
> 7.3ms after the same test
> 
> Hi,
> In the same environment, after running a test script, the IO latency (obtained from sudo ceph --
> admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump) increases
> from about 4ms to 7.3ms.
> 
> qemu version: debian 2.1.2
> kernel:3.10.45-openstack-amd64
> system: debian 7.8
> ceph: 0.94.5
> VM CPU number: 4  (cpu MHz : 2599.998)
> VM memory size: 16GB
> 9 OSD storage servers, with 4 SSD OSD on each, total 36 OSDs.
> 
> Test scripts in VM:
> # cat reproduce.sh
> #!/bin/bash
> 
> times=20
> for((i=1;i<=$times;i++))
> do
>     tmpdate=`date "+%F-%T"`
>     echo "=======================$tmpdate($i/$times)======================="
>     tmp=$((i%2))
>     if [[ $tmp -eq 0 ]];then
>         echo "############### fio /root/vdb.cfg ###############"
>         fio /root/vdb.cfg
>     else
>         echo "############### fio /root/vdc.cfg ###############"
>         fio /root/vdc.cfg
>     fi
> done
> 
> 
> tmpdate=`date "+%F-%T"`
> echo "############### [$tmpdate] fio /root/vde.cfg ###############"
> fio /root/vde.cfg
> 
> 
> # cat vdb.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdb
> 
> 
> # cat vdc.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdc
> 
> # cat vdd.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vdd
> 
> # cat vde.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
> 
> [disk01]
> filename=/dev/vde
> 
> After running reproduce.sh, the disks' IOPS in the VM drop from
> 12k to 5k and the latency increases from 4ms to 7.3ms.
> 
> run steps:
> 1. create a VM
> 2. create four volumes and attach them to the VM
> 3. sh reproduce.sh
> 4. while reproduce.sh is running, run "fio vdd.cfg" or "fio vde.cfg" to check the
> performance
> 
> After reproduce.sh finishes, the performance stays down.
> 
> 
> Has anyone seen the same problem, or does anyone have any ideas about it?
> 
> Thanks!
> --------------
> hzwulibin
> 2015-11-02


^ permalink raw reply	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2015-11-03  0:47 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-11-02  5:24 [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test hzwulibin
2015-11-02  6:11 ` Chen, Xiaoxi
2015-11-03  0:43   ` hzwulibin-Re5JQEeQqe8AvxtiuMwx3w
2015-11-03  0:46   ` hzwulibin
2015-11-03  0:47   ` hzwulibin
