* [Announce] The progress of KeyValueStore in Firefly
@ 2014-02-28  2:45 Haomai Wang
       [not found] ` <CACJqLybuA48Jnz6Qwc7cs2kHJO30C4GwazXi8yGp8ZhvfFc2ZQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Haomai Wang @ 2014-02-28  2:45 UTC (permalink / raw)
  To: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

Hi all,

Last release I proposed a KeyValueStore prototype (background at
http://sebastien-han.fr/blog/2013/12/02/ceph-performance-interesting-things-going-on),
which included some performance results and open problems. Now I'd like
to give an update on KeyValueStore.

During this release cycle KeyValueStore has been chasing FileStore's
performance. Things have now gone further: KeyValueStore does better in
the rbd case (partial writes).

I tested KeyValueStore against FileStore on a single OSD backed by a
Samsung SSD 840. The config can be viewed here:
http://pad.ceph.com/p/KeyValueStore.conf. The same config file is
applied to both FileStore and KeyValueStore except for the "osd
objectstore" option.
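
For illustration only, the difference between the two runs boils down to
a ceph.conf fragment like the one below (a sketch; the exact value name
for the key/value backend depends on the Ceph build, so check the pad
link above for the real config):

  [osd]
      # FileStore run
      osd objectstore = filestore
      # KeyValueStore run (experimental backend; value name may differ)
      #osd objectstore = keyvaluestore-dev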

To test rbd I use the fio build with rbd engine support from
TelekomCloud (https://github.com/TelekomCloud/fio/commits/rbd-engine).
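
For anyone who wants to reproduce this, building that branch follows the
usual fio procedure, roughly as below (a sketch; it assumes the
librbd/librados development headers are installed so that configure
picks up rbd support):

  git clone -b rbd-engine https://github.com/TelekomCloud/fio.git
  cd fio
  ./configure && make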

The fio command: fio -direct=1 -iodepth=64 -thread -rw=randwrite
-ioengine=rbd -bs=4k -size=19G -numjobs=1 -runtime=100
-group_reporting -name=ebs_test -pool=openstack -rbdname=image
-clientname=fio -invalidate=0
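
The same run expressed as a fio job file (just a rewrite of the flags
above for readability, not an additional test):

  [global]
  ioengine=rbd
  clientname=fio
  pool=openstack
  rbdname=image
  invalidate=0
  direct=1
  thread
  rw=randwrite
  bs=4k
  size=19G
  runtime=100
  numjobs=1
  iodepth=64
  group_reporting

  [ebs_test]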

============================================

FileStore result:
ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
fio-2.1.4
Starting 1 thread
rbd engine: RBD version: 0.1.8

ebs_test: (groupid=0, jobs=1): err= 0: pid=30886: Thu Feb 27 08:09:18 2014
  write: io=283040KB, bw=6403.4KB/s, iops=1600, runt= 44202msec
    slat (usec): min=116, max=2817, avg=195.78, stdev=56.45
    clat (msec): min=8, max=661, avg=39.57, stdev=29.26
     lat (msec): min=9, max=661, avg=39.77, stdev=29.25
    clat percentiles (msec):
     |  1.00th=[   15],  5.00th=[   20], 10.00th=[   23], 20.00th=[   28],
     | 30.00th=[   31], 40.00th=[   35], 50.00th=[   37], 60.00th=[   40],
     | 70.00th=[   43], 80.00th=[   46], 90.00th=[   51], 95.00th=[   58],
     | 99.00th=[  128], 99.50th=[  210], 99.90th=[  457], 99.95th=[  494],
     | 99.99th=[  545]
    bw (KB  /s): min= 2120, max=12656, per=100.00%, avg=6464.27, stdev=1726.55
    lat (msec) : 10=0.01%, 20=5.91%, 50=83.35%, 100=8.88%, 250=1.47%
    lat (msec) : 500=0.34%, 750=0.05%
  cpu          : usr=29.83%, sys=1.36%, ctx=84002, majf=0, minf=216
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=17.4%, >=64=82.6%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.1%, 8=0.5%, 16=0.3%, 32=0.1%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=70760/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: io=283040KB, aggrb=6403KB/s, minb=6403KB/s, maxb=6403KB/s,
mint=44202msec, maxt=44202msec

Disk stats (read/write):
  sdb: ios=5/9512, merge=0/69, ticks=4/10649, in_queue=10645, util=0.92%

===============================================

KeyValueStore:
ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
fio-2.1.4
Starting 1 thread
rbd engine: RBD version: 0.1.8

ebs_test: (groupid=0, jobs=1): err= 0: pid=29137: Thu Feb 27 08:06:30 2014
  write: io=444376KB, bw=6280.2KB/s, iops=1570, runt= 70759msec
    slat (usec): min=122, max=3237, avg=184.51, stdev=37.76
    clat (msec): min=10, max=168, avg=40.57, stdev= 5.70
     lat (msec): min=11, max=168, avg=40.75, stdev= 5.71
    clat percentiles (msec):
     |  1.00th=[   34],  5.00th=[   37], 10.00th=[   39], 20.00th=[   39],
     | 30.00th=[   40], 40.00th=[   40], 50.00th=[   41], 60.00th=[   41],
     | 70.00th=[   42], 80.00th=[   42], 90.00th=[   44], 95.00th=[   45],
     | 99.00th=[   48], 99.50th=[   50], 99.90th=[  163], 99.95th=[  167],
     | 99.99th=[  167]
    bw (KB  /s): min= 4590, max= 7480, per=100.00%, avg=6285.69, stdev=374.22
    lat (msec) : 20=0.02%, 50=99.58%, 100=0.23%, 250=0.17%
  cpu          : usr=29.11%, sys=1.10%, ctx=118564, majf=0, minf=194
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.7%, >=64=99.3%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=111094/d=0, short=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: io=444376KB, aggrb=6280KB/s, minb=6280KB/s, maxb=6280KB/s,
mint=70759msec, maxt=70759msec

Disk stats (read/write):
  sdb: ios=0/15936, merge=0/272, ticks=0/17157, in_queue=17146, util=0.94%


This is just a simple test, and there may be misleading aspects in the
config or the results. Still, the improvement for KeyValueStore is
clearly visible, mainly in the latency distribution: compare the clat
stdev (5.70 vs 29.26 msec) and the 99.90th percentile (163 vs 457 msec)
above, while the aggregate bandwidth is nearly the same.

In the near future, performance will still be the first thing to
improve, especially for write operations (the goal of KeyValueStore is
to provide stronger write performance than FileStore). Planned work:
1. Fine-grained, object-level locking to improve parallelism. Because
KeyValueStore has no journal to hide the latency of a write transaction,
we need to avoid blocking as much as possible.
2. A header cache (like the inode cache in a filesystem) to speed up
reads.
3. More tests.

Then new backends such as RocksDB will be added. I'd like to see
performance improvements from other backends.

-- 
Best Regards,

Wheat

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found] ` <CACJqLybuA48Jnz6Qwc7cs2kHJO30C4GwazXi8yGp8ZhvfFc2ZQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-02-28  6:59   ` Alexandre DERUMIER
  2014-03-01  0:04   ` Danny Al-Gaaf
  1 sibling, 0 replies; 10+ messages in thread
From: Alexandre DERUMIER @ 2014-02-28  6:59 UTC (permalink / raw)
  To: Haomai Wang
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

Thanks for the report!

The results seem encouraging. (Is this the leveldb keystore?)

Thanks to fio-rbd, it'll be easier to do random IO benchmarks now!

(I'm waiting to see whether rocksdb will improve things in the future.)

Regards,

Alexandre
----- Original Message -----

From: "Haomai Wang" <haomaiwang@gmail.com>
To: ceph-users@lists.ceph.com, ceph-devel@vger.kernel.org
Sent: Friday, 28 February 2014 03:45:01
Subject: [ceph-users] [Announce] The progress of KeyValueStore in Firefly

Hi all, 

last release I propose a KeyValueStore prototype(get info from 
http://sebastien-han.fr/blog/2013/12/02/ceph-performance-interesting-things-going-on). 
It contains some performance results and problems. Now I'd like to 
refresh our thoughts on KeyValueStore. 

KeyValueStore is pursuing FileStore's performance during this release. 
Now things go farther, KeyValueStore did better in rbd 
situation(partial write) . 

I test KeyValueStore compared to FileStore in a single OSD on Samsung 
SSD 840. The config can be viewed 
here(http://pad.ceph.com/p/KeyValueStore.conf). The same config file 
is applied to both FileStore and KeyValueStore except "osd 
objectstore" option. 

I use fio which rbd supported from 
TelekomCloud(https://github.com/TelekomCloud/fio/commits/rbd-engine) 
to test rbd. 

The fio command: fio -direct=1 -iodepth=64 -thread -rw=randwrite 
-ioengine=rbd -bs=4k -size=19G -numjobs=1 -runtime=100 
-group_reporting -name=ebs_test -pool=openstack -rbdname=image 
-clientname=fio -invalidate=0 

============================================ 

FileStore result: 
ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64 
fio-2.1.4 
Starting 1 thread 
rbd engine: RBD version: 0.1.8 

ebs_test: (groupid=0, jobs=1): err= 0: pid=30886: Thu Feb 27 08:09:18 2014 
write: io=283040KB, bw=6403.4KB/s, iops=1600, runt= 44202msec 
slat (usec): min=116, max=2817, avg=195.78, stdev=56.45 
clat (msec): min=8, max=661, avg=39.57, stdev=29.26 
lat (msec): min=9, max=661, avg=39.77, stdev=29.25 
clat percentiles (msec): 
| 1.00th=[ 15], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 28], 
| 30.00th=[ 31], 40.00th=[ 35], 50.00th=[ 37], 60.00th=[ 40], 
| 70.00th=[ 43], 80.00th=[ 46], 90.00th=[ 51], 95.00th=[ 58], 
| 99.00th=[ 128], 99.50th=[ 210], 99.90th=[ 457], 99.95th=[ 494], 
| 99.99th=[ 545] 
bw (KB /s): min= 2120, max=12656, per=100.00%, avg=6464.27, stdev=1726.55 
lat (msec) : 10=0.01%, 20=5.91%, 50=83.35%, 100=8.88%, 250=1.47% 
lat (msec) : 500=0.34%, 750=0.05% 
cpu : usr=29.83%, sys=1.36%, ctx=84002, majf=0, minf=216 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=17.4%, >=64=82.6% 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
complete : 0=0.0%, 4=99.1%, 8=0.5%, 16=0.3%, 32=0.1%, 64=0.1%, >=64=0.0% 
issued : total=r=0/w=70760/d=0, short=r=0/w=0/d=0 
latency : target=0, window=0, percentile=100.00%, depth=64 

Run status group 0 (all jobs): 
WRITE: io=283040KB, aggrb=6403KB/s, minb=6403KB/s, maxb=6403KB/s, 
mint=44202msec, maxt=44202msec 

Disk stats (read/write): 
sdb: ios=5/9512, merge=0/69, ticks=4/10649, in_queue=10645, util=0.92% 

=============================================== 

KeyValueStore: 
ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64 
fio-2.1.4 
Starting 1 thread 
rbd engine: RBD version: 0.1.8 

ebs_test: (groupid=0, jobs=1): err= 0: pid=29137: Thu Feb 27 08:06:30 2014 
write: io=444376KB, bw=6280.2KB/s, iops=1570, runt= 70759msec 
slat (usec): min=122, max=3237, avg=184.51, stdev=37.76 
clat (msec): min=10, max=168, avg=40.57, stdev= 5.70 
lat (msec): min=11, max=168, avg=40.75, stdev= 5.71 
clat percentiles (msec): 
| 1.00th=[ 34], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 39], 
| 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 
| 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 44], 95.00th=[ 45], 
| 99.00th=[ 48], 99.50th=[ 50], 99.90th=[ 163], 99.95th=[ 167], 
| 99.99th=[ 167] 
bw (KB /s): min= 4590, max= 7480, per=100.00%, avg=6285.69, stdev=374.22 
lat (msec) : 20=0.02%, 50=99.58%, 100=0.23%, 250=0.17% 
cpu : usr=29.11%, sys=1.10%, ctx=118564, majf=0, minf=194 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.7%, >=64=99.3% 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.1%, >=64=0.0% 
issued : total=r=0/w=111094/d=0, short=r=0/w=0/d=0 
latency : target=0, window=0, percentile=100.00%, depth=64 

Run status group 0 (all jobs): 
WRITE: io=444376KB, aggrb=6280KB/s, minb=6280KB/s, maxb=6280KB/s, 
mint=70759msec, maxt=70759msec 

Disk stats (read/write): 
sdb: ios=0/15936, merge=0/272, ticks=0/17157, in_queue=17146, util=0.94% 


It's just a simple test, maybe exist some misleadings on the config or 
results. But 
we can obviously see the conspicuous improvement for KeyValueStore. 

In the near future, performance still will be the first thing to 
improve especially at write operation(The goal of KeyValueStore is 
provided with powerful write performance compared to FileStore), such 
as 
1. Fine-grained lock in object-level to improve the degree of 
parallelism, because KeyValueStore doesn't have Journal to quick the 
latency of write transaction, we need to avoid block as far as 
possible. 
2. Header cache(like inode in filesystem) to quick read. 
3. more tests 

Then new backend will be added like rocksdb or others. I'd like to see 
performance improvements from other backend. 

-- 
Best Regards, 

Wheat 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found] ` <CACJqLybuA48Jnz6Qwc7cs2kHJO30C4GwazXi8yGp8ZhvfFc2ZQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2014-02-28  6:59   ` Alexandre DERUMIER
@ 2014-03-01  0:04   ` Danny Al-Gaaf
       [not found]     ` <531123F6.5070007-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Danny Al-Gaaf @ 2014-03-01  0:04 UTC (permalink / raw)
  To: Haomai Wang, ceph-users-idqoXFIVOFJgJs9I8MT0rw,
	ceph-devel-u79uwXL29TY76Z2rM5mHXA

Hi,

On 28.02.2014 03:45, Haomai Wang wrote:
[...]
> I use fio which rbd supported from
> TelekomCloud(https://github.com/TelekomCloud/fio/commits/rbd-engine)
> to test rbd.

I would recommend no longer using this branch; it's outdated. The rbd
engine has been contributed back to upstream fio and is now merged [1].
For more information, read [2].

[1] https://github.com/axboe/fio/commits/master
[2]
http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
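
If you rebuild against upstream fio, a quick way to check that the rbd
engine is compiled in is something like this (a sketch, assuming a
reasonably recent fio build):

  fio --enghelp=rbd

which should list the engine-specific options (clientname, pool,
rbdname, ...).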

> 
> The fio command: fio -direct=1 -iodepth=64 -thread -rw=randwrite
> -ioengine=rbd -bs=4k -size=19G -numjobs=1 -runtime=100
> -group_reporting -name=ebs_test -pool=openstack -rbdname=image
> -clientname=fio -invalidate=0

Don't use runtime and size at the same time, since runtime limits the
size. What we normally do is either let the fio job fill up the whole
rbd or limit it only via runtime.
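
In terms of the command line above, the two variants would look roughly
like this (the time_based flag is my assumption for spelling out the
runtime-only case):

  # fill the whole image, no time limit
  fio ... -rw=randwrite -bs=4k -size=19G
  # or: run purely by time
  fio ... -rw=randwrite -bs=4k -runtime=100 -time_based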

> ============================================
> 
> FileStore result:
> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
> fio-2.1.4
> Starting 1 thread
> rbd engine: RBD version: 0.1.8
> 
> ebs_test: (groupid=0, jobs=1): err= 0: pid=30886: Thu Feb 27 08:09:18 2014
>   write: io=283040KB, bw=6403.4KB/s, iops=1600, runt= 44202msec
>     slat (usec): min=116, max=2817, avg=195.78, stdev=56.45
>     clat (msec): min=8, max=661, avg=39.57, stdev=29.26
>      lat (msec): min=9, max=661, avg=39.77, stdev=29.25
>     clat percentiles (msec):
>      |  1.00th=[   15],  5.00th=[   20], 10.00th=[   23], 20.00th=[   28],
>      | 30.00th=[   31], 40.00th=[   35], 50.00th=[   37], 60.00th=[   40],
>      | 70.00th=[   43], 80.00th=[   46], 90.00th=[   51], 95.00th=[   58],
>      | 99.00th=[  128], 99.50th=[  210], 99.90th=[  457], 99.95th=[  494],
>      | 99.99th=[  545]
>     bw (KB  /s): min= 2120, max=12656, per=100.00%, avg=6464.27, stdev=1726.55
>     lat (msec) : 10=0.01%, 20=5.91%, 50=83.35%, 100=8.88%, 250=1.47%
>     lat (msec) : 500=0.34%, 750=0.05%
>   cpu          : usr=29.83%, sys=1.36%, ctx=84002, majf=0, minf=216
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=17.4%, >=64=82.6%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=99.1%, 8=0.5%, 16=0.3%, 32=0.1%, 64=0.1%, >=64=0.0%
>      issued    : total=r=0/w=70760/d=0, short=r=0/w=0/d=0
>      latency   : target=0, window=0, percentile=100.00%, depth=64
> 
> Run status group 0 (all jobs):
>   WRITE: io=283040KB, aggrb=6403KB/s, minb=6403KB/s, maxb=6403KB/s,
> mint=44202msec, maxt=44202msec
> 
> Disk stats (read/write):
>   sdb: ios=5/9512, merge=0/69, ticks=4/10649, in_queue=10645, util=0.92%
> 
> ===============================================
> 
> KeyValueStore:
> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
> fio-2.1.4
> Starting 1 thread
> rbd engine: RBD version: 0.1.8
> 
> ebs_test: (groupid=0, jobs=1): err= 0: pid=29137: Thu Feb 27 08:06:30 2014
>   write: io=444376KB, bw=6280.2KB/s, iops=1570, runt= 70759msec
>     slat (usec): min=122, max=3237, avg=184.51, stdev=37.76
>     clat (msec): min=10, max=168, avg=40.57, stdev= 5.70
>      lat (msec): min=11, max=168, avg=40.75, stdev= 5.71
>     clat percentiles (msec):
>      |  1.00th=[   34],  5.00th=[   37], 10.00th=[   39], 20.00th=[   39],
>      | 30.00th=[   40], 40.00th=[   40], 50.00th=[   41], 60.00th=[   41],
>      | 70.00th=[   42], 80.00th=[   42], 90.00th=[   44], 95.00th=[   45],
>      | 99.00th=[   48], 99.50th=[   50], 99.90th=[  163], 99.95th=[  167],
>      | 99.99th=[  167]
>     bw (KB  /s): min= 4590, max= 7480, per=100.00%, avg=6285.69, stdev=374.22
>     lat (msec) : 20=0.02%, 50=99.58%, 100=0.23%, 250=0.17%
>   cpu          : usr=29.11%, sys=1.10%, ctx=118564, majf=0, minf=194
>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.7%, >=64=99.3%
>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.1%, >=64=0.0%
>      issued    : total=r=0/w=111094/d=0, short=r=0/w=0/d=0
>      latency   : target=0, window=0, percentile=100.00%, depth=64
> 
> Run status group 0 (all jobs):
>   WRITE: io=444376KB, aggrb=6280KB/s, minb=6280KB/s, maxb=6280KB/s,
> mint=70759msec, maxt=70759msec
> 
> Disk stats (read/write):
>   sdb: ios=0/15936, merge=0/272, ticks=0/17157, in_queue=17146, util=0.94%
> 
> 
> It's just a simple test, maybe exist some misleadings on the config or
> results. But
> we can obviously see the conspicuous improvement for KeyValueStore.

The numbers are hard to compare, since the tests wrote different
amounts of data. This could influence the results.
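
(For reference, dividing total data written by runtime from the outputs
above gives nearly the same aggregate bandwidth despite the different
totals:

  FileStore:     283040 KB / 44.202 s ~= 6403 KB/s
  KeyValueStore: 444376 KB / 70.759 s ~= 6280 KB/s

so the throughput is close; what differs is the amount written and the
latency spread.)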

Do you mean improvements compared to the former implementation, or compared to FileStore?

Without a retest with the latest fio rbd engine: there is not much
difference between KVS and FS at the moment.

Btw. Nice to see the rbd engine is useful to others ;-)

Regards

Danny

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found]     ` <531123F6.5070007-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
@ 2014-03-01  7:00       ` Haomai Wang
       [not found]         ` <CACJqLybtitrDOcTcRXBZYJy552JzeYZWp_d=yWGUofxT7E46+A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Haomai Wang @ 2014-03-01  7:00 UTC (permalink / raw)
  To: Danny Al-Gaaf
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

On Sat, Mar 1, 2014 at 8:04 AM, Danny Al-Gaaf <danny.al-gaaf-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org> wrote:
> Hi,
>
> Am 28.02.2014 03:45, schrieb Haomai Wang:
> [...]
>> I use fio which rbd supported from
>> TelekomCloud(https://github.com/TelekomCloud/fio/commits/rbd-engine)
>> to test rbd.
>
> I would recommend to no longer use this branch, it's outdated. The rbd
> engine got contributed back to upstream fio and is now merged [1]. For
> more information read [2].
>
> [1] https://github.com/axboe/fio/commits/master
> [2]
> http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
>
>>
>> The fio command: fio -direct=1 -iodepth=64 -thread -rw=randwrite
>> -ioengine=rbd -bs=4k -size=19G -numjobs=1 -runtime=100
>> -group_reporting -name=ebs_test -pool=openstack -rbdname=image
>> -clientname=fio -invalidate=0
>
> Don't use runtime and size at the same time, since runtime limits the
> size. What we normally do we let the fio job fill up the whole rbd or we
> limit it only via runtime.
>
>> ============================================
>>
>> FileStore result:
>> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
>> fio-2.1.4
>> Starting 1 thread
>> rbd engine: RBD version: 0.1.8
>>
>> ebs_test: (groupid=0, jobs=1): err= 0: pid=30886: Thu Feb 27 08:09:18 2014
>>   write: io=283040KB, bw=6403.4KB/s, iops=1600, runt= 44202msec
>>     slat (usec): min=116, max=2817, avg=195.78, stdev=56.45
>>     clat (msec): min=8, max=661, avg=39.57, stdev=29.26
>>      lat (msec): min=9, max=661, avg=39.77, stdev=29.25
>>     clat percentiles (msec):
>>      |  1.00th=[   15],  5.00th=[   20], 10.00th=[   23], 20.00th=[   28],
>>      | 30.00th=[   31], 40.00th=[   35], 50.00th=[   37], 60.00th=[   40],
>>      | 70.00th=[   43], 80.00th=[   46], 90.00th=[   51], 95.00th=[   58],
>>      | 99.00th=[  128], 99.50th=[  210], 99.90th=[  457], 99.95th=[  494],
>>      | 99.99th=[  545]
>>     bw (KB  /s): min= 2120, max=12656, per=100.00%, avg=6464.27, stdev=1726.55
>>     lat (msec) : 10=0.01%, 20=5.91%, 50=83.35%, 100=8.88%, 250=1.47%
>>     lat (msec) : 500=0.34%, 750=0.05%
>>   cpu          : usr=29.83%, sys=1.36%, ctx=84002, majf=0, minf=216
>>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=17.4%, >=64=82.6%
>>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>>      complete  : 0=0.0%, 4=99.1%, 8=0.5%, 16=0.3%, 32=0.1%, 64=0.1%, >=64=0.0%
>>      issued    : total=r=0/w=70760/d=0, short=r=0/w=0/d=0
>>      latency   : target=0, window=0, percentile=100.00%, depth=64
>>
>> Run status group 0 (all jobs):
>>   WRITE: io=283040KB, aggrb=6403KB/s, minb=6403KB/s, maxb=6403KB/s,
>> mint=44202msec, maxt=44202msec
>>
>> Disk stats (read/write):
>>   sdb: ios=5/9512, merge=0/69, ticks=4/10649, in_queue=10645, util=0.92%
>>
>> ===============================================
>>
>> KeyValueStore:
>> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=64
>> fio-2.1.4
>> Starting 1 thread
>> rbd engine: RBD version: 0.1.8
>>
>> ebs_test: (groupid=0, jobs=1): err= 0: pid=29137: Thu Feb 27 08:06:30 2014
>>   write: io=444376KB, bw=6280.2KB/s, iops=1570, runt= 70759msec
>>     slat (usec): min=122, max=3237, avg=184.51, stdev=37.76
>>     clat (msec): min=10, max=168, avg=40.57, stdev= 5.70
>>      lat (msec): min=11, max=168, avg=40.75, stdev= 5.71
>>     clat percentiles (msec):
>>      |  1.00th=[   34],  5.00th=[   37], 10.00th=[   39], 20.00th=[   39],
>>      | 30.00th=[   40], 40.00th=[   40], 50.00th=[   41], 60.00th=[   41],
>>      | 70.00th=[   42], 80.00th=[   42], 90.00th=[   44], 95.00th=[   45],
>>      | 99.00th=[   48], 99.50th=[   50], 99.90th=[  163], 99.95th=[  167],
>>      | 99.99th=[  167]
>>     bw (KB  /s): min= 4590, max= 7480, per=100.00%, avg=6285.69, stdev=374.22
>>     lat (msec) : 20=0.02%, 50=99.58%, 100=0.23%, 250=0.17%
>>   cpu          : usr=29.11%, sys=1.10%, ctx=118564, majf=0, minf=194
>>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.7%, >=64=99.3%
>>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
>>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.1%, >=64=0.0%
>>      issued    : total=r=0/w=111094/d=0, short=r=0/w=0/d=0
>>      latency   : target=0, window=0, percentile=100.00%, depth=64
>>
>> Run status group 0 (all jobs):
>>   WRITE: io=444376KB, aggrb=6280KB/s, minb=6280KB/s, maxb=6280KB/s,
>> mint=70759msec, maxt=70759msec
>>
>> Disk stats (read/write):
>>   sdb: ios=0/15936, merge=0/272, ticks=0/17157, in_queue=17146, util=0.94%
>>
>>
>> It's just a simple test, maybe exist some misleadings on the config or
>> results. But
>> we can obviously see the conspicuous improvement for KeyValueStore.
>
> The numbers are hard to compare. Since the tests wrote a different
> amount of data. This could influence the numbers.
>
> Do you mean improvements compared to former implementation or to FileStore?
>
> Without a retest with the latest fio rbd engine: there is not so much
> difference between KVS and FS atm.
>
> Btw. Nice to see the rbd engine is useful to others ;-)

Thanks for your advice and your work on fio-rbd. :)

The test isn't precise; it's just a simple test to show the progress
of kvstore.

>
> Regards
>
> Danny



-- 
Best Regards,

Wheat

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found]         ` <CACJqLybtitrDOcTcRXBZYJy552JzeYZWp_d=yWGUofxT7E46+A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-03  5:36           ` Sushma R
       [not found]             ` <CAOj3taPYaJfdipyiqFavw1AT24ZumrCVWm4FrUnEc1Ki7cah9Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Sushma R @ 2014-06-03  5:36 UTC (permalink / raw)
  To: Haomai Wang
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA



Hi Haomai,

I tried to compare the READ performance of FileStore and KeyValueStore
using the internal tool "ceph_smalliobench", and I see that
KeyValueStore's performance is approximately half that of FileStore.
I'm using a conf file similar to yours. Is this the expected behavior,
or am I missing something?

Thanks,
Sushma


On Fri, Feb 28, 2014 at 11:00 PM, Haomai Wang <haomaiwang-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> On Sat, Mar 1, 2014 at 8:04 AM, Danny Al-Gaaf <danny.al-gaaf-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
> wrote:
> > Hi,
> >
> > Am 28.02.2014 03:45, schrieb Haomai Wang:
> > [...]
> >> I use fio which rbd supported from
> >> TelekomCloud(https://github.com/TelekomCloud/fio/commits/rbd-engine)
> >> to test rbd.
> >
> > I would recommend to no longer use this branch, it's outdated. The rbd
> > engine got contributed back to upstream fio and is now merged [1]. For
> > more information read [2].
> >
> > [1] https://github.com/axboe/fio/commits/master
> > [2]
> >
> http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
> >
> >>
> >> The fio command: fio -direct=1 -iodepth=64 -thread -rw=randwrite
> >> -ioengine=rbd -bs=4k -size=19G -numjobs=1 -runtime=100
> >> -group_reporting -name=ebs_test -pool=openstack -rbdname=image
> >> -clientname=fio -invalidate=0
> >
> > Don't use runtime and size at the same time, since runtime limits the
> > size. What we normally do we let the fio job fill up the whole rbd or we
> > limit it only via runtime.
> >
> >> ============================================
> >>
> >> FileStore result:
> >> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
> iodepth=64
> >> fio-2.1.4
> >> Starting 1 thread
> >> rbd engine: RBD version: 0.1.8
> >>
> >> ebs_test: (groupid=0, jobs=1): err= 0: pid=30886: Thu Feb 27 08:09:18
> 2014
> >>   write: io=283040KB, bw=6403.4KB/s, iops=1600, runt= 44202msec
> >>     slat (usec): min=116, max=2817, avg=195.78, stdev=56.45
> >>     clat (msec): min=8, max=661, avg=39.57, stdev=29.26
> >>      lat (msec): min=9, max=661, avg=39.77, stdev=29.25
> >>     clat percentiles (msec):
> >>      |  1.00th=[   15],  5.00th=[   20], 10.00th=[   23], 20.00th=[
> 28],
> >>      | 30.00th=[   31], 40.00th=[   35], 50.00th=[   37], 60.00th=[
> 40],
> >>      | 70.00th=[   43], 80.00th=[   46], 90.00th=[   51], 95.00th=[
> 58],
> >>      | 99.00th=[  128], 99.50th=[  210], 99.90th=[  457], 99.95th=[
>  494],
> >>      | 99.99th=[  545]
> >>     bw (KB  /s): min= 2120, max=12656, per=100.00%, avg=6464.27,
> stdev=1726.55
> >>     lat (msec) : 10=0.01%, 20=5.91%, 50=83.35%, 100=8.88%, 250=1.47%
> >>     lat (msec) : 500=0.34%, 750=0.05%
> >>   cpu          : usr=29.83%, sys=1.36%, ctx=84002, majf=0, minf=216
> >>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=17.4%,
> >=64=82.6%
> >>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.0%
> >>      complete  : 0=0.0%, 4=99.1%, 8=0.5%, 16=0.3%, 32=0.1%, 64=0.1%,
> >=64=0.0%
> >>      issued    : total=r=0/w=70760/d=0, short=r=0/w=0/d=0
> >>      latency   : target=0, window=0, percentile=100.00%, depth=64
> >>
> >> Run status group 0 (all jobs):
> >>   WRITE: io=283040KB, aggrb=6403KB/s, minb=6403KB/s, maxb=6403KB/s,
> >> mint=44202msec, maxt=44202msec
> >>
> >> Disk stats (read/write):
> >>   sdb: ios=5/9512, merge=0/69, ticks=4/10649, in_queue=10645, util=0.92%
> >>
> >> ===============================================
> >>
> >> KeyValueStore:
> >> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
> iodepth=64
> >> fio-2.1.4
> >> Starting 1 thread
> >> rbd engine: RBD version: 0.1.8
> >>
> >> ebs_test: (groupid=0, jobs=1): err= 0: pid=29137: Thu Feb 27 08:06:30
> 2014
> >>   write: io=444376KB, bw=6280.2KB/s, iops=1570, runt= 70759msec
> >>     slat (usec): min=122, max=3237, avg=184.51, stdev=37.76
> >>     clat (msec): min=10, max=168, avg=40.57, stdev= 5.70
> >>      lat (msec): min=11, max=168, avg=40.75, stdev= 5.71
> >>     clat percentiles (msec):
> >>      |  1.00th=[   34],  5.00th=[   37], 10.00th=[   39], 20.00th=[
> 39],
> >>      | 30.00th=[   40], 40.00th=[   40], 50.00th=[   41], 60.00th=[
> 41],
> >>      | 70.00th=[   42], 80.00th=[   42], 90.00th=[   44], 95.00th=[
> 45],
> >>      | 99.00th=[   48], 99.50th=[   50], 99.90th=[  163], 99.95th=[
>  167],
> >>      | 99.99th=[  167]
> >>     bw (KB  /s): min= 4590, max= 7480, per=100.00%, avg=6285.69,
> stdev=374.22
> >>     lat (msec) : 20=0.02%, 50=99.58%, 100=0.23%, 250=0.17%
> >>   cpu          : usr=29.11%, sys=1.10%, ctx=118564, majf=0, minf=194
> >>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.7%,
> >=64=99.3%
> >>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >=64=0.0%
> >>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.1%,
> >=64=0.0%
> >>      issued    : total=r=0/w=111094/d=0, short=r=0/w=0/d=0
> >>      latency   : target=0, window=0, percentile=100.00%, depth=64
> >>
> >> Run status group 0 (all jobs):
> >>   WRITE: io=444376KB, aggrb=6280KB/s, minb=6280KB/s, maxb=6280KB/s,
> >> mint=70759msec, maxt=70759msec
> >>
> >> Disk stats (read/write):
> >>   sdb: ios=0/15936, merge=0/272, ticks=0/17157, in_queue=17146,
> util=0.94%
> >>
> >>
> >> It's just a simple test, maybe exist some misleadings on the config or
> >> results. But
> >> we can obviously see the conspicuous improvement for KeyValueStore.
> >
> > The numbers are hard to compare. Since the tests wrote a different
> > amount of data. This could influence the numbers.
> >
> > Do you mean improvements compared to former implementation or to
> FileStore?
> >
> > Without a retest with the latest fio rbd engine: there is not so much
> > difference between KVS and FS atm.
> >
> > Btw. Nice to see the rbd engine is useful to others ;-)
>
> Thanks for your advise and jobs on fio-rbd. :)
>
> The test isn't preciseness and just a simple test to show the progress
> of kvstore.
>
> >
> > Regards
> >
> > Danny
>
>
>
> --
> Best Regards,
>
> Wheat
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found]             ` <CAOj3taPYaJfdipyiqFavw1AT24ZumrCVWm4FrUnEc1Ki7cah9Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-03  7:06               ` Haomai Wang
       [not found]                 ` <CACJqLyY=BBKKdwOsEoDbo8OfD+o2B2XoSEtmnQ8=nNCm4EaADQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Haomai Wang @ 2014-06-03  7:06 UTC (permalink / raw)
  To: Sushma R
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

I don't know the actual IO size of your "small io" test, or which ceph
version you used.

But I think it's plausible that KeyValueStore has only half the
performance of FileStore at small IO sizes. A new config value that lets
users tune this will be introduced and may help.

All in all, maybe you could tell us more about "ceph_smalliobench".

On Tue, Jun 3, 2014 at 1:36 PM, Sushma R <gsushma-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> Hi Haomai,
>
> I tried to compare the READ performance of FileStore and KeyValueStore using
> the internal tool "ceph_smalliobench" and I see KeyValueStore's performance
> is approx half that of FileStore. I'm using similar conf file as yours. Is
> this the expected behavior or am I missing something?
>
> Thanks,
> Sushma
>
>
> On Fri, Feb 28, 2014 at 11:00 PM, Haomai Wang <haomaiwang-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>>
>> On Sat, Mar 1, 2014 at 8:04 AM, Danny Al-Gaaf <danny.al-gaaf-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
>> wrote:
>> > Hi,
>> >
>> > Am 28.02.2014 03:45, schrieb Haomai Wang:
>> > [...]
>> >> I use fio which rbd supported from
>> >> TelekomCloud(https://github.com/TelekomCloud/fio/commits/rbd-engine)
>> >> to test rbd.
>> >
>> > I would recommend to no longer use this branch, it's outdated. The rbd
>> > engine got contributed back to upstream fio and is now merged [1]. For
>> > more information read [2].
>> >
>> > [1] https://github.com/axboe/fio/commits/master
>> > [2]
>> >
>> > http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
>> >
>> >>
>> >> The fio command: fio -direct=1 -iodepth=64 -thread -rw=randwrite
>> >> -ioengine=rbd -bs=4k -size=19G -numjobs=1 -runtime=100
>> >> -group_reporting -name=ebs_test -pool=openstack -rbdname=image
>> >> -clientname=fio -invalidate=0
>> >
>> > Don't use runtime and size at the same time, since runtime limits the
>> > size. What we normally do we let the fio job fill up the whole rbd or we
>> > limit it only via runtime.
>> >
>> >> ============================================
>> >>
>> >> FileStore result:
>> >> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
>> >> iodepth=64
>> >> fio-2.1.4
>> >> Starting 1 thread
>> >> rbd engine: RBD version: 0.1.8
>> >>
>> >> ebs_test: (groupid=0, jobs=1): err= 0: pid=30886: Thu Feb 27 08:09:18
>> >> 2014
>> >>   write: io=283040KB, bw=6403.4KB/s, iops=1600, runt= 44202msec
>> >>     slat (usec): min=116, max=2817, avg=195.78, stdev=56.45
>> >>     clat (msec): min=8, max=661, avg=39.57, stdev=29.26
>> >>      lat (msec): min=9, max=661, avg=39.77, stdev=29.25
>> >>     clat percentiles (msec):
>> >>      |  1.00th=[   15],  5.00th=[   20], 10.00th=[   23], 20.00th=[
>> >> 28],
>> >>      | 30.00th=[   31], 40.00th=[   35], 50.00th=[   37], 60.00th=[
>> >> 40],
>> >>      | 70.00th=[   43], 80.00th=[   46], 90.00th=[   51], 95.00th=[
>> >> 58],
>> >>      | 99.00th=[  128], 99.50th=[  210], 99.90th=[  457], 99.95th=[
>> >> 494],
>> >>      | 99.99th=[  545]
>> >>     bw (KB  /s): min= 2120, max=12656, per=100.00%, avg=6464.27,
>> >> stdev=1726.55
>> >>     lat (msec) : 10=0.01%, 20=5.91%, 50=83.35%, 100=8.88%, 250=1.47%
>> >>     lat (msec) : 500=0.34%, 750=0.05%
>> >>   cpu          : usr=29.83%, sys=1.36%, ctx=84002, majf=0, minf=216
>> >>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=17.4%,
>> >> >=64=82.6%
>> >>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>> >> >=64=0.0%
>> >>      complete  : 0=0.0%, 4=99.1%, 8=0.5%, 16=0.3%, 32=0.1%, 64=0.1%,
>> >> >=64=0.0%
>> >>      issued    : total=r=0/w=70760/d=0, short=r=0/w=0/d=0
>> >>      latency   : target=0, window=0, percentile=100.00%, depth=64
>> >>
>> >> Run status group 0 (all jobs):
>> >>   WRITE: io=283040KB, aggrb=6403KB/s, minb=6403KB/s, maxb=6403KB/s,
>> >> mint=44202msec, maxt=44202msec
>> >>
>> >> Disk stats (read/write):
>> >>   sdb: ios=5/9512, merge=0/69, ticks=4/10649, in_queue=10645,
>> >> util=0.92%
>> >>
>> >> ===============================================
>> >>
>> >> KeyValueStore:
>> >> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
>> >> iodepth=64
>> >> fio-2.1.4
>> >> Starting 1 thread
>> >> rbd engine: RBD version: 0.1.8
>> >>
>> >> ebs_test: (groupid=0, jobs=1): err= 0: pid=29137: Thu Feb 27 08:06:30
>> >> 2014
>> >>   write: io=444376KB, bw=6280.2KB/s, iops=1570, runt= 70759msec
>> >>     slat (usec): min=122, max=3237, avg=184.51, stdev=37.76
>> >>     clat (msec): min=10, max=168, avg=40.57, stdev= 5.70
>> >>      lat (msec): min=11, max=168, avg=40.75, stdev= 5.71
>> >>     clat percentiles (msec):
>> >>      |  1.00th=[   34],  5.00th=[   37], 10.00th=[   39], 20.00th=[
>> >> 39],
>> >>      | 30.00th=[   40], 40.00th=[   40], 50.00th=[   41], 60.00th=[
>> >> 41],
>> >>      | 70.00th=[   42], 80.00th=[   42], 90.00th=[   44], 95.00th=[
>> >> 45],
>> >>      | 99.00th=[   48], 99.50th=[   50], 99.90th=[  163], 99.95th=[
>> >> 167],
>> >>      | 99.99th=[  167]
>> >>     bw (KB  /s): min= 4590, max= 7480, per=100.00%, avg=6285.69,
>> >> stdev=374.22
>> >>     lat (msec) : 20=0.02%, 50=99.58%, 100=0.23%, 250=0.17%
>> >>   cpu          : usr=29.11%, sys=1.10%, ctx=118564, majf=0, minf=194
>> >>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.7%,
>> >> >=64=99.3%
>> >>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
>> >> >=64=0.0%
>> >>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.1%,
>> >> >=64=0.0%
>> >>      issued    : total=r=0/w=111094/d=0, short=r=0/w=0/d=0
>> >>      latency   : target=0, window=0, percentile=100.00%, depth=64
>> >>
>> >> Run status group 0 (all jobs):
>> >>   WRITE: io=444376KB, aggrb=6280KB/s, minb=6280KB/s, maxb=6280KB/s,
>> >> mint=70759msec, maxt=70759msec
>> >>
>> >> Disk stats (read/write):
>> >>   sdb: ios=0/15936, merge=0/272, ticks=0/17157, in_queue=17146,
>> >> util=0.94%
>> >>
>> >>
>> >> It's just a simple test, maybe exist some misleadings on the config or
>> >> results. But
>> >> we can obviously see the conspicuous improvement for KeyValueStore.
>> >
>> > The numbers are hard to compare. Since the tests wrote a different
>> > amount of data. This could influence the numbers.
>> >
>> > Do you mean improvements compared to former implementation or to
>> > FileStore?
>> >
>> > Without a retest with the latest fio rbd engine: there is not so much
>> > difference between KVS and FS atm.
>> >
>> > Btw. Nice to see the rbd engine is useful to others ;-)
>>
>> Thanks for your advise and jobs on fio-rbd. :)
>>
>> The test isn't preciseness and just a simple test to show the progress
>> of kvstore.
>>
>> >
>> > Regards
>> >
>> > Danny
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>



-- 
Best Regards,

Wheat

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found]                 ` <CACJqLyY=BBKKdwOsEoDbo8OfD+o2B2XoSEtmnQ8=nNCm4EaADQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-03 18:55                   ` Sushma R
       [not found]                     ` <CAOj3taONVV8rMAgu8Cen=eCZ2ycYjSs8u3nfKiJUptdb7ffyEA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Sushma R @ 2014-06-03 18:55 UTC (permalink / raw)
  To: Haomai Wang
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA



Haomai,

I'm using the latest ceph master branch.

ceph_smalliobench is a Ceph internal benchmarking tool similar to rados
bench, and the performance it reports is more or less the same as fio's.

I tried to use fio with the rbd ioengine
(http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html)
and below are the numbers for different workloads on our setup.
Note: the fio rbd engine segfaults with the randread IO pattern, but
only with LevelDB (no issues with FileStore). With FileStore, the READ
performance of ceph_smalliobench and fio-rbd is similar, so the LevelDB
randread numbers are from ceph_smalliobench (since fio rbd segfaults).

  I/O Pattern      XFS FileStore            LevelDB
                   IOPs    Avg. Latency     IOPs    Avg. Latency
  4K randwrite     1415    22.55 msec        853    37.48 msec
  64K randwrite     311    214.86 msec       328    97.42 msec
  4K randread      9477    3.346 msec       3000    11 msec
  64K randread     3961    8.072 msec       4000    8 msec


Based on the above, it appears that LevelDB performs better than
FileStore only for 64K random writes - there the latency in particular
is much lower than with FileStore. For the rest of the workloads, XFS
FileStore seems to perform better.
Can you please let me know of any config values that can be tuned for
better performance? Currently I'm using the same ceph.conf as you posted
earlier in this thread.

Appreciate all help in this regard.

Thanks,
Sushma

On Tue, Jun 3, 2014 at 12:06 AM, Haomai Wang <haomaiwang-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:

> I don't know the actual size of "small io". And what's ceph version you
> used.
>
> But I think it's possible if KeyValueStore only has half performance
> compared to FileStore in small io size. A new config value let user
> can tunes it will be introduced and maybe help.
>
> All in all, maybe you could tell more about "ceph_smalliobench"
>
> On Tue, Jun 3, 2014 at 1:36 PM, Sushma R <gsushma-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> > Hi Haomai,
> >
> > I tried to compare the READ performance of FileStore and KeyValueStore
> using
> > the internal tool "ceph_smalliobench" and I see KeyValueStore's
> performance
> > is approx half that of FileStore. I'm using similar conf file as yours.
> Is
> > this the expected behavior or am I missing something?
> >
> > Thanks,
> > Sushma
> >
> >
> > On Fri, Feb 28, 2014 at 11:00 PM, Haomai Wang <haomaiwang-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
> wrote:
> >>
> >> On Sat, Mar 1, 2014 at 8:04 AM, Danny Al-Gaaf <danny.al-gaaf-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
> >> wrote:
> >> > Hi,
> >> >
> >> > Am 28.02.2014 03:45, schrieb Haomai Wang:
> >> > [...]
> >> >> I use fio which rbd supported from
> >> >> TelekomCloud(https://github.com/TelekomCloud/fio/commits/rbd-engine)
> >> >> to test rbd.
> >> >
> >> > I would recommend to no longer use this branch, it's outdated. The rbd
> >> > engine got contributed back to upstream fio and is now merged [1]. For
> >> > more information read [2].
> >> >
> >> > [1] https://github.com/axboe/fio/commits/master
> >> > [2]
> >> >
> >> >
> http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
> >> >
> >> >>
> >> >> The fio command: fio -direct=1 -iodepth=64 -thread -rw=randwrite
> >> >> -ioengine=rbd -bs=4k -size=19G -numjobs=1 -runtime=100
> >> >> -group_reporting -name=ebs_test -pool=openstack -rbdname=image
> >> >> -clientname=fio -invalidate=0
> >> >
> >> > Don't use runtime and size at the same time, since runtime limits the
> >> > size. What we normally do we let the fio job fill up the whole rbd or
> we
> >> > limit it only via runtime.
> >> >
> >> >> ============================================
> >> >>
> >> >> FileStore result:
> >> >> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
> >> >> iodepth=64
> >> >> fio-2.1.4
> >> >> Starting 1 thread
> >> >> rbd engine: RBD version: 0.1.8
> >> >>
> >> >> ebs_test: (groupid=0, jobs=1): err= 0: pid=30886: Thu Feb 27 08:09:18
> >> >> 2014
> >> >>   write: io=283040KB, bw=6403.4KB/s, iops=1600, runt= 44202msec
> >> >>     slat (usec): min=116, max=2817, avg=195.78, stdev=56.45
> >> >>     clat (msec): min=8, max=661, avg=39.57, stdev=29.26
> >> >>      lat (msec): min=9, max=661, avg=39.77, stdev=29.25
> >> >>     clat percentiles (msec):
> >> >>      |  1.00th=[   15],  5.00th=[   20], 10.00th=[   23], 20.00th=[
> >> >> 28],
> >> >>      | 30.00th=[   31], 40.00th=[   35], 50.00th=[   37], 60.00th=[
> >> >> 40],
> >> >>      | 70.00th=[   43], 80.00th=[   46], 90.00th=[   51], 95.00th=[
> >> >> 58],
> >> >>      | 99.00th=[  128], 99.50th=[  210], 99.90th=[  457], 99.95th=[
> >> >> 494],
> >> >>      | 99.99th=[  545]
> >> >>     bw (KB  /s): min= 2120, max=12656, per=100.00%, avg=6464.27,
> >> >> stdev=1726.55
> >> >>     lat (msec) : 10=0.01%, 20=5.91%, 50=83.35%, 100=8.88%, 250=1.47%
> >> >>     lat (msec) : 500=0.34%, 750=0.05%
> >> >>   cpu          : usr=29.83%, sys=1.36%, ctx=84002, majf=0, minf=216
> >> >>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=17.4%,
> >> >> >=64=82.6%
> >> >>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >> >> >=64=0.0%
> >> >>      complete  : 0=0.0%, 4=99.1%, 8=0.5%, 16=0.3%, 32=0.1%, 64=0.1%,
> >> >> >=64=0.0%
> >> >>      issued    : total=r=0/w=70760/d=0, short=r=0/w=0/d=0
> >> >>      latency   : target=0, window=0, percentile=100.00%, depth=64
> >> >>
> >> >> Run status group 0 (all jobs):
> >> >>   WRITE: io=283040KB, aggrb=6403KB/s, minb=6403KB/s, maxb=6403KB/s,
> >> >> mint=44202msec, maxt=44202msec
> >> >>
> >> >> Disk stats (read/write):
> >> >>   sdb: ios=5/9512, merge=0/69, ticks=4/10649, in_queue=10645,
> >> >> util=0.92%
> >> >>
> >> >> ===============================================
> >> >>
> >> >> KeyValueStore:
> >> >> ebs_test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd,
> >> >> iodepth=64
> >> >> fio-2.1.4
> >> >> Starting 1 thread
> >> >> rbd engine: RBD version: 0.1.8
> >> >>
> >> >> ebs_test: (groupid=0, jobs=1): err= 0: pid=29137: Thu Feb 27 08:06:30
> >> >> 2014
> >> >>   write: io=444376KB, bw=6280.2KB/s, iops=1570, runt= 70759msec
> >> >>     slat (usec): min=122, max=3237, avg=184.51, stdev=37.76
> >> >>     clat (msec): min=10, max=168, avg=40.57, stdev= 5.70
> >> >>      lat (msec): min=11, max=168, avg=40.75, stdev= 5.71
> >> >>     clat percentiles (msec):
> >> >>      |  1.00th=[   34],  5.00th=[   37], 10.00th=[   39], 20.00th=[
> >> >> 39],
> >> >>      | 30.00th=[   40], 40.00th=[   40], 50.00th=[   41], 60.00th=[
> >> >> 41],
> >> >>      | 70.00th=[   42], 80.00th=[   42], 90.00th=[   44], 95.00th=[
> >> >> 45],
> >> >>      | 99.00th=[   48], 99.50th=[   50], 99.90th=[  163], 99.95th=[
> >> >> 167],
> >> >>      | 99.99th=[  167]
> >> >>     bw (KB  /s): min= 4590, max= 7480, per=100.00%, avg=6285.69,
> >> >> stdev=374.22
> >> >>     lat (msec) : 20=0.02%, 50=99.58%, 100=0.23%, 250=0.17%
> >> >>   cpu          : usr=29.11%, sys=1.10%, ctx=118564, majf=0, minf=194
> >> >>   IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.7%,
> >> >> >=64=99.3%
> >> >>      submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%,
> >> >> >=64=0.0%
> >> >>      complete  : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.1%,
> >> >> >=64=0.0%
> >> >>      issued    : total=r=0/w=111094/d=0, short=r=0/w=0/d=0
> >> >>      latency   : target=0, window=0, percentile=100.00%, depth=64
> >> >>
> >> >> Run status group 0 (all jobs):
> >> >>   WRITE: io=444376KB, aggrb=6280KB/s, minb=6280KB/s, maxb=6280KB/s,
> >> >> mint=70759msec, maxt=70759msec
> >> >>
> >> >> Disk stats (read/write):
> >> >>   sdb: ios=0/15936, merge=0/272, ticks=0/17157, in_queue=17146,
> >> >> util=0.94%
> >> >>
> >> >>
> >> >> It's just a simple test, maybe exist some misleadings on the config
> or
> >> >> results. But
> >> >> we can obviously see the conspicuous improvement for KeyValueStore.
> >> >
> >> > The numbers are hard to compare. Since the tests wrote a different
> >> > amount of data. This could influence the numbers.
> >> >
> >> > Do you mean improvements compared to former implementation or to
> >> > FileStore?
> >> >
> >> > Without a retest with the latest fio rbd engine: there is not so much
> >> > difference between KVS and FS atm.
> >> >
> >> > Btw. Nice to see the rbd engine is useful to others ;-)
> >>
> >> Thanks for your advise and jobs on fio-rbd. :)
> >>
> >> The test isn't preciseness and just a simple test to show the progress
> >> of kvstore.
> >>
> >> >
> >> > Regards
> >> >
> >> > Danny
> >>
> >>
> >>
> >> --
> >> Best Regards,
> >>
> >> Wheat
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
> >
>
>
>
> --
> Best Regards,
>
> Wheat
>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found]                     ` <CAOj3taONVV8rMAgu8Cen=eCZ2ycYjSs8u3nfKiJUptdb7ffyEA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-03 19:34                       ` Danny Al-Gaaf
       [not found]                         ` <538E2361.2040400-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Danny Al-Gaaf @ 2014-06-03 19:34 UTC (permalink / raw)
  To: Sushma R, Haomai Wang
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

On 03.06.2014 20:55, Sushma R wrote:
> Haomai,
> 
> I'm using the latest ceph master branch.
> 
> ceph_smalliobench is a Ceph internal benchmarking tool similar to rados
> bench and the performance is more or less similar to that reported by fio.
> 
> I tried to use fio with rbd ioengine (
> http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html)
> and below are the numbers with different workloads on our setup.
> Note : fio rbd engine segfaults with randread IO pattern, only with LevelDB
> (no issues with FileStore). With FileStore, performance of
> ceph_smalliobench and fio-rbd is similar for READs, so the numbers for
> randread for LevelDB are with ceph_smalliobench (since fio rbd segfaults).

Could you send me a backtrace of the segfault and some info about the
ceph and fio version you used so that I can take a look at it?

Thanks,

Danny

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found]                         ` <538E2361.2040400-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
@ 2014-06-03 19:38                           ` Sushma R
       [not found]                             ` <CAOj3taNQeKyicMvmaMU-EcE_abzoZ1fSERDNqONy99jvJLCg-A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 10+ messages in thread
From: Sushma R @ 2014-06-03 19:38 UTC (permalink / raw)
  To: Danny Al-Gaaf
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA



ceph version : master (ceph version 0.80-713-g86754cc
(86754cc78ca570f19f5a68fb634d613f952a22eb))
fio version : fio-2.1.9-20-g290a

gdb backtrace
#0  0x00007ffff6de5249 in AO_fetch_and_add_full (incr=1, p=0x7fff00000018)
at /usr/include/atomic_ops/sysdeps/gcc/x86.h:68
#1  inc (this=0x7fff00000018) at ./include/atomic.h:98
#2  ceph::buffer::ptr::ptr (this=0x7fffecf74820, p=..., o=<optimized out>,
l=0) at common/buffer.cc:575
#3  0x00007ffff6de63df in ceph::buffer::list::append
(this=this@entry=0x7fffc80008e8,
bp=..., off=<optimized out>, len=len@entry=0) at common/buffer.cc:1267
#4  0x00007ffff6de6a44 in ceph::buffer::list::splice (this=0x7fffe4014590,
off=<optimized out>, len=64512, claim_by=0x7fffc80008e8) at
common/buffer.cc:1426
#5  0x00007ffff7b89d45 in
Striper::StripedReadResult::add_partial_sparse_result (this=0x7fffe4014278,
cct=0x7fffe4006f50, bl=..., bl_map=..., bl_off=3670016,
    buffer_extents=...) at osdc/Striper.cc:291
#6  0x00007ffff7b180d8 in librbd::C_AioRead::finish (this=0x7fffe400d6a0,
r=<optimized out>) at librbd/AioCompletion.cc:94
#7  0x00007ffff7b182f9 in Context::complete (this=0x7fffe400d6a0,
r=<optimized out>) at ./include/Context.h:64
#8  0x00007ffff7b1840d in librbd::AioRequest::complete
(this=0x7fffe4014540, r=0) at ./librbd/AioRequest.h:40
#9  0x00007ffff6d3a538 in librados::C_AioComplete::finish
(this=0x7fffdc0025c0, r=<optimized out>) at
./librados/AioCompletionImpl.h:178
#10 0x00007ffff7b182f9 in Context::complete (this=0x7fffdc0025c0,
r=<optimized out>) at ./include/Context.h:64
#11 0x00007ffff6dc85f0 in Finisher::finisher_thread_entry
(this=0x7fffe400c7c8) at common/Finisher.cc:56
#12 0x00007ffff5f49f8e in start_thread (arg=0x7fffecf75700) at
pthread_create.c:311
#13 0x00007ffff5a6fa0d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thanks,
Sushma


On Tue, Jun 3, 2014 at 12:34 PM, Danny Al-Gaaf <danny.al-gaaf-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
wrote:

> Am 03.06.2014 20:55, schrieb Sushma R:
> > Haomai,
> >
> > I'm using the latest ceph master branch.
> >
> > ceph_smalliobench is a Ceph internal benchmarking tool similar to rados
> > bench and the performance is more or less similar to that reported by
> fio.
> >
> > I tried to use fio with rbd ioengine (
> >
> http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
> )
> > and below are the numbers with different workloads on our setup.
> > Note : fio rbd engine segfaults with randread IO pattern, only with
> LevelDB
> > (no issues with FileStore). With FileStore, performance of
> > ceph_smalliobench and fio-rbd is similar for READs, so the numbers for
> > randread for LevelDB are with ceph_smalliobench (since fio rbd
> segfaults).
>
> Could you send me a backtrace of the segfault and some info about the
> ceph and fio version you used so that I can take a look at it?
>
> Thanks,
>
> Danny
>
>


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [Announce] The progress of KeyValueStore in Firefly
       [not found]                             ` <CAOj3taNQeKyicMvmaMU-EcE_abzoZ1fSERDNqONy99jvJLCg-A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-04  5:02                               ` Haomai Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Haomai Wang @ 2014-06-04  5:02 UTC (permalink / raw)
  To: Sushma R
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

The fix pull request is https://github.com/ceph/ceph/pull/1912/files.

Could someone help review and merge it?

On Wed, Jun 4, 2014 at 3:38 AM, Sushma R <gsushma-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> ceph version : master (ceph version 0.80-713-g86754cc
> (86754cc78ca570f19f5a68fb634d613f952a22eb))
> fio version : fio-2.1.9-20-g290a
>
> gdb backtrace
> #0  0x00007ffff6de5249 in AO_fetch_and_add_full (incr=1, p=0x7fff00000018)
> at /usr/include/atomic_ops/sysdeps/gcc/x86.h:68
> #1  inc (this=0x7fff00000018) at ./include/atomic.h:98
> #2  ceph::buffer::ptr::ptr (this=0x7fffecf74820, p=..., o=<optimized out>,
> l=0) at common/buffer.cc:575
> #3  0x00007ffff6de63df in ceph::buffer::list::append
> (this=this@entry=0x7fffc80008e8, bp=..., off=<optimized out>,
> len=len@entry=0) at common/buffer.cc:1267
> #4  0x00007ffff6de6a44 in ceph::buffer::list::splice (this=0x7fffe4014590,
> off=<optimized out>, len=64512, claim_by=0x7fffc80008e8) at
> common/buffer.cc:1426
> #5  0x00007ffff7b89d45 in
> Striper::StripedReadResult::add_partial_sparse_result (this=0x7fffe4014278,
> cct=0x7fffe4006f50, bl=..., bl_map=..., bl_off=3670016,
>     buffer_extents=...) at osdc/Striper.cc:291
> #6  0x00007ffff7b180d8 in librbd::C_AioRead::finish (this=0x7fffe400d6a0,
> r=<optimized out>) at librbd/AioCompletion.cc:94
> #7  0x00007ffff7b182f9 in Context::complete (this=0x7fffe400d6a0,
> r=<optimized out>) at ./include/Context.h:64
> #8  0x00007ffff7b1840d in librbd::AioRequest::complete (this=0x7fffe4014540,
> r=0) at ./librbd/AioRequest.h:40
> #9  0x00007ffff6d3a538 in librados::C_AioComplete::finish
> (this=0x7fffdc0025c0, r=<optimized out>) at
> ./librados/AioCompletionImpl.h:178
> #10 0x00007ffff7b182f9 in Context::complete (this=0x7fffdc0025c0,
> r=<optimized out>) at ./include/Context.h:64
> #11 0x00007ffff6dc85f0 in Finisher::finisher_thread_entry
> (this=0x7fffe400c7c8) at common/Finisher.cc:56
> #12 0x00007ffff5f49f8e in start_thread (arg=0x7fffecf75700) at
> pthread_create.c:311
> #13 0x00007ffff5a6fa0d in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
>
> Thanks,
> Sushma
>
>
> On Tue, Jun 3, 2014 at 12:34 PM, Danny Al-Gaaf <danny.al-gaaf-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
> wrote:
>>
>> Am 03.06.2014 20:55, schrieb Sushma R:
>> > Haomai,
>> >
>> > I'm using the latest ceph master branch.
>> >
>> > ceph_smalliobench is a Ceph internal benchmarking tool similar to rados
>> > bench and the performance is more or less similar to that reported by
>> > fio.
>> >
>> > I tried to use fio with rbd ioengine (
>> >
>> > http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html)
>> > and below are the numbers with different workloads on our setup.
>> > Note : fio rbd engine segfaults with randread IO pattern, only with
>> > LevelDB
>> > (no issues with FileStore). With FileStore, performance of
>> > ceph_smalliobench and fio-rbd is similar for READs, so the numbers for
>> > randread for LevelDB are with ceph_smalliobench (since fio rbd
>> > segfaults).
>>
>> Could you send me a backtrace of the segfault and some info about the
>> ceph and fio version you used so that I can take a look at it?
>>
>> Thanks,
>>
>> Danny
>>
>



-- 
Best Regards,

Wheat

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2014-06-04  5:02 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-02-28  2:45 [Announce] The progress of KeyValueStore in Firefly Haomai Wang
     [not found] ` <CACJqLybuA48Jnz6Qwc7cs2kHJO30C4GwazXi8yGp8ZhvfFc2ZQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-02-28  6:59   ` Alexandre DERUMIER
2014-03-01  0:04   ` Danny Al-Gaaf
     [not found]     ` <531123F6.5070007-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
2014-03-01  7:00       ` Haomai Wang
     [not found]         ` <CACJqLybtitrDOcTcRXBZYJy552JzeYZWp_d=yWGUofxT7E46+A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-06-03  5:36           ` Sushma R
     [not found]             ` <CAOj3taPYaJfdipyiqFavw1AT24ZumrCVWm4FrUnEc1Ki7cah9Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-06-03  7:06               ` Haomai Wang
     [not found]                 ` <CACJqLyY=BBKKdwOsEoDbo8OfD+o2B2XoSEtmnQ8=nNCm4EaADQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-06-03 18:55                   ` Sushma R
     [not found]                     ` <CAOj3taONVV8rMAgu8Cen=eCZ2ycYjSs8u3nfKiJUptdb7ffyEA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-06-03 19:34                       ` Danny Al-Gaaf
     [not found]                         ` <538E2361.2040400-2YacvwyR+KOzQB+pC5nmwQ@public.gmane.org>
2014-06-03 19:38                           ` Sushma R
     [not found]                             ` <CAOj3taNQeKyicMvmaMU-EcE_abzoZ1fSERDNqONy99jvJLCg-A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-06-04  5:02                               ` Haomai Wang
