* using cache-tier with writeback mode, rados bench results degrade
@ 2016-01-08 16:06 hnuzhoulin
From: hnuzhoulin @ 2016-01-08 16:06 UTC (permalink / raw)
  To: ceph-devel-u79uwXL29TY76Z2rM5mHXA
  Cc: 'ceph-users-Qp0mS5GaXlQ@public.gmane.org'

Hi guys,
Recently I have been testing a cache tier in writeback mode, but I
found something strange: rados bench performance degrades after the
tier is added. Is that expected? If so, how can it be explained?
Some info about my test follows.

Storage nodes: 4 machines, each with two Intel SSDSC2BB120G4 SSDs
(one for the system, the other used as an OSD) and four SATA disks
used as OSDs.

Before adding the cache tier:
root@ceph1:~# rados bench -p coldstorage 300 write --no-cleanup
----------------------------------------------------------------
Total time run:         301.236355
Total writes made:      6041
Write size:             4194304
Bandwidth (MB/sec):     80.216

Stddev Bandwidth:       10.5358
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average Latency:        0.797838
Stddev Latency:         0.619098
Max latency:            4.89823
Min latency:            0.158543

root@ceph1:/root/cluster# rados bench -p coldstorage  300 seq
Total time run:        133.563980
Total reads made:     6041
Read size:            4194304
Bandwidth (MB/sec):    180.917

Average Latency:       0.353559
Max latency:           1.83356
Min latency:           0.027878
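
For reference, both runs above use the rados bench defaults of 16
concurrent operations and 4 MB objects (hence "Write size: 4194304");
written out explicitly they are roughly equivalent to:

rados bench -p coldstorage -b 4194304 -t 16 300 write --no-cleanup
rados bench -p coldstorage -t 16 300 seq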

After configuring the cache tier:
root@ubuntu:~/benchmarkcollect/Monitor# ceph osd tier add coldstorage  
hotstorage
pool 'hotstorage' is now (or already was) a tier of 'coldstorage'

root@ubuntu:~/benchmarkcollect/Monitor# ceph osd tier cache-mode  
hotstorage writeback
set cache-mode for pool 'hotstorage' to writeback

root@ubuntu:~/benchmarkcollect/Monitor# ceph osd tier set-overlay  
coldstorage hotstorage
overlay for 'coldstorage' is now (or already was) 'hotstorage'

root@ubuntu:~# ceph osd dump|grep storage
pool 6 'coldstorage' replicated size 3 min_size 1 crush_ruleset 0  
object_hash rjenkins pg_num 512 pgp_num 512 last_change 216 lfor 216 flags  
hashpspool tiers 7 read_tier 7 write_tier 7 stripe_width 0
pool 7 'hotstorage' replicated size 3 min_size 1 crush_ruleset 1  
object_hash rjenkins pg_num 128 pgp_num 128 last_change 228 flags  
hashpspool,incomplete_clones tier_of 6 cache_mode writeback target_bytes  
100000000000 hit_set bloom{false_positive_probability: 0.05, target_size:  
0, seed: 0} 3600s x6 stripe_width 0
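
The hit_set and target_bytes values in the dump are ordinary per-pool
settings applied with "ceph osd pool set". The exact commands used for
this test are not shown; roughly, they would look like this (values
copied from the dump above):

ceph osd pool set hotstorage hit_set_type bloom
ceph osd pool set hotstorage hit_set_count 6
ceph osd pool set hotstorage hit_set_period 3600
ceph osd pool set hotstorage target_max_bytes 100000000000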
-------------------------------------------------------------
rados bench -p coldstorage 300 write --no-cleanup
Total time run: 302.207573
Total writes made: 4315
Write size: 4194304
Bandwidth (MB/sec): 57.113

Stddev Bandwidth: 23.9375
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 0
Average Latency: 1.1204
Stddev Latency: 0.717092
Max latency: 6.97288
Min latency: 0.158371

root@ubuntu:/# rados bench -p coldstorage 300 seq
Total time run: 153.869741
Total reads made: 4315
Read size: 4194304
Bandwidth (MB/sec): 112.173

Average Latency: 0.570487
Max latency: 1.75137
Min latency: 0.039635
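
To check whether the slowdown coincides with the cache tier flushing
dirty objects down to coldstorage, the per-pool statistics can be
watched while the benchmark runs, for example (column names vary a
little between releases):

ceph df detail                  # cache pools also report a DIRTY object count
ceph osd pool stats hotstorage  # per-pool client I/O rates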


ceph.conf:
--------------------------------------------
[global]
fsid = 4ec1eb64-226c-4d90-8c5c-b6b6644be831
mon_initial_members = ceph2, ceph3, ceph4
mon_host = 10.**.**.241,10.**.**.242,10.**.**.243
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 3
osd_pool_default_min_size = 1
auth_supported = cephx
osd_journal_size = 10240
osd_mkfs_type = xfs
osd crush update on start = false

[client]
rbd_cache = true
rbd_cache_writethrough_until_flush = false
rbd_cache_size = 33554432
rbd_cache_max_dirty = 25165824
rbd_cache_target_dirty = 16777216
rbd_cache_max_dirty_age = 1
rbd_cache_block_writes_upfront = false
[osd]
filestore_omap_header_cache_size = 40000
filestore_fd_cache_size = 40000
filestore_fiemap = true
client_readahead_min = 2097152
client_readahead_max_bytes = 0
client_readahead_max_periods = 4
filestore_journal_writeahead = false
filestore_max_sync_interval = 10
filestore_queue_max_ops = 500
filestore_queue_max_bytes = 1048576000
filestore_queue_committing_max_ops = 5000
filestore_queue_committing_max_bytes = 1048576000
keyvaluestore_queue_max_ops = 500
keyvaluestore_queue_max_bytes = 1048576000
journal_queue_max_ops = 30000
journal_queue_max_bytes = 3355443200
osd_op_threads = 20
osd_disk_threads = 8
filestore_op_threads = 4
osd_mount_options_xfs = rw,noatime,nobarrier,inode64,logbsize=256k,delaylog

[mon]
mon_osd_allow_primary_affinity=true

-- 
Using Opera's e-mail client: http://www.opera.com/mail/

* Re: using cache-tier with writeback mode, rados bench results degrade
@ 2016-01-08 16:13 Wade Holler
From: Wade Holler @ 2016-01-08 16:13 UTC (permalink / raw)
  To: hnuzhoulin, ceph-devel-u79uwXL29TY76Z2rM5mHXA; +Cc: ceph-users-Qp0mS5GaXlQ



My experience is that performance degrades dramatically when dirty
objects are flushed.
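
The thresholds that make a writeback tier start flushing and evicting
are separate per-pool settings on the cache pool; they are not visible
in the short osd dump output above. For illustration only (the values
below are arbitrary, not recommendations):

ceph osd pool set hotstorage cache_target_dirty_ratio 0.4  # start flushing at 40% of target size
ceph osd pool set hotstorage cache_target_full_ratio 0.8   # start evicting at 80% of target size
ceph osd pool set hotstorage cache_min_flush_age 600       # minimum object age (s) before flush
ceph osd pool set hotstorage cache_min_evict_age 1800      # minimum object age (s) before evict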

Best Regards,
Wade


