Subject: Severe performance degradation with jewel rbd image
From: Somnath Roy @ 2016-05-25 20:48 UTC
To: ceph-devel

Hi Mark/Josh,
As I mentioned in the performance meeting today, if we create an rbd image with the default 'rbd create' command in jewel, individual image performance for 4K random writes does not scale up well, although the aggregated throughput of multiple rbd images running in parallel does scale.
For the same QD and numjobs combination, an image created with image format 1 (or with hammer-like rbd_default_features = 3) delivers *16X* more performance. I did some digging and here are my findings.
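
For reference, the rbd feature values mentioned below are bit flags: layering = 1, striping = 2, exclusive-lock = 4, object-map = 8, fast-diff = 16, deep-flatten = 32 (so the jewel default of 61 is layering + exclusive-lock + object-map + fast-diff + deep-flatten). The hammer-like defaults can be restored with a client-side override along these lines (a sketch, not my exact ceph.conf):

    # ceph.conf on the client -- hammer-era image defaults (sketch)
    [client]
    rbd default features = 3   # layering (1) + striping (2)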

Setup:
--------

32 OSDs (all SSD) across 4 nodes. Pool size = 2, min_size = 1.

root@stormeap-1:~# ceph -s
    cluster db0febf1-d2b0-4f8d-8f20-43731c134763
     health HEALTH_WARN
            noscrub,nodeep-scrub,sortbitwise flag(s) set
     monmap e1: 1 mons at {a=10.60.194.10:6789/0}
            election epoch 5, quorum 0 a
     osdmap e139: 32 osds: 32 up, 32 in
            flags noscrub,nodeep-scrub,sortbitwise
      pgmap v20532: 2500 pgs, 1 pools, 7421 GB data, 1855 kobjects
            14850 GB used, 208 TB / 223 TB avail
                2500 active+clean

IO profile : fio rbd engine, 4K random writes, with QD = 128 and numjobs = 10.
rbd cache is disabled.
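
The fio job file looked roughly like the following (a sketch; the pool/image names are from the runs shown below and vary per test, and rbd cache is disabled via 'rbd cache = false' in ceph.conf rather than in the job file):

    # fio job: 4K random writes against an rbd image (sketch)
    [global]
    ioengine=rbd
    clientname=admin
    pool=recovery_test
    rbdname=rbd_degradation
    rw=randwrite
    bs=4k
    iodepth=128
    numjobs=10
    direct=1
    group_reporting

    [rbd-4k-randwrite]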

Result:
--------
root@stormeap-1:~# rbd info recovery_test/rbd_degradation
rbd image 'rbd_degradation':
        size 1953 GB in 500000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rb.0.5f5f.6b8b4567
        format: 1

On the above image with format 1, fio delivers *~102K iops*.
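
(For reference, a format 1 image of this size can be created with something like the following; --size is in MB here, and 2000000 MB matches the 1953 GB shown above:)

    # create a format 1 image -- sketch
    rbd create recovery_test/rbd_degradation --size 2000000 --image-format 1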

root@stormeap-1:~# rbd info recovery_test/rbd_degradation_with_hammer_features
rbd image 'rbd_degradation_with_hammer_features':
        size 195 GB in 50000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.5f8d6b8b4567
        format: 2
        features: layering
        flags:

On the above image with hammer rbd features enabled, fio delivers *~105K iops*.
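
(Such an image can also be created explicitly, without the config override, along these lines; 200000 MB matches the 195 GB shown above:)

    # format 2 image restricted to the layering feature -- sketch
    rbd create recovery_test/rbd_degradation_with_hammer_features \
        --size 200000 --image-format 2 --image-feature layering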

root@stormeap-1:~# rbd info recovery_test/rbd_degradation_with_7
rbd image 'rbd_degradation_with_7':
        size 195 GB in 50000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.5fd86b8b4567
        format: 2
        features: layering, exclusive-lock
        flags:

On the above image with feature value 7 (exclusive-lock enabled), fio delivers *~8K iops*, i.e. a >12X degradation.

Tried with a single fio job (numjobs = 1) and QD = 128: performance bumped up to ~40K iops, but increasing QD further does not improve it.
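
(If exclusive-lock turns out to be the culprit, it can also be disabled on an existing format 2 image without recreating it, e.g.:)

    # drop exclusive-lock from an existing image -- sketch
    rbd feature disable recovery_test/rbd_degradation_with_7 exclusive-lock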


root@stormeap-1:~# rbd info recovery_test/rbd_degradation_with_15
rbd image 'rbd_degradation_with_15':
        size 195 GB in 50000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.5fab6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map
        flags:

On the above image with feature value 15 (exclusive-lock and object-map enabled), fio again delivers *~8K iops*, a >12X degradation.

Tried with a single fio job (numjobs = 1) and QD = 128: performance bumped up to ~40K iops, but increasing QD further does not improve it.


root@stormeap-1:~# rbd info recovery_test/ceph_recovery_img_1
rbd image 'ceph_recovery_img_1':
        size 4882 GB in 1250000 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.371b6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags:

On the above image with feature value 61 (the jewel default), fio delivers *~6K iops*, a *>16X* degradation.

Tried with a single fio job (numjobs = 1) and QD = 128: performance bumped up to ~35K iops, but increasing QD further does not improve it.

Summary :
------------

1. The exclusive-lock feature appears to be responsible for most of the degradation.

2. Enabling fast-diff and deep-flatten degrades performance a bit further.
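
As a possible workaround until this is root-caused (an untested sketch), the extra features can be stripped from an existing jewel image. fast-diff depends on object-map, and object-map depends on exclusive-lock, so they have to be disabled in that order:

    # disable dependent features first, then exclusive-lock -- sketch
    rbd feature disable recovery_test/ceph_recovery_img_1 fast-diff
    rbd feature disable recovery_test/ceph_recovery_img_1 object-map
    rbd feature disable recovery_test/ceph_recovery_img_1 exclusive-lock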


Let me know if you need more information on this.

Thanks & Regards
Somnath


