From: Somnath Roy
Subject: OpTracker optimization
Date: Tue, 9 Sep 2014 20:33:03 +0000
Message-ID: <755F6B91B3BE364F9BCA11EA3F9E0C6F2783A845@SACMBXIP02.sdcorp.global.sandisk.com>
To: "Samuel Just (sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org)", "Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org)"
Cc: "ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org", "ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org"
List-Id: ceph-devel.vger.kernel.org

Hi Sam/Sage,

As we discussed earlier, enabling the present OpTracker code degrades performance severely. For example, in my setup a single OSD node with 10 clients reaches ~103K read IOPS (with IO served from memory) while op tracking is disabled, but with op tracking enabled it drops to ~39K IOPS. So running an OSD without enabling OpTracker is probably not an option for many Ceph users.

Now, by sharding OpTracker::ops_in_flight_lock (and thus the xlist ops_in_flight) and removing some other bottlenecks, I am able to match the performance of an OSD with op tracking enabled to that with op tracking disabled, at the expense of ~1 extra CPU core.

In the process I have also fixed the following tracker issue:

http://tracker.ceph.com/issues/9384

and probably http://tracker.ceph.com/issues/8885 too.

I have created the following pull request for this. Please review it.
https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath

________________________________
PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).
_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com