From: Somnath Roy
To: Samuel Just, Sage Weil
Cc: ceph-devel@vger.kernel.org, ceph-users@lists.ceph.com
Subject: OpTracker optimization
Date: Tue, 9 Sep 2014 20:33:03 +0000

Hi Sam/Sage,

As we discussed earlier, enabling the present OpTracker code degrades performance severely. For example, in my setup a single OSD node with 10 clients reaches ~103K read IOPS (with IO served from memory) while op tracking is disabled, but with the OpTracker enabled it drops to ~39K IOPS. Running the OSD without the OpTracker enabled is probably not an option for many Ceph users.

Now, by sharding OpTracker::ops_in_flight_lock (and thus the ops_in_flight xlist) and removing some other bottlenecks, I am able to match the performance of an OSD with OpTracking enabled to that of one with OpTracking disabled, at the expense of ~1 extra CPU core.

In the process I have also fixed the following tracker issue:

http://tracker.ceph.com/issues/9384

and probably http://tracker.ceph.com/issues/8885 too.

I have created the following pull request for this work. Please review it.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath
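
(A minimal sketch of the sharding idea described above, for readers following the thread. It is illustrative only: the class, member names and shard count are invented for the example and are not the code in the pull request. The point is that registering or unregistering an in-flight op takes only the lock of the shard the op hashes to, instead of one global lock.)

#include <array>
#include <cstddef>
#include <cstdint>
#include <list>
#include <memory>
#include <mutex>

// Illustrative sharded in-flight-op tracker; not the actual Ceph OpTracker.
struct TrackedOp {
  uint64_t seq = 0;   // monotonically increasing op id
  // ... request payload, arrival/state timestamps, etc.
};

class ShardedOpTracker {
  static constexpr unsigned NUM_SHARDS = 32;      // assumption: fixed shard count

  struct Shard {
    std::mutex lock;                               // stands in for the single ops_in_flight_lock
    std::list<std::shared_ptr<TrackedOp>> ops;     // stands in for the single ops_in_flight xlist
  };
  std::array<Shard, NUM_SHARDS> shards;

  Shard& shard_for(uint64_t seq) { return shards[seq % NUM_SHARDS]; }

public:
  void register_op(const std::shared_ptr<TrackedOp>& op) {
    Shard& s = shard_for(op->seq);
    std::lock_guard<std::mutex> l(s.lock);         // contention limited to one shard
    s.ops.push_back(op);
  }

  void unregister_op(const std::shared_ptr<TrackedOp>& op) {
    Shard& s = shard_for(op->seq);
    std::lock_guard<std::mutex> l(s.lock);
    s.ops.remove(op);
  }

  // Dumping or counting in-flight ops walks every shard; that path is rare,
  // so taking each shard lock in turn is acceptable.
  size_t count_in_flight() {
    size_t n = 0;
    for (auto& s : shards) {
      std::lock_guard<std::mutex> l(s.lock);
      n += s.ops.size();
    }
    return n;
  }
};

With a single global lock, every op registration and completion serializes on the same mutex; with a layout like the one above, only ops that hash to the same shard ever contend.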

From: Samuel Just
Subject: Re: OpTracker optimization
Date: Wed, 10 Sep 2014 11:16:51 -0700

Added a comment about the approach.
-Sam

From: Somnath Roy
Subject: Re: OpTracker optimization
Date: Wed, 10 Sep 2014 20:30:48 +0000

Thanks Sam, I responded back. :-)

From: Samuel Just
Subject: Re: OpTracker optimization
Date: Wed, 10 Sep 2014 14:36:27 -0700

Responded with cosmetic nonsense. Once you've got that and the other comments addressed, I can put it in wip-sam-testing.
-Sam

From: Somnath Roy
Subject: Re: OpTracker optimization
Date: Wed, 10 Sep 2014 21:38:30 +0000

Thanks Sam. So, you want me to go with the optracker/ShardedOpWQ approach, right?

Regards
Somnath

From: Samuel Just
Subject: Re: OpTracker optimization
Date: Wed, 10 Sep 2014 15:07:55 -0700

I don't quite understand.
-Sam

From: Somnath Roy
Subject: RE: OpTracker optimization
Date: Wed, 10 Sep 2014 22:13:08 +0000

As I understand it, you want me to implement the following:

1. Keep this implementation's one sharded optracker for the IOs going through the ms_dispatch path.

2. Additionally, for IOs going through ms_fast_dispatch, implement an optracker (without internal sharding) per opwq shard.

Am I right?

Thanks & Regards
Somnath

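(Again purely illustrative, reusing the invented TrackedOp type from the sketch earlier in the thread: the layout asked about in point 2 would give each op work queue shard its own small, unsharded tracker, so ops on the fast-dispatch path never take a lock shared with other shards. The names and shard count below are made up for the example and are not Ceph code.)

#include <list>
#include <memory>
#include <mutex>
#include <vector>

struct TrackedOp;                     // the illustrative op type from the earlier sketch

struct SimpleOpTracker {              // unsharded: one lock, one list
  std::mutex lock;
  std::list<std::shared_ptr<TrackedOp>> ops;
};

struct OpWQShard {
  SimpleOpTracker tracker;            // private to this work queue shard
  // ... the shard's op queue and worker threads would live here as well
};

// One tracker per work queue shard for ms_fast_dispatch traffic, while the
// ms_dispatch path keeps the single sharded tracker.
std::vector<OpWQShard> opwq_shards(16);   // assumption: 16 shards, arbitrary
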
From: Samuel Just
Subject: Re: OpTracker optimization
Date: Wed, 10 Sep 2014 15:25:13 -0700

Oh, I changed my mind, your approach is fine. I was unclear. Currently, I just need you to address the other comments.
-Sam

From: Somnath Roy
Subject: RE: OpTracker optimization
Date: Thu, 11 Sep 2014 01:52:16 +0000

Sam/Sage,
I have incorporated all of your comments. Please have a look at the same pull request.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath

From: Sage Weil
Subject: RE: OpTracker optimization
Date: Wed, 10 Sep 2014 20:33:22 -0700 (PDT)

I had two substantive comments on the first patch and then some trivial whitespace nits. Otherwise looks good!

thanks-
sage

From: Somnath Roy
Subject: Re: OpTracker optimization
Date: Thu, 11 Sep 2014 18:30:00 +0000

Sam/Sage,
I have addressed all of your comments and pushed the changes to the same pull request.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath

> >>> > From mboxrd@z Thu Jan 1 00:00:00 1970 From: Samuel Just Subject: Re: OpTracker optimization Date: Thu, 11 Sep 2014 11:30:59 -0700 Message-ID: References: <755F6B91B3BE364F9BCA11EA3F9E0C6F2783A845@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C56D@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C636@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C65C@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C7E4@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783CED4@SACMBXIP02.sdcorp.global.sandisk.com> Mime-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8BIT Return-path: Received: from mail-qa0-f41.google.com ([209.85.216.41]:38897 "EHLO mail-qa0-f41.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751091AbaIKSgZ convert rfc822-to-8bit (ORCPT ); Thu, 11 Sep 2014 14:36:25 -0400 Received: by mail-qa0-f41.google.com with SMTP id f12so3843714qad.0 for ; Thu, 11 Sep 2014 11:36:22 -0700 (PDT) In-Reply-To: <755F6B91B3BE364F9BCA11EA3F9E0C6F2783CED4@SACMBXIP02.sdcorp.global.sandisk.com> Sender: ceph-devel-owner@vger.kernel.org List-ID: To: Somnath Roy Cc: Sage Weil , "ceph-devel@vger.kernel.org" , "ceph-users@lists.ceph.com" Just added it to wip-sam-testing. -Sam On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy wrote: > Sam/Sage, > I have addressed all of your comments and pushed the changes to the same pull request. > > https://github.com/ceph/ceph/pull/2440 > > Thanks & Regards > Somnath > > -----Original Message----- > From: Sage Weil [mailto:sweil@redhat.com] > Sent: Wednesday, September 10, 2014 8:33 PM > To: Somnath Roy > Cc: Samuel Just; ceph-devel@vger.kernel.org; ceph-users@lists.ceph.com > Subject: RE: OpTracker optimization > > I had two substantiative comments on the first patch and then some trivial > whitespace nits. Otherwise looks good! > > tahnks- > sage > > On Thu, 11 Sep 2014, Somnath Roy wrote: > >> Sam/Sage, >> I have incorporated all of your comments. Please have a look at the same pull request. >> >> https://github.com/ceph/ceph/pull/2440 >> >> Thanks & Regards >> Somnath >> >> -----Original Message----- >> From: Samuel Just [mailto:sam.just@inktank.com] >> Sent: Wednesday, September 10, 2014 3:25 PM >> To: Somnath Roy >> Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; >> ceph-users@lists.ceph.com >> Subject: Re: OpTracker optimization >> >> Oh, I changed my mind, your approach is fine. I was unclear. >> Currently, I just need you to address the other comments. >> -Sam >> >> On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy wrote: >> > As I understand, you want me to implement the following. >> > >> > 1. Keep this implementation one sharded optracker for the ios going through ms_dispatch path. >> > >> > 2. Additionally, for ios going through ms_fast_dispatch, you want me >> > to implement optracker (without internal shard) per opwq shard >> > >> > Am I right ? >> > >> > Thanks & Regards >> > Somnath >> > >> > -----Original Message----- >> > From: Samuel Just [mailto:sam.just@inktank.com] >> > Sent: Wednesday, September 10, 2014 3:08 PM >> > To: Somnath Roy >> > Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; >> > ceph-users@lists.ceph.com >> > Subject: Re: OpTracker optimization >> > >> > I don't quite understand. >> > -Sam >> > >> > On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy wrote: >> >> Thanks Sam. >> >> So, you want me to go with optracker/shadedopWq , right ? 
>> >> >> >> Regards >> >> Somnath >> >> >> >> -----Original Message----- >> >> From: Samuel Just [mailto:sam.just@inktank.com] >> >> Sent: Wednesday, September 10, 2014 2:36 PM >> >> To: Somnath Roy >> >> Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; >> >> ceph-users@lists.ceph.com >> >> Subject: Re: OpTracker optimization >> >> >> >> Responded with cosmetic nonsense. Once you've got that and the other comments addressed, I can put it in wip-sam-testing. >> >> -Sam >> >> >> >> On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy wrote: >> >>> Thanks Sam..I responded back :-) >> >>> >> >>> -----Original Message----- >> >>> From: ceph-devel-owner@vger.kernel.org >> >>> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Samuel Just >> >>> Sent: Wednesday, September 10, 2014 11:17 AM >> >>> To: Somnath Roy >> >>> Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; >> >>> ceph-users@lists.ceph.com >> >>> Subject: Re: OpTracker optimization >> >>> >> >>> Added a comment about the approach. >> >>> -Sam >> >>> >> >>> On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy wrote: >> >>>> Hi Sam/Sage, >> >>>> >> >>>> As we discussed earlier, enabling the present OpTracker code >> >>>> degrading performance severely. For example, in my setup a single >> >>>> OSD node with >> >>>> 10 clients is reaching ~103K read iops with io served from memory >> >>>> while optracking is disabled but enabling optracker it is reduced to ~39K iops. >> >>>> Probably, running OSD without enabling OpTracker is not an option >> >>>> for many of Ceph users. >> >>>> >> >>>> Now, by sharding the Optracker:: ops_in_flight_lock (thus xlist >> >>>> ops_in_flight) and removing some other bottlenecks I am able to >> >>>> match the performance of OpTracking enabled OSD with OpTracking >> >>>> disabled, but with the expense of ~1 extra cpu core. >> >>>> >> >>>> In this process I have also fixed the following tracker. >> >>>> >> >>>> >> >>>> >> >>>> http://tracker.ceph.com/issues/9384 >> >>>> >> >>>> >> >>>> >> >>>> and probably http://tracker.ceph.com/issues/8885 too. >> >>>> >> >>>> >> >>>> >> >>>> I have created following pull request for the same. Please review it. >> >>>> >> >>>> >> >>>> >> >>>> https://github.com/ceph/ceph/pull/2440 >> >>>> >> >>>> >> >>>> >> >>>> Thanks & Regards >> >>>> >> >>>> Somnath >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> ________________________________ >> >>>> >> >>>> PLEASE NOTE: The information contained in this electronic mail >> >>>> message is intended only for the use of the designated >> >>>> recipient(s) named above. If the reader of this message is not >> >>>> the intended recipient, you are hereby notified that you have >> >>>> received this message in error and that any review, >> >>>> dissemination, distribution, or copying of this message is >> >>>> strictly prohibited. If you have received this communication in >> >>>> error, please notify the sender by telephone or e-mail (as shown >> >>>> above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). >> >>>> >> >>> -- >> >>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" >> >>> in the body of a message to majordomo@vger.kernel.org More >> >>> majordomo info at http://vger.kernel.org/majordomo-info.html >> >>> >> >>> ________________________________ >> >>> >> >>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. 
If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). >> >>> >> From mboxrd@z Thu Jan 1 00:00:00 1970 From: Somnath Roy Subject: Re: OpTracker optimization Date: Sat, 13 Sep 2014 08:03:52 +0000 Message-ID: <755F6B91B3BE364F9BCA11EA3F9E0C6F2783E75D@SACMBXIP02.sdcorp.global.sandisk.com> References: <755F6B91B3BE364F9BCA11EA3F9E0C6F2783A845@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C56D@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C636@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C65C@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C7E4@SACMBXIP02.sdcorp.global.sandisk.com> <755F6B91B3BE364F9BCA11EA3F9E0C6F2783CED4@SACMBXIP02.sdcorp.global.sandisk.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: In-Reply-To: Content-Language: en-US List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ceph-users-bounces-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org Sender: "ceph-users" To: Samuel Just Cc: Sage Weil , "ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org" , "ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org" List-Id: ceph-devel.vger.kernel.org Sam/Sage, I saw Giant is forked off today. We need the pull request (https://github.com/ceph/ceph/pull/2440) to be in Giant. So, could you please merge this into Giant when it will be ready ? Thanks & Regards Somnath -----Original Message----- From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] Sent: Thursday, September 11, 2014 11:31 AM To: Somnath Roy Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org Subject: Re: OpTracker optimization Just added it to wip-sam-testing. -Sam On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy wrote: > Sam/Sage, > I have addressed all of your comments and pushed the changes to the same pull request. > > https://github.com/ceph/ceph/pull/2440 > > Thanks & Regards > Somnath > > -----Original Message----- > From: Sage Weil [mailto:sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org] > Sent: Wednesday, September 10, 2014 8:33 PM > To: Somnath Roy > Cc: Samuel Just; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > Subject: RE: OpTracker optimization > > I had two substantiative comments on the first patch and then some trivial > whitespace nits. Otherwise looks good! > > tahnks- > sage > > On Thu, 11 Sep 2014, Somnath Roy wrote: > >> Sam/Sage, >> I have incorporated all of your comments. Please have a look at the same pull request. 
>> >> https://github.com/ceph/ceph/pull/2440 >> >> Thanks & Regards >> Somnath >> >> -----Original Message----- >> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] >> Sent: Wednesday, September 10, 2014 3:25 PM >> To: Somnath Roy >> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org >> Subject: Re: OpTracker optimization >> >> Oh, I changed my mind, your approach is fine. I was unclear. >> Currently, I just need you to address the other comments. >> -Sam >> >> On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy wrote: >> > As I understand, you want me to implement the following. >> > >> > 1. Keep this implementation one sharded optracker for the ios going through ms_dispatch path. >> > >> > 2. Additionally, for ios going through ms_fast_dispatch, you want >> > me to implement optracker (without internal shard) per opwq shard >> > >> > Am I right ? >> > >> > Thanks & Regards >> > Somnath >> > >> > -----Original Message----- >> > From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] >> > Sent: Wednesday, September 10, 2014 3:08 PM >> > To: Somnath Roy >> > Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; >> > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org >> > Subject: Re: OpTracker optimization >> > >> > I don't quite understand. >> > -Sam >> > >> > On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy wrote: >> >> Thanks Sam. >> >> So, you want me to go with optracker/shadedopWq , right ? >> >> >> >> Regards >> >> Somnath >> >> >> >> -----Original Message----- >> >> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] >> >> Sent: Wednesday, September 10, 2014 2:36 PM >> >> To: Somnath Roy >> >> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; >> >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org >> >> Subject: Re: OpTracker optimization >> >> >> >> Responded with cosmetic nonsense. Once you've got that and the other comments addressed, I can put it in wip-sam-testing. >> >> -Sam >> >> >> >> On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy wrote: >> >>> Thanks Sam..I responded back :-) >> >>> >> >>> -----Original Message----- >> >>> From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org >> >>> [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Samuel >> >>> Just >> >>> Sent: Wednesday, September 10, 2014 11:17 AM >> >>> To: Somnath Roy >> >>> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; >> >>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org >> >>> Subject: Re: OpTracker optimization >> >>> >> >>> Added a comment about the approach. >> >>> -Sam >> >>> >> >>> On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy wrote: >> >>>> Hi Sam/Sage, >> >>>> >> >>>> As we discussed earlier, enabling the present OpTracker code >> >>>> degrading performance severely. For example, in my setup a >> >>>> single OSD node with >> >>>> 10 clients is reaching ~103K read iops with io served from >> >>>> memory while optracking is disabled but enabling optracker it is reduced to ~39K iops. >> >>>> Probably, running OSD without enabling OpTracker is not an >> >>>> option for many of Ceph users. 
>> >>>> >> >>>> Now, by sharding the Optracker:: ops_in_flight_lock (thus xlist >> >>>> ops_in_flight) and removing some other bottlenecks I am able to >> >>>> match the performance of OpTracking enabled OSD with OpTracking >> >>>> disabled, but with the expense of ~1 extra cpu core. >> >>>> >> >>>> In this process I have also fixed the following tracker. >> >>>> >> >>>> >> >>>> >> >>>> http://tracker.ceph.com/issues/9384 >> >>>> >> >>>> >> >>>> >> >>>> and probably http://tracker.ceph.com/issues/8885 too. >> >>>> >> >>>> >> >>>> >> >>>> I have created following pull request for the same. Please review it. >> >>>> >> >>>> >> >>>> >> >>>> https://github.com/ceph/ceph/pull/2440 >> >>>> >> >>>> >> >>>> >> >>>> Thanks & Regards >> >>>> >> >>>> Somnath >> >>>> >> >>>> >> >>>> >> >>>> >> >>>> ________________________________ >> >>>> >> >>>> PLEASE NOTE: The information contained in this electronic mail >> >>>> message is intended only for the use of the designated >> >>>> recipient(s) named above. If the reader of this message is not >> >>>> the intended recipient, you are hereby notified that you have >> >>>> received this message in error and that any review, >> >>>> dissemination, distribution, or copying of this message is >> >>>> strictly prohibited. If you have received this communication in >> >>>> error, please notify the sender by telephone or e-mail (as shown >> >>>> above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). >> >>>> >> >>> -- >> >>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" >> >>> in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More >> >>> majordomo info at http://vger.kernel.org/majordomo-info.html >> >>> >> >>> ________________________________ >> >>> >> >>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
>> >>> >> From mboxrd@z Thu Jan 1 00:00:00 1970 From: Alexandre DERUMIER Subject: Re: [ceph-users] OpTracker optimization Date: Sat, 13 Sep 2014 11:00:42 +0200 (CEST) Message-ID: <0b0459cc-3a8b-485f-9c69-d64c92d6c3dd@mailpro> References: <755F6B91B3BE364F9BCA11EA3F9E0C6F2783E75D@SACMBXIP02.sdcorp.global.sandisk.com> Mime-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: QUOTED-PRINTABLE Return-path: Received: from mailpro.odiso.net ([89.248.209.98]:49521 "EHLO mailpro.odiso.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751765AbaIMJAw convert rfc822-to-8bit (ORCPT ); Sat, 13 Sep 2014 05:00:52 -0400 In-Reply-To: <755F6B91B3BE364F9BCA11EA3F9E0C6F2783E75D@SACMBXIP02.sdcorp.global.sandisk.com> Sender: ceph-devel-owner@vger.kernel.org List-ID: To: Somnath Roy Cc: Sage Weil , ceph-devel@vger.kernel.org, ceph-users@lists.ceph.com, Samuel Just
Hi,
as a ceph user, it would be wonderful to have it for Giant; the optracker performance impact is really huge (see my SSD benchmark on the ceph-users mailing list).
Regards,
Alexandre Derumier
----- Original message -----
From: "Somnath Roy"
To: "Samuel Just"
Cc: "Sage Weil" , ceph-devel@vger.kernel.org, ceph-users@lists.ceph.com
Sent: Saturday 13 September 2014 10:03:52
Subject: Re: [ceph-users] OpTracker optimization
Sam/Sage,
I saw Giant is forked off today. We need the pull request (https://github.com/ceph/ceph/pull/2440) to be in Giant. So, could you please merge this into Giant when it will be ready ?
Thanks & Regards
Somnath
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
-- To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
From mboxrd@z Thu Jan 1 00:00:00 1970 From: Sage Weil Subject: Re: OpTracker optimization Date: Sat, 13 Sep 2014 07:32:06 -0700 (PDT) Message-ID: References: <0b0459cc-3a8b-485f-9c69-d64c92d6c3dd@mailpro> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: In-Reply-To: <0b0459cc-3a8b-485f-9c69-d64c92d6c3dd@mailpro> List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ceph-users-bounces-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org Sender: "ceph-users" To: Alexandre DERUMIER Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org List-Id: ceph-devel.vger.kernel.org
On Sat, 13 Sep 2014, Alexandre DERUMIER wrote:
> Hi,
> as a ceph user, it would be wonderful to have it for Giant; the
> optracker performance impact is really huge (see my SSD benchmark on the ceph-users mailing list)
Definitely. More importantly, it resolves a few crashes we've observed. It's going through some testing right now, but once that's done it'll go into giant. 
sage > > Regards, > > Alexandre Derumier > > ----- Mail original ----- > > De: "Somnath Roy" > ?: "Samuel Just" > Cc: "Sage Weil" , ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > Envoy?: Samedi 13 Septembre 2014 10:03:52 > Objet: Re: [ceph-users] OpTracker optimization > > Sam/Sage, > I saw Giant is forked off today. We need the pull request (https://github.com/ceph/ceph/pull/2440) to be in Giant. So, could you please merge this into Giant when it will be ready ? > > Thanks & Regards > Somnath > > -----Original Message----- > From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > Sent: Thursday, September 11, 2014 11:31 AM > To: Somnath Roy > Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > Subject: Re: OpTracker optimization > > Just added it to wip-sam-testing. > -Sam > > On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy wrote: > > Sam/Sage, > > I have addressed all of your comments and pushed the changes to the same pull request. > > > > https://github.com/ceph/ceph/pull/2440 > > > > Thanks & Regards > > Somnath > > > > -----Original Message----- > > From: Sage Weil [mailto:sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org] > > Sent: Wednesday, September 10, 2014 8:33 PM > > To: Somnath Roy > > Cc: Samuel Just; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > > Subject: RE: OpTracker optimization > > > > I had two substantiative comments on the first patch and then some trivial > > whitespace nits. Otherwise looks good! > > > > tahnks- > > sage > > > > On Thu, 11 Sep 2014, Somnath Roy wrote: > > > >> Sam/Sage, > >> I have incorporated all of your comments. Please have a look at the same pull request. > >> > >> https://github.com/ceph/ceph/pull/2440 > >> > >> Thanks & Regards > >> Somnath > >> > >> -----Original Message----- > >> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > >> Sent: Wednesday, September 10, 2014 3:25 PM > >> To: Somnath Roy > >> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> Subject: Re: OpTracker optimization > >> > >> Oh, I changed my mind, your approach is fine. I was unclear. > >> Currently, I just need you to address the other comments. > >> -Sam > >> > >> On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy wrote: > >> > As I understand, you want me to implement the following. > >> > > >> > 1. Keep this implementation one sharded optracker for the ios going through ms_dispatch path. > >> > > >> > 2. Additionally, for ios going through ms_fast_dispatch, you want > >> > me to implement optracker (without internal shard) per opwq shard > >> > > >> > Am I right ? > >> > > >> > Thanks & Regards > >> > Somnath > >> > > >> > -----Original Message----- > >> > From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > >> > Sent: Wednesday, September 10, 2014 3:08 PM > >> > To: Somnath Roy > >> > Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> > Subject: Re: OpTracker optimization > >> > > >> > I don't quite understand. > >> > -Sam > >> > > >> > On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy wrote: > >> >> Thanks Sam. > >> >> So, you want me to go with optracker/shadedopWq , right ? 
> >> >> > >> >> Regards > >> >> Somnath > >> >> > >> >> -----Original Message----- > >> >> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > >> >> Sent: Wednesday, September 10, 2014 2:36 PM > >> >> To: Somnath Roy > >> >> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> >> Subject: Re: OpTracker optimization > >> >> > >> >> Responded with cosmetic nonsense. Once you've got that and the other comments addressed, I can put it in wip-sam-testing. > >> >> -Sam > >> >> > >> >> On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy wrote: > >> >>> Thanks Sam..I responded back :-) > >> >>> > >> >>> -----Original Message----- > >> >>> From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org > >> >>> [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Samuel > >> >>> Just > >> >>> Sent: Wednesday, September 10, 2014 11:17 AM > >> >>> To: Somnath Roy > >> >>> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> >>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> >>> Subject: Re: OpTracker optimization > >> >>> > >> >>> Added a comment about the approach. > >> >>> -Sam > >> >>> > >> >>> On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy wrote: > >> >>>> Hi Sam/Sage, > >> >>>> > >> >>>> As we discussed earlier, enabling the present OpTracker code > >> >>>> degrading performance severely. For example, in my setup a > >> >>>> single OSD node with > >> >>>> 10 clients is reaching ~103K read iops with io served from > >> >>>> memory while optracking is disabled but enabling optracker it is reduced to ~39K iops. > >> >>>> Probably, running OSD without enabling OpTracker is not an > >> >>>> option for many of Ceph users. > >> >>>> > >> >>>> Now, by sharding the Optracker:: ops_in_flight_lock (thus xlist > >> >>>> ops_in_flight) and removing some other bottlenecks I am able to > >> >>>> match the performance of OpTracking enabled OSD with OpTracking > >> >>>> disabled, but with the expense of ~1 extra cpu core. > >> >>>> > >> >>>> In this process I have also fixed the following tracker. > >> >>>> > >> >>>> > >> >>>> > >> >>>> http://tracker.ceph.com/issues/9384 > >> >>>> > >> >>>> > >> >>>> > >> >>>> and probably http://tracker.ceph.com/issues/8885 too. > >> >>>> > >> >>>> > >> >>>> > >> >>>> I have created following pull request for the same. Please review it. > >> >>>> > >> >>>> > >> >>>> > >> >>>> https://github.com/ceph/ceph/pull/2440 > >> >>>> > >> >>>> > >> >>>> > >> >>>> Thanks & Regards > >> >>>> > >> >>>> Somnath > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> ________________________________ > >> >>>> > >> >>>> PLEASE NOTE: The information contained in this electronic mail > >> >>>> message is intended only for the use of the designated > >> >>>> recipient(s) named above. If the reader of this message is not > >> >>>> the intended recipient, you are hereby notified that you have > >> >>>> received this message in error and that any review, > >> >>>> dissemination, distribution, or copying of this message is > >> >>>> strictly prohibited. If you have received this communication in > >> >>>> error, please notify the sender by telephone or e-mail (as shown > >> >>>> above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). 
> >> >>>> > >> >>> -- > >> >>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" > >> >>> in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More > >> >>> majordomo info at http://vger.kernel.org/majordomo-info.html > >> >>> > >> >>> ________________________________ > >> >>> > >> >>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). > >> >>> > >> > _______________________________________________ > ceph-users mailing list > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in > the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org > More majordomo info at http://vger.kernel.org/majordomo-info.html > > From mboxrd@z Thu Jan 1 00:00:00 1970 From: Somnath Roy Subject: Re: OpTracker optimization Date: Sat, 13 Sep 2014 16:19:46 +0000 Message-ID: <755F6B91B3BE364F9BCA11EA3F9E0C6F27841EAE@SACMBXIP02.sdcorp.global.sandisk.com> References: <0b0459cc-3a8b-485f-9c69-d64c92d6c3dd@mailpro> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: In-Reply-To: Content-Language: en-US List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: ceph-users-bounces-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org Sender: "ceph-users" To: Sage Weil , Alexandre DERUMIER Cc: "ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org" , "ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org" List-Id: ceph-devel.vger.kernel.org Thanks Sage! -----Original Message----- From: Sage Weil [mailto:sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org] Sent: Saturday, September 13, 2014 7:32 AM To: Alexandre DERUMIER Cc: Somnath Roy; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org; Samuel Just Subject: Re: [ceph-users] OpTracker optimization On Sat, 13 Sep 2014, Alexandre DERUMIER wrote: > Hi, > as ceph user, It could be wonderfull to have it for Giant, optracker > performance impact is really huge (See my ssd benchmark on ceph user > mailing) Definitely. More importantly, it resolves a few crashes we've observed. It's going through some testing right now, but once that's done it'll go into giant. sage > > Regards, > > Alexandre Derumier > > ----- Mail original ----- > > De: "Somnath Roy" > ?: "Samuel Just" > Cc: "Sage Weil" , ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org, > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > Envoy?: Samedi 13 Septembre 2014 10:03:52 > Objet: Re: [ceph-users] OpTracker optimization > > Sam/Sage, > I saw Giant is forked off today. We need the pull request (https://github.com/ceph/ceph/pull/2440) to be in Giant. So, could you please merge this into Giant when it will be ready ? 
> > Thanks & Regards > Somnath > > -----Original Message----- > From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > Sent: Thursday, September 11, 2014 11:31 AM > To: Somnath Roy > Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > Subject: Re: OpTracker optimization > > Just added it to wip-sam-testing. > -Sam > > On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy wrote: > > Sam/Sage, > > I have addressed all of your comments and pushed the changes to the same pull request. > > > > https://github.com/ceph/ceph/pull/2440 > > > > Thanks & Regards > > Somnath > > > > -----Original Message----- > > From: Sage Weil [mailto:sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org] > > Sent: Wednesday, September 10, 2014 8:33 PM > > To: Somnath Roy > > Cc: Samuel Just; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > > Subject: RE: OpTracker optimization > > > > I had two substantiative comments on the first patch and then some > > trivial whitespace nits. Otherwise looks good! > > > > tahnks- > > sage > > > > On Thu, 11 Sep 2014, Somnath Roy wrote: > > > >> Sam/Sage, > >> I have incorporated all of your comments. Please have a look at the same pull request. > >> > >> https://github.com/ceph/ceph/pull/2440 > >> > >> Thanks & Regards > >> Somnath > >> > >> -----Original Message----- > >> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > >> Sent: Wednesday, September 10, 2014 3:25 PM > >> To: Somnath Roy > >> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> Subject: Re: OpTracker optimization > >> > >> Oh, I changed my mind, your approach is fine. I was unclear. > >> Currently, I just need you to address the other comments. > >> -Sam > >> > >> On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy wrote: > >> > As I understand, you want me to implement the following. > >> > > >> > 1. Keep this implementation one sharded optracker for the ios going through ms_dispatch path. > >> > > >> > 2. Additionally, for ios going through ms_fast_dispatch, you want > >> > me to implement optracker (without internal shard) per opwq shard > >> > > >> > Am I right ? > >> > > >> > Thanks & Regards > >> > Somnath > >> > > >> > -----Original Message----- > >> > From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > >> > Sent: Wednesday, September 10, 2014 3:08 PM > >> > To: Somnath Roy > >> > Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> > Subject: Re: OpTracker optimization > >> > > >> > I don't quite understand. > >> > -Sam > >> > > >> > On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy wrote: > >> >> Thanks Sam. > >> >> So, you want me to go with optracker/shadedopWq , right ? > >> >> > >> >> Regards > >> >> Somnath > >> >> > >> >> -----Original Message----- > >> >> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] > >> >> Sent: Wednesday, September 10, 2014 2:36 PM > >> >> To: Somnath Roy > >> >> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> >> Subject: Re: OpTracker optimization > >> >> > >> >> Responded with cosmetic nonsense. 
Once you've got that and the other comments addressed, I can put it in wip-sam-testing. > >> >> -Sam > >> >> > >> >> On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy wrote: > >> >>> Thanks Sam..I responded back :-) > >> >>> > >> >>> -----Original Message----- > >> >>> From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org > >> >>> [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Samuel > >> >>> Just > >> >>> Sent: Wednesday, September 10, 2014 11:17 AM > >> >>> To: Somnath Roy > >> >>> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; > >> >>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > >> >>> Subject: Re: OpTracker optimization > >> >>> > >> >>> Added a comment about the approach. > >> >>> -Sam > >> >>> > >> >>> On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy wrote: > >> >>>> Hi Sam/Sage, > >> >>>> > >> >>>> As we discussed earlier, enabling the present OpTracker code > >> >>>> degrading performance severely. For example, in my setup a > >> >>>> single OSD node with > >> >>>> 10 clients is reaching ~103K read iops with io served from > >> >>>> memory while optracking is disabled but enabling optracker it is reduced to ~39K iops. > >> >>>> Probably, running OSD without enabling OpTracker is not an > >> >>>> option for many of Ceph users. > >> >>>> > >> >>>> Now, by sharding the Optracker:: ops_in_flight_lock (thus > >> >>>> xlist > >> >>>> ops_in_flight) and removing some other bottlenecks I am able > >> >>>> to match the performance of OpTracking enabled OSD with > >> >>>> OpTracking disabled, but with the expense of ~1 extra cpu core. > >> >>>> > >> >>>> In this process I have also fixed the following tracker. > >> >>>> > >> >>>> > >> >>>> > >> >>>> http://tracker.ceph.com/issues/9384 > >> >>>> > >> >>>> > >> >>>> > >> >>>> and probably http://tracker.ceph.com/issues/8885 too. > >> >>>> > >> >>>> > >> >>>> > >> >>>> I have created following pull request for the same. Please review it. > >> >>>> > >> >>>> > >> >>>> > >> >>>> https://github.com/ceph/ceph/pull/2440 > >> >>>> > >> >>>> > >> >>>> > >> >>>> Thanks & Regards > >> >>>> > >> >>>> Somnath > >> >>>> > >> >>>> > >> >>>> > >> >>>> > >> >>>> ________________________________ > >> >>>> > >> >>>> PLEASE NOTE: The information contained in this electronic mail > >> >>>> message is intended only for the use of the designated > >> >>>> recipient(s) named above. If the reader of this message is not > >> >>>> the intended recipient, you are hereby notified that you have > >> >>>> received this message in error and that any review, > >> >>>> dissemination, distribution, or copying of this message is > >> >>>> strictly prohibited. If you have received this communication > >> >>>> in error, please notify the sender by telephone or e-mail (as > >> >>>> shown > >> >>>> above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). > >> >>>> > >> >>> -- > >> >>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" > >> >>> in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More > >> >>> majordomo info at http://vger.kernel.org/majordomo-info.html > >> >>> > >> >>> ________________________________ > >> >>> > >> >>> PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. 
If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies). > >> >>> > >> > _______________________________________________ > ceph-users mailing list > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- > To unsubscribe from this list: send the line "unsubscribe ceph-devel" > in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo > info at http://vger.kernel.org/majordomo-info.html > >
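For context on the change discussed in this thread: the pull request shards the single Optracker::ops_in_flight_lock (and the one xlist of in-flight ops it protects) into independently locked buckets, so that registering and unregistering ops from many dispatch threads no longer serializes on a single mutex. Below is a minimal C++ sketch of that idea. It is illustrative only: the shard count, the names, and the use of std::mutex/std::list are assumptions made for the sketch, not the actual code in https://github.com/ceph/ceph/pull/2440, which works with Ceph's own locking and xlist types.

// Minimal sketch of a sharded in-flight op tracker. Names, shard count,
// and container choices are assumptions for illustration, not Ceph's code.
#include <array>
#include <cstdint>
#include <list>
#include <memory>
#include <mutex>

struct TrackedOp {
  uint64_t seq;   // monotonically increasing op id, used to pick a shard
  // ... request payload, timestamps, event history ...
};

class ShardedOpTracker {
  static constexpr size_t NUM_SHARDS = 32;   // assumed shard count

  struct Shard {
    std::mutex lock;                                   // replaces the single global lock
    std::list<std::shared_ptr<TrackedOp>> ops_in_flight;
  };
  std::array<Shard, NUM_SHARDS> shards;

  Shard& shard_for(uint64_t seq) { return shards[seq % NUM_SHARDS]; }

public:
  void register_op(const std::shared_ptr<TrackedOp>& op) {
    Shard& s = shard_for(op->seq);
    std::lock_guard<std::mutex> g(s.lock);   // contention limited to one shard
    s.ops_in_flight.push_back(op);
  }

  void unregister_op(const std::shared_ptr<TrackedOp>& op) {
    Shard& s = shard_for(op->seq);
    std::lock_guard<std::mutex> g(s.lock);
    s.ops_in_flight.remove(op);
  }

  // Dumping in-flight ops (e.g. for an admin-socket query) walks each shard
  // under its own lock, so a slow dump only blocks one shard at a time
  // instead of stalling every register/unregister on the fast path.
  template <typename Fn>
  void visit_ops(Fn&& fn) {
    for (Shard& s : shards) {
      std::lock_guard<std::mutex> g(s.lock);
      for (auto& op : s.ops_in_flight) fn(*op);
    }
  }
};

Keeping every op visible to the dump path while bounding how long any one insert or erase can be blocked is roughly the property the thread is after: op tracking stays enabled without the large IOPS drop described above.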