* OpTracker optimization
@ 2014-09-09 20:33 Somnath Roy
  2014-09-10 18:16 ` Samuel Just
  0 siblings, 1 reply; 16+ messages in thread
From: Somnath Roy @ 2014-09-09 20:33 UTC (permalink / raw)
  To: Samuel Just (sam.just@inktank.com), Sage Weil (sweil@redhat.com)
  Cc: ceph-devel@vger.kernel.org, ceph-users@lists.ceph.com



Hi Sam/Sage,
As we discussed earlier, enabling the present OpTracker code degrades performance severely. For example, in my setup a single OSD node with 10 clients reaches ~103K read IOPS with I/O served from memory while op tracking is disabled, but with the OpTracker enabled this drops to ~39K IOPS. Running an OSD without the OpTracker enabled is probably not an option for many Ceph users.
Now, by sharding OpTracker::ops_in_flight_lock (and thus the xlist ops_in_flight) and removing some other bottlenecks, I am able to match the performance of an OpTracker-enabled OSD to that of one with the OpTracker disabled, at the expense of ~1 extra CPU core.
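
(For a concrete picture of the idea, here is a minimal, hypothetical sketch of the sharding approach. The names, the shard count, and the std::list/std::mutex choices are illustrative assumptions only; the actual change is in the pull request linked below.)

// Minimal sketch of the sharding idea (illustrative only; the real change
// is in https://github.com/ceph/ceph/pull/2440). Instead of one global
// mutex guarding one ops_in_flight xlist, ops are spread over N
// independently locked shards, so threads registering ops rarely contend.
#include <array>
#include <cstddef>
#include <cstdint>
#include <list>
#include <mutex>

struct TrackedOp;  // stand-in for Ceph's TrackedOp

struct ShardedOpsInFlight {
  static constexpr std::size_t num_shards = 32;  // tunable; an assumption
  struct Shard {
    std::mutex lock;            // replaces the single ops_in_flight_lock
    std::list<TrackedOp*> ops;  // replaces the single xlist ops_in_flight
  };
  std::array<Shard, num_shards> shards;

  Shard& shard_for(std::uint64_t seq) { return shards[seq % num_shards]; }

  void register_op(std::uint64_t seq, TrackedOp* op) {
    Shard& s = shard_for(seq);
    std::lock_guard<std::mutex> guard(s.lock);
    s.ops.push_back(op);  // contends on one shard only, not the whole OSD
  }
};

With a layout like this, op registration and removal contend only on one shard's lock, while operations that need a global view (such as dumping all in-flight ops) can take the shard locks one at a time.
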
In the process I have also fixed the following tracker issue:

http://tracker.ceph.com/issues/9384

and probably http://tracker.ceph.com/issues/8885 too.

I have created the following pull request for the same. Please review it.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath




^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
  2014-09-09 20:33 OpTracker optimization Somnath Roy
@ 2014-09-10 18:16 ` Samuel Just
       [not found]   ` <CA+4uBUbSrYhy8=ZPKZ7dOTh0sNNCs5mC3ttgBH+qoWO+58UdvA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Samuel Just @ 2014-09-10 18:16 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil (sweil@redhat.com), ceph-devel, ceph-users

Added a comment about the approach.
-Sam

On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Hi Sam/Sage,
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
       [not found]   ` <CA+4uBUbSrYhy8=ZPKZ7dOTh0sNNCs5mC3ttgBH+qoWO+58UdvA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-09-10 20:30     ` Somnath Roy
  2014-09-10 21:36       ` Samuel Just
  0 siblings, 1 reply; 16+ messages in thread
From: Somnath Roy @ 2014-09-10 20:30 UTC (permalink / raw)
  To: Samuel Just
  Cc: Sage Weil (sweil@redhat.com), ceph-devel@vger.kernel.org,
	ceph-users@lists.ceph.com

Thanks, Sam. I responded :-)

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Samuel Just
Sent: Wednesday, September 10, 2014 11:17 AM
To: Somnath Roy
Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; ceph-users@lists.ceph.com
Subject: Re: OpTracker optimization

Added a comment about the approach.
-Sam

On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Hi Sam/Sage,
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
  2014-09-10 20:30     ` Somnath Roy
@ 2014-09-10 21:36       ` Samuel Just
       [not found]         ` <CA+4uBUbTBksZbRxR9RCTws-O--2N+QnHN5aS_Kx-D7yVeEmiDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Samuel Just @ 2014-09-10 21:36 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil (sweil@redhat.com), ceph-devel, ceph-users

Responded with cosmetic nonsense.  Once you've got that and the other
comments addressed, I can put it in wip-sam-testing.
-Sam

On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Thanks, Sam. I responded :-)
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
       [not found]         ` <CA+4uBUbTBksZbRxR9RCTws-O--2N+QnHN5aS_Kx-D7yVeEmiDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-09-10 21:38           ` Somnath Roy
       [not found]             ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C636-cXZ6iGhjG0i+xgsn/SD5JjJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Somnath Roy @ 2014-09-10 21:38 UTC (permalink / raw)
  To: Samuel Just
  Cc: Sage Weil (sweil@redhat.com), ceph-devel@vger.kernel.org,
	ceph-users@lists.ceph.com

Thanks, Sam.
So, you want me to go with the optracker/ShardedOpWQ approach, right?

Regards
Somnath

-----Original Message-----
From: Samuel Just [mailto:sam.just@inktank.com]
Sent: Wednesday, September 10, 2014 2:36 PM
To: Somnath Roy
Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; ceph-users@lists.ceph.com
Subject: Re: OpTracker optimization

Responded with cosmetic nonsense.  Once you've got that and the other comments addressed, I can put it in wip-sam-testing.
-Sam

On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Thanks, Sam. I responded :-)
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
       [not found]             ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C636-cXZ6iGhjG0i+xgsn/SD5JjJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
@ 2014-09-10 22:07               ` Samuel Just
  2014-09-10 22:13                 ` Somnath Roy
  0 siblings, 1 reply; 16+ messages in thread
From: Samuel Just @ 2014-09-10 22:07 UTC (permalink / raw)
  To: Somnath Roy
  Cc: Sage Weil (sweil@redhat.com), ceph-devel@vger.kernel.org,
	ceph-users@lists.ceph.com

I don't quite understand.
-Sam

On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Thanks, Sam.
> So, you want me to go with the optracker/ShardedOpWQ approach, right?
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: OpTracker optimization
  2014-09-10 22:07               ` Samuel Just
@ 2014-09-10 22:13                 ` Somnath Roy
       [not found]                   ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C65C-cXZ6iGhjG0i+xgsn/SD5JjJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Somnath Roy @ 2014-09-10 22:13 UTC (permalink / raw)
  To: Samuel Just; +Cc: Sage Weil (sweil@redhat.com), ceph-devel, ceph-users

As I understand it, you want me to implement the following:

1. Keep this implementation (one sharded optracker) for the IOs going through the ms_dispatch path.

2. Additionally, for IOs going through ms_fast_dispatch, implement one optracker (without internal sharding) per OpWQ shard; a sketch of this split follows below.

Am I right?
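
(To make the proposed split concrete, here is a rough, hypothetical sketch. ShardedOpTracker, SimpleOpTracker, and OSDTrackers are illustrative names only, not the actual Ceph classes under review.)

#include <cstddef>
#include <memory>
#include <vector>

struct ShardedOpTracker { };  // internally sharded, as in the current PR
struct SimpleOpTracker { };   // one lock + one list, no internal shards

struct OSDTrackers {
  ShardedOpTracker dispatch_tracker;  // ops arriving via ms_dispatch
  std::vector<std::unique_ptr<SimpleOpTracker>> fast_trackers;

  explicit OSDTrackers(std::size_t num_opwq_shards) {
    for (std::size_t i = 0; i < num_opwq_shards; ++i)
      fast_trackers.push_back(std::make_unique<SimpleOpTracker>());
  }

  // ms_fast_dispatch path: each OpWQ shard uses only its own tracker,
  // so fast-path op registration never crosses shard boundaries.
  SimpleOpTracker& fast_tracker(std::size_t shard_index) {
    return *fast_trackers[shard_index % fast_trackers.size()];
  }
};

Under a layout like this, the fast path would pay no sharding overhead at all, since each OpWQ shard registers ops only in its own tracker.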

Thanks & Regards
Somnath

-----Original Message-----
From: Samuel Just [mailto:sam.just@inktank.com] 
Sent: Wednesday, September 10, 2014 3:08 PM
To: Somnath Roy
Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; ceph-users@lists.ceph.com
Subject: Re: OpTracker optimization

I don't quite understand.
-Sam

On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Thanks, Sam.
> So, you want me to go with the optracker/ShardedOpWQ approach, right?
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
       [not found]                   ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C65C-cXZ6iGhjG0i+xgsn/SD5JjJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
@ 2014-09-10 22:25                     ` Samuel Just
  2014-09-11  1:52                       ` Somnath Roy
  0 siblings, 1 reply; 16+ messages in thread
From: Samuel Just @ 2014-09-10 22:25 UTC (permalink / raw)
  To: Somnath Roy
  Cc: Sage Weil (sweil@redhat.com), ceph-devel@vger.kernel.org,
	ceph-users@lists.ceph.com

Oh, I changed my mind, your approach is fine.  I was unclear.
Currently, I just need you to address the other comments.
-Sam

On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> As I understand it, you want me to implement the following:
>
> 1. Keep this implementation (one sharded optracker) for the IOs going through the ms_dispatch path.
>
> 2. Additionally, for IOs going through ms_fast_dispatch, implement one optracker (without internal sharding) per OpWQ shard.
>
> Am I right?
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: OpTracker optimization
  2014-09-10 22:25                     ` Samuel Just
@ 2014-09-11  1:52                       ` Somnath Roy
  2014-09-11  3:33                         ` Sage Weil
  0 siblings, 1 reply; 16+ messages in thread
From: Somnath Roy @ 2014-09-11  1:52 UTC (permalink / raw)
  To: Samuel Just; +Cc: Sage Weil (sweil@redhat.com), ceph-devel, ceph-users

Sam/Sage,
I have incorporated all of your comments. Please have a look at the same pull request.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath

-----Original Message-----
From: Samuel Just [mailto:sam.just@inktank.com] 
Sent: Wednesday, September 10, 2014 3:25 PM
To: Somnath Roy
Cc: Sage Weil (sweil@redhat.com); ceph-devel@vger.kernel.org; ceph-users@lists.ceph.com
Subject: Re: OpTracker optimization

Oh, I changed my mind, your approach is fine.  I was unclear.
Currently, I just need you to address the other comments.
-Sam

On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> As I understand it, you want me to implement the following.
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: OpTracker optimization
  2014-09-11  1:52                       ` Somnath Roy
@ 2014-09-11  3:33                         ` Sage Weil
       [not found]                           ` <alpine.DEB.2.00.1409102032440.17200-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Sage Weil @ 2014-09-11  3:33 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Samuel Just, ceph-devel, ceph-users

I had two substantive comments on the first patch and then some trivial
whitespace nits. Otherwise looks good!

thanks-
sage

On Thu, 11 Sep 2014, Somnath Roy wrote:

> Sam/Sage,
> I have incorporated all of your comments. Please have a look at the same pull request.
>
> https://github.com/ceph/ceph/pull/2440
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
       [not found]                           ` <alpine.DEB.2.00.1409102032440.17200-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
@ 2014-09-11 18:30                             ` Somnath Roy
  2014-09-11 18:30                               ` Samuel Just
  0 siblings, 1 reply; 16+ messages in thread
From: Somnath Roy @ 2014-09-11 18:30 UTC (permalink / raw)
  To: Sage Weil
  Cc: ceph-devel@vger.kernel.org, ceph-users@lists.ceph.com

Sam/Sage,
I have addressed all of your comments and pushed the changes to the same pull request.

https://github.com/ceph/ceph/pull/2440

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sweil@redhat.com]
Sent: Wednesday, September 10, 2014 8:33 PM
To: Somnath Roy
Cc: Samuel Just; ceph-devel@vger.kernel.org; ceph-users@lists.ceph.com
Subject: RE: OpTracker optimization

I had two substantive comments on the first patch and then some trivial
whitespace nits. Otherwise looks good!

thanks-
sage

On Thu, 11 Sep 2014, Somnath Roy wrote:

> Sam/Sage,
> I have incorporated all of your comments. Please have a look at the same pull request.
>
> https://github.com/ceph/ceph/pull/2440
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
  2014-09-11 18:30                             ` Somnath Roy
@ 2014-09-11 18:30                               ` Samuel Just
       [not found]                                 ` <CA+4uBUYRdp=VFc1T=WPf7HRXgfD7MEqEM5yhEKRe7_M_s_dh-w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Samuel Just @ 2014-09-11 18:30 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, ceph-devel, ceph-users

Just added it to wip-sam-testing.
-Sam

On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy <Somnath.Roy@sandisk.com> wrote:
> Sam/Sage,
> I have addressed all of your comments and pushed the changes to the same pull request.
>
> https://github.com/ceph/ceph/pull/2440
>
> [...]

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
       [not found]                                 ` <CA+4uBUYRdp=VFc1T=WPf7HRXgfD7MEqEM5yhEKRe7_M_s_dh-w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-09-13  8:03                                   ` Somnath Roy
  2014-09-13  9:00                                     ` [ceph-users] " Alexandre DERUMIER
  0 siblings, 1 reply; 16+ messages in thread
From: Somnath Roy @ 2014-09-13  8:03 UTC (permalink / raw)
  To: Samuel Just
  Cc: Sage Weil, ceph-devel-u79uwXL29TY76Z2rM5mHXA,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw

Sam/Sage,
I saw that Giant was forked off today. We need the pull request (https://github.com/ceph/ceph/pull/2440) to be in Giant, so could you please merge it into Giant once it is ready?

Thanks & Regards
Somnath

-----Original Message-----
From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org] 
Sent: Thursday, September 11, 2014 11:31 AM
To: Somnath Roy
Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
Subject: Re: OpTracker optimization

Just added it to wip-sam-testing.
-Sam

On Thu, Sep 11, 2014 at 11:30 AM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org> wrote:
> Sam/Sage,
> I have addressed all of your comments and pushed the changes to the same pull request.
>
> https://github.com/ceph/ceph/pull/2440
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Sage Weil [mailto:sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org]
> Sent: Wednesday, September 10, 2014 8:33 PM
> To: Somnath Roy
> Cc: Samuel Just; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> Subject: RE: OpTracker optimization
>
> I had two substantive comments on the first patch and then some trivial
> whitespace nits. Otherwise looks good!
>
> thanks-
> sage
>
> On Thu, 11 Sep 2014, Somnath Roy wrote:
>
>> Sam/Sage,
>> I have incorporated all of your comments. Please have a look at the same pull request.
>>
>> https://github.com/ceph/ceph/pull/2440
>>
>> Thanks & Regards
>> Somnath
>>
>> -----Original Message-----
>> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
>> Sent: Wednesday, September 10, 2014 3:25 PM
>> To: Somnath Roy
>> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; 
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> Subject: Re: OpTracker optimization
>>
>> Oh, I changed my mind, your approach is fine.  I was unclear.
>> Currently, I just need you to address the other comments.
>> -Sam
>>
>> On Wed, Sep 10, 2014 at 3:13 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org> wrote:
>> > As I understand it, you want me to implement the following.
>> >
>> > 1. Keep this implementation (one sharded optracker) for the IOs going through the ms_dispatch path.
>> >
>> > 2. Additionally, for the IOs going through ms_fast_dispatch, implement
>> > an optracker (without internal sharding) per OpWQ shard.
>> >
>> > Am I right?
>> >
>> > Thanks & Regards
>> > Somnath
>> >
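
A minimal sketch of the two-level scheme outlined above, for illustration: each ShardedOpWQ shard owns a private, unsharded tracker, so registering an op on the ms_fast_dispatch path never contends with other shards, while a separate sharded tracker (sketched further down) would still cover the ms_dispatch path. All names here (PlainTracker, OpWQShard, the shard count) are illustrative assumptions, not the code from the pull request.

    // Illustrative sketch only -- hypothetical names, not the Ceph code
    // from https://github.com/ceph/ceph/pull/2440.
    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <memory>
    #include <mutex>

    struct TrackedOp { uint64_t seq = 0; /* request state elided */ };
    using OpRef = std::shared_ptr<TrackedOp>;

    // A plain (unsharded) tracker: one lock, one in-flight list. Cheap
    // when only the owning work-queue shard ever touches it.
    class PlainTracker {
      std::mutex lock;
      std::list<OpRef> ops_in_flight;  // stand-in for Ceph's xlist
    public:
      void register_inflight_op(OpRef op) {
        std::lock_guard<std::mutex> l(lock);
        ops_in_flight.push_back(std::move(op));
      }
      void unregister_inflight_op(const OpRef& op) {
        std::lock_guard<std::mutex> l(lock);
        // xlist unlinks in O(1); std::list::remove is O(n), kept for brevity.
        ops_in_flight.remove(op);
      }
    };

    // Each OpWQ shard owns its own tracker, so fast-dispatch
    // registration stays shard-local.
    struct OpWQShard {
      PlainTracker tracker;
      // queue, worker threads, etc. elided
    };

    static constexpr std::size_t kNumOpWQShards = 5;  // count is illustrative
    std::array<OpWQShard, kNumOpWQShards> opwq_shards;

With this arrangement the hot fast-dispatch path takes only its own shard's lock, and no op registration ever crosses a shard boundary.
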
>> > -----Original Message-----
>> > From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
>> > Sent: Wednesday, September 10, 2014 3:08 PM
>> > To: Somnath Roy
>> > Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; 
>> > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> > Subject: Re: OpTracker optimization
>> >
>> > I don't quite understand.
>> > -Sam
>> >
>> > On Wed, Sep 10, 2014 at 2:38 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org> wrote:
>> >> Thanks Sam.
>> >> So, you want me to go with the optracker-per-ShardedOpWQ approach, right?
>> >>
>> >> Regards
>> >> Somnath
>> >>
>> >> -----Original Message-----
>> >> From: Samuel Just [mailto:sam.just-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
>> >> Sent: Wednesday, September 10, 2014 2:36 PM
>> >> To: Somnath Roy
>> >> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; 
>> >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> >> Subject: Re: OpTracker optimization
>> >>
>> >> Responded with cosmetic nonsense.  Once you've got that and the other comments addressed, I can put it in wip-sam-testing.
>> >> -Sam
>> >>
>> >> On Wed, Sep 10, 2014 at 1:30 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org> wrote:
>> >>> Thanks Sam... I responded back :-)
>> >>>
>> >>> -----Original Message-----
>> >>> From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org 
>> >>> [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Samuel 
>> >>> Just
>> >>> Sent: Wednesday, September 10, 2014 11:17 AM
>> >>> To: Somnath Roy
>> >>> Cc: Sage Weil (sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org); ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; 
>> >>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> >>> Subject: Re: OpTracker optimization
>> >>>
>> >>> Added a comment about the approach.
>> >>> -Sam
>> >>>
>> >>> On Tue, Sep 9, 2014 at 1:33 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org> wrote:
>> >>>> Hi Sam/Sage,
>> >>>>
>> >>>> As we discussed earlier, enabling the present OpTracker code
>> >>>> degrades performance severely. For example, in my setup a
>> >>>> single OSD node with 10 clients reaches ~103K read IOPS with IO
>> >>>> served from memory while op tracking is disabled, but with the
>> >>>> optracker enabled it drops to ~39K IOPS. Running the OSD with
>> >>>> OpTracker disabled is probably not an option for many Ceph users.
>> >>>>
>> >>>> Now, by sharding Optracker::ops_in_flight_lock (and thus the xlist
>> >>>> ops_in_flight) and removing some other bottlenecks, I am able to
>> >>>> match the performance of an OpTracking-enabled OSD with OpTracking
>> >>>> disabled, at the expense of ~1 extra CPU core.
>> >>>>
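
To make the sharding idea above concrete: split the single lock/list pair into N independent shards and pick a shard by hashing the op's sequence number, so concurrent register/unregister calls mostly take different locks. The sketch below is an illustrative C++ rendering under that assumption (std::list standing in for Ceph's xlist, shard count and all names invented); the actual change is in the pull request linked below.

    // Minimal sketch of a sharded in-flight tracker (illustrative only;
    // the real patch is https://github.com/ceph/ceph/pull/2440).
    #include <atomic>
    #include <cstdint>
    #include <list>
    #include <memory>
    #include <mutex>

    struct TrackedOp { uint64_t seq = 0; /* request state elided */ };
    using OpRef = std::shared_ptr<TrackedOp>;

    class ShardedOpTracker {
      static constexpr unsigned kShards = 32;  // shard count is illustrative
      struct Shard {
        std::mutex lock;                 // replaces the one ops_in_flight_lock
        std::list<OpRef> ops_in_flight;  // stand-in for Ceph's xlist
      };
      Shard shards[kShards];
      std::atomic<uint64_t> next_seq{0};

      Shard& shard_for(uint64_t seq) { return shards[seq % kShards]; }

    public:
      OpRef create_op() {
        auto op = std::make_shared<TrackedOp>();
        op->seq = next_seq++;            // the only global state is this counter
        Shard& s = shard_for(op->seq);
        std::lock_guard<std::mutex> l(s.lock);
        s.ops_in_flight.push_back(op);
        return op;
      }

      void finish_op(const OpRef& op) {
        Shard& s = shard_for(op->seq);   // same shard as registration
        std::lock_guard<std::mutex> l(s.lock);
        s.ops_in_flight.remove(op);
      }

      // Admin views (e.g. a dump of ops in flight) walk every shard;
      // that path is rare, so taking each lock in turn is acceptable.
      size_t count_in_flight() {
        size_t n = 0;
        for (auto& s : shards) {
          std::lock_guard<std::mutex> l(s.lock);
          n += s.ops_in_flight.size();
        }
        return n;
      }
    };

Deriving the shard from the op's own sequence number keeps register and unregister on the same lock without any shared state beyond the atomic counter, which is what spreads the contention across cores.
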
>> >>>> In the process, I have also fixed the following tracker issue.
>> >>>>
>> >>>>
>> >>>>
>> >>>> http://tracker.ceph.com/issues/9384
>> >>>>
>> >>>>
>> >>>>
>> >>>> and probably http://tracker.ceph.com/issues/8885 too.
>> >>>>
>> >>>>
>> >>>>
>> >>>> I have created the following pull request for the same; please review it.
>> >>>>
>> >>>>
>> >>>>
>> >>>> https://github.com/ceph/ceph/pull/2440
>> >>>>
>> >>>>
>> >>>>
>> >>>> Thanks & Regards
>> >>>>
>> >>>> Somnath
>> >>>>
>> >>>>
>> >>>>
>> >>>>
>>

^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [ceph-users] OpTracker optimization
  2014-09-13  8:03                                   ` Somnath Roy
@ 2014-09-13  9:00                                     ` Alexandre DERUMIER
  2014-09-13 14:32                                       ` Sage Weil
  0 siblings, 1 reply; 16+ messages in thread
From: Alexandre DERUMIER @ 2014-09-13  9:00 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, ceph-devel, ceph-users, Samuel Just

Hi,
as a Ceph user, it would be wonderful to have this in Giant;
the optracker performance impact is really huge (see my SSD benchmark on the ceph-users mailing list).

Regards,

Alexandre Derumier


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
  2014-09-13  9:00                                     ` [ceph-users] " Alexandre DERUMIER
@ 2014-09-13 14:32                                       ` Sage Weil
       [not found]                                         ` <alpine.DEB.2.00.1409130731180.29849-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
  0 siblings, 1 reply; 16+ messages in thread
From: Sage Weil @ 2014-09-13 14:32 UTC (permalink / raw)
  To: Alexandre DERUMIER
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-idqoXFIVOFJgJs9I8MT0rw

On Sat, 13 Sep 2014, Alexandre DERUMIER wrote:
> Hi,
> as a Ceph user, it would be wonderful to have this in Giant;
> the optracker performance impact is really huge (see my SSD benchmark on the ceph-users mailing list)

Definitely.  More importantly, it resolves a few crashes we've observed.
It's going through some testing right now, but once that's done it'll go
into Giant.

sage



^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: OpTracker optimization
       [not found]                                         ` <alpine.DEB.2.00.1409130731180.29849-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
@ 2014-09-13 16:19                                           ` Somnath Roy
  0 siblings, 0 replies; 16+ messages in thread
From: Somnath Roy @ 2014-09-13 16:19 UTC (permalink / raw)
  To: Sage Weil, Alexandre DERUMIER
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-idqoXFIVOFJgJs9I8MT0rw

Thanks Sage!


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2014-09-13 16:19 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-09-09 20:33 OpTracker optimization Somnath Roy
2014-09-10 18:16 ` Samuel Just
     [not found]   ` <CA+4uBUbSrYhy8=ZPKZ7dOTh0sNNCs5mC3ttgBH+qoWO+58UdvA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-09-10 20:30     ` Somnath Roy
2014-09-10 21:36       ` Samuel Just
     [not found]         ` <CA+4uBUbTBksZbRxR9RCTws-O--2N+QnHN5aS_Kx-D7yVeEmiDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-09-10 21:38           ` Somnath Roy
     [not found]             ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C636-cXZ6iGhjG0i+xgsn/SD5JjJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
2014-09-10 22:07               ` Samuel Just
2014-09-10 22:13                 ` Somnath Roy
     [not found]                   ` <755F6B91B3BE364F9BCA11EA3F9E0C6F2783C65C-cXZ6iGhjG0i+xgsn/SD5JjJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
2014-09-10 22:25                     ` Samuel Just
2014-09-11  1:52                       ` Somnath Roy
2014-09-11  3:33                         ` Sage Weil
     [not found]                           ` <alpine.DEB.2.00.1409102032440.17200-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
2014-09-11 18:30                             ` Somnath Roy
2014-09-11 18:30                               ` Samuel Just
     [not found]                                 ` <CA+4uBUYRdp=VFc1T=WPf7HRXgfD7MEqEM5yhEKRe7_M_s_dh-w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2014-09-13  8:03                                   ` Somnath Roy
2014-09-13  9:00                                     ` [ceph-users] " Alexandre DERUMIER
2014-09-13 14:32                                       ` Sage Weil
     [not found]                                         ` <alpine.DEB.2.00.1409130731180.29849-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
2014-09-13 16:19                                           ` Somnath Roy

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.