* Re: Intel power tuning - 30% throughput performance increase
       [not found]             ` <CA+z5DszC6EmYSPpZsj3O0Neb-qMhfmUDjjXWpfkgukrqTyZa6A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-05-03 23:56               ` Brad Hubbard
  2017-05-04  0:58                 ` [ceph-users] " Haomai Wang
  0 siblings, 1 reply; 6+ messages in thread
From: Brad Hubbard @ 2017-05-03 23:56 UTC (permalink / raw)
  To: ceph-devel; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw

+ceph-devel to get input on whether we want/need to check the value of
/dev/cpu_dma_latency (platform dependent) at startup and issue a
warning, or whether documenting this would suffice?

Any doc contribution would be welcomed.
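For reference, the check being discussed amounts to reading a native-endian 32-bit integer from /dev/cpu_dma_latency (the standard Linux PM QoS interface). A minimal sketch of such a startup check, with hypothetical helper and message names, might look like:

```python
import struct

def read_cpu_dma_latency(path="/dev/cpu_dma_latency"):
    """Return the current PM QoS CPU latency bound in microseconds,
    or None if the interface is unavailable on this platform."""
    try:
        with open(path, "rb") as f:
            # The device holds one native-endian 32-bit signed integer.
            return struct.unpack("i", f.read(4))[0]
    except OSError:
        return None

latency = read_cpu_dma_latency()
if latency is None:
    print("cpu_dma_latency interface not present; skipping check")
elif latency > 1:
    print("warning: cpu_dma_latency is %d us; deep C-states allowed" % latency)
```

This degrades gracefully on platforms without the device node, which matches the "platform dependent" caveat above.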

On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
<blair.bethwaite-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> On 3 May 2017 at 19:07, Dan van der Ster <dan-EOCVfBHj35C+XT7JhA+gdA@public.gmane.org> wrote:
>> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
>> your 30% boost was when going from throughput-performance to
>> dma_latency=0, right? I'm trying to understand what is the incremental
>> improvement from 1 to 0.
>
> Probably minimal given that represents a state transition latency
> taking only 1us. Presumably the main issue is when the CPU can drop
> into the lower states and the compounding impact of that over time. I
> will do some simple characterisation of that over the next couple of
> weeks and report back...
>
> --
> Cheers,
> ~Blairo
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Cheers,
Brad

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [ceph-users] Intel power tuning - 30% throughput performance increase
  2017-05-03 23:56               ` Intel power tuning - 30% throughput performance increase Brad Hubbard
@ 2017-05-04  0:58                 ` Haomai Wang
  2017-05-05  0:28                   ` Brad Hubbard
  0 siblings, 1 reply; 6+ messages in thread
From: Haomai Wang @ 2017-05-04  0:58 UTC (permalink / raw)
  To: Brad Hubbard; +Cc: ceph-devel, ceph-users

refer to https://github.com/ceph/ceph/pull/5013

On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard <bhubbard@redhat.com> wrote:
> +ceph-devel to get input on whether we want/need to check the value of
> /dev/cpu_dma_latency (platform dependent) at startup and issue a
> warning, or whether documenting this would suffice?
>
> Any doc contribution would be welcomed.
>
> On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
> <blair.bethwaite@gmail.com> wrote:
>> On 3 May 2017 at 19:07, Dan van der Ster <dan@vanderster.com> wrote:
>>> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
>>> your 30% boost was when going from throughput-performance to
>>> dma_latency=0, right? I'm trying to understand what is the incremental
>>> improvement from 1 to 0.
>>
>> Probably minimal given that represents a state transition latency
>> taking only 1us. Presumably the main issue is when the CPU can drop
>> into the lower states and the compounding impact of that over time. I
>> will do some simple characterisation of that over the next couple of
>> weeks and report back...
>>
>> --
>> Cheers,
>> ~Blairo
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Cheers,
> Brad
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: [ceph-users] Intel power tuning - 30% throughput performance increase
  2017-05-04  0:58                 ` [ceph-users] " Haomai Wang
@ 2017-05-05  0:28                   ` Brad Hubbard
       [not found]                     ` <CA+z5DsyGBGZ2PyqnJtOZq3etixjx86e=WE0TCkdkRYbDvbBQaA@mail.gmail.com>
  2017-05-19  7:23                     ` [ceph-users] " Xiaoxi Chen
  0 siblings, 2 replies; 6+ messages in thread
From: Brad Hubbard @ 2017-05-05  0:28 UTC (permalink / raw)
  To: Haomai Wang; +Cc: ceph-devel, ceph-users

On Thu, May 4, 2017 at 10:58 AM, Haomai Wang <haomai@xsky.com> wrote:
> refer to https://github.com/ceph/ceph/pull/5013

How about we issue a warning about possible performance implications
if we detect this is not set to 1 *or* 0 at startup?

>
> On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard <bhubbard@redhat.com> wrote:
>> +ceph-devel to get input on whether we want/need to check the value of
>> /dev/cpu_dma_latency (platform dependent) at startup and issue a
>> warning, or whether documenting this would suffice?
>>
>> Any doc contribution would be welcomed.
>>
>> On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
>> <blair.bethwaite@gmail.com> wrote:
>>> On 3 May 2017 at 19:07, Dan van der Ster <dan@vanderster.com> wrote:
>>>> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
>>>> your 30% boost was when going from throughput-performance to
>>>> dma_latency=0, right? I'm trying to understand what is the incremental
>>>> improvement from 1 to 0.
>>>
>>> Probably minimal given that represents a state transition latency
>>> taking only 1us. Presumably the main issue is when the CPU can drop
>>> into the lower states and the compounding impact of that over time. I
>>> will do some simple characterisation of that over the next couple of
>>> weeks and report back...
>>>
>>> --
>>> Cheers,
>>> ~Blairo
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> --
>> Cheers,
>> Brad
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Cheers,
Brad


* Re: Intel power tuning - 30% throughput performance increase
       [not found]                       ` <CA+z5DsyGBGZ2PyqnJtOZq3etixjx86e=WE0TCkdkRYbDvbBQaA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-05-05  1:25                         ` Blair Bethwaite
  0 siblings, 0 replies; 6+ messages in thread
From: Blair Bethwaite @ 2017-05-05  1:25 UTC (permalink / raw)
  To: Brad Hubbard; +Cc: Ceph Development, ceph-users-idqoXFIVOFJgJs9I8MT0rw



Sounds good, but could we also have a config option to set it before
dropping root?
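Setting it before dropping root matters because of how the interface works: the kernel keeps the PM QoS request active only while the file descriptor that wrote it stays open. A sketch of such a config-driven setter (hypothetical function names; requires root and the standard Linux PM QoS device):

```python
import os
import struct

def encode_latency(us):
    # /dev/cpu_dma_latency expects a native-endian 32-bit signed integer.
    return struct.pack("i", us)

def pin_cpu_dma_latency(max_us, path="/dev/cpu_dma_latency"):
    """Request that CPU wakeup latency stay <= max_us microseconds.
    The request is dropped as soon as the fd is closed, so the caller
    must keep the returned fd open for the life of the process."""
    fd = os.open(path, os.O_WRONLY)
    os.write(fd, encode_latency(max_us))
    return fd
```

A daemon would call this once during startup, while still privileged, and simply never close the fd.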

On 4 May 2017 20:28, "Brad Hubbard" <bhubbard-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

On Thu, May 4, 2017 at 10:58 AM, Haomai Wang <haomai-p1s0O0zx/60@public.gmane.org> wrote:
> refer to https://github.com/ceph/ceph/pull/5013

How about we issue a warning about possible performance implications
if we detect this is not set to 1 *or* 0 at startup?

>
> On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard <bhubbard-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>> +ceph-devel to get input on whether we want/need to check the value of
>> /dev/cpu_dma_latency (platform dependent) at startup and issue a
>> warning, or whether documenting this would suffice?
>>
>> Any doc contribution would be welcomed.
>>
>> On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
>> <blair.bethwaite-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>>> On 3 May 2017 at 19:07, Dan van der Ster <dan-EOCVfBHj35C+XT7JhA+gdA@public.gmane.org> wrote:
>>>> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
>>>> your 30% boost was when going from throughput-performance to
>>>> dma_latency=0, right? I'm trying to understand what is the incremental
>>>> improvement from 1 to 0.
>>>
>>> Probably minimal given that represents a state transition latency
>>> taking only 1us. Presumably the main issue is when the CPU can drop
>>> into the lower states and the compounding impact of that over time. I
>>> will do some simple characterisation of that over the next couple of
>>> weeks and report back...
>>>
>>> --
>>> Cheers,
>>> ~Blairo
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> --
>> Cheers,
>> Brad
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Cheers,
Brad
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html



* Re: [ceph-users] Intel power tuning - 30% throughput performance increase
  2017-05-05  0:28                   ` Brad Hubbard
       [not found]                     ` <CA+z5DsyGBGZ2PyqnJtOZq3etixjx86e=WE0TCkdkRYbDvbBQaA@mail.gmail.com>
@ 2017-05-19  7:23                     ` Xiaoxi Chen
       [not found]                       ` <CALJWc2tv8wRWYciRfMsGFV5wZk2eko5dNRdWm=A5jHOCVV1XGg@mail.gmail.com>
  1 sibling, 1 reply; 6+ messages in thread
From: Xiaoxi Chen @ 2017-05-19  7:23 UTC (permalink / raw)
  To: Brad Hubbard; +Cc: Haomai Wang, ceph-devel, ceph-users

Would it be better to document it first under "Known system-wide tuning
knobs" in the docs?


2017-05-05 8:28 GMT+08:00 Brad Hubbard <bhubbard@redhat.com>:
> On Thu, May 4, 2017 at 10:58 AM, Haomai Wang <haomai@xsky.com> wrote:
>> refer to https://github.com/ceph/ceph/pull/5013
>
> How about we issue a warning about possible performance implications
> if we detect this is not set to 1 *or* 0 at startup?
>
>>
>> On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard <bhubbard@redhat.com> wrote:
>>> +ceph-devel to get input on whether we want/need to check the value of
>>> /dev/cpu_dma_latency (platform dependent) at startup and issue a
>>> warning, or whether documenting this would suffice?
>>>
>>> Any doc contribution would be welcomed.
>>>
>>> On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
>>> <blair.bethwaite@gmail.com> wrote:
>>>> On 3 May 2017 at 19:07, Dan van der Ster <dan@vanderster.com> wrote:
>>>>> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
>>>>> your 30% boost was when going from throughput-performance to
>>>>> dma_latency=0, right? I'm trying to understand what is the incremental
>>>>> improvement from 1 to 0.
>>>>
>>>> Probably minimal given that represents a state transition latency
>>>> taking only 1us. Presumably the main issue is when the CPU can drop
>>>> into the lower states and the compounding impact of that over time. I
>>>> will do some simple characterisation of that over the next couple of
>>>> weeks and report back...
>>>>
>>>> --
>>>> Cheers,
>>>> ~Blairo
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@lists.ceph.com
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>> --
>>> Cheers,
>>> Brad
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Cheers,
> Brad
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: [ceph-users] Intel power tuning - 30% throughput performance increase
       [not found]                       ` <CALJWc2tv8wRWYciRfMsGFV5wZk2eko5dNRdWm=A5jHOCVV1XGg@mail.gmail.com>
@ 2017-05-20  0:14                         ` Brad Hubbard
  0 siblings, 0 replies; 6+ messages in thread
From: Brad Hubbard @ 2017-05-20  0:14 UTC (permalink / raw)
  To: xiaoguang fan; +Cc: Xiaoxi Chen, Haomai Wang, ceph-devel, ceph-users

It just came to my attention that Intel has advised Red Hat never to
lock in C0 as it may affect the life expectancy of server components
such as fans and the CPUs themselves.

FYI, YMMV.
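For anyone wanting to check which C-states and exit latencies their platform actually exposes (the "1us state transition" discussed earlier in the thread), the standard Linux cpuidle sysfs tree can be inspected. A minimal sketch (Linux-only; returns an empty list where cpuidle is absent):

```python
import glob
import os

def list_cpuidle_states(cpu=0):
    """Return [(state_name, exit_latency_us)] for one CPU's idle states,
    read from /sys/devices/system/cpu/cpuN/cpuidle (Linux cpuidle)."""
    base = "/sys/devices/system/cpu/cpu%d/cpuidle" % cpu
    states = []
    for d in sorted(glob.glob(os.path.join(base, "state*"))):
        with open(os.path.join(d, "name")) as f:
            name = f.read().strip()
        with open(os.path.join(d, "latency")) as f:
            latency_us = int(f.read())
        states.append((name, latency_us))
    return states
```

Comparing these latencies against the cpu_dma_latency bound shows which states the kernel is allowed to enter, which is useful context for the fan/CPU-longevity caveat above.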

On Fri, May 19, 2017 at 5:53 PM, xiaoguang fan
<fanxiaoguang008@gmail.com> wrote:
> I have done a test with C-states disabled, but performance didn't increase
>
>
>
> 2017-05-19 15:23 GMT+08:00 Xiaoxi Chen <superdebuger@gmail.com>:
>>
>> Would it be better to document it first under "Known system-wide tuning
>> knobs" in the docs?
>>
>>
>> 2017-05-05 8:28 GMT+08:00 Brad Hubbard <bhubbard@redhat.com>:
>> > On Thu, May 4, 2017 at 10:58 AM, Haomai Wang <haomai@xsky.com> wrote:
>> >> refer to https://github.com/ceph/ceph/pull/5013
>> >
>> > How about we issue a warning about possible performance implications
>> > if we detect this is not set to 1 *or* 0 at startup?
>> >
>> >>
>> >> On Thu, May 4, 2017 at 7:56 AM, Brad Hubbard <bhubbard@redhat.com>
>> >> wrote:
>> >>> +ceph-devel to get input on whether we want/need to check the value of
>> >>> /dev/cpu_dma_latency (platform dependent) at startup and issue a
>> >>> warning, or whether documenting this would suffice?
>> >>>
>> >>> Any doc contribution would be welcomed.
>> >>>
>> >>> On Wed, May 3, 2017 at 7:18 PM, Blair Bethwaite
>> >>> <blair.bethwaite@gmail.com> wrote:
>> >>>> On 3 May 2017 at 19:07, Dan van der Ster <dan@vanderster.com> wrote:
>> >>>>> Whether cpu_dma_latency should be 0 or 1, I'm not sure yet. I assume
>> >>>>> your 30% boost was when going from throughput-performance to
>> >>>>> dma_latency=0, right? I'm trying to understand what is the
>> >>>>> incremental
>> >>>>> improvement from 1 to 0.
>> >>>>
>> >>>> Probably minimal given that represents a state transition latency
>> >>>> taking only 1us. Presumably the main issue is when the CPU can drop
>> >>>> into the lower states and the compounding impact of that over time. I
>> >>>> will do some simple characterisation of that over the next couple of
>> >>>> weeks and report back...
>> >>>>
>> >>>> --
>> >>>> Cheers,
>> >>>> ~Blairo
>> >>>> _______________________________________________
>> >>>> ceph-users mailing list
>> >>>> ceph-users@lists.ceph.com
>> >>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Cheers,
>> >>> Brad
>> >>> _______________________________________________
>> >>> ceph-users mailing list
>> >>> ceph-users@lists.ceph.com
>> >>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>> >
>> >
>> > --
>> > Cheers,
>> > Brad
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>



-- 
Cheers,
Brad


end of thread, other threads:[~2017-05-20  0:14 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <CA+z5Dsx1Fwkxxk=5N3u_Ox6m+VC_vW2oexWsF29ispcm+Hf8bA@mail.gmail.com>
     [not found] ` <CABZ+qqk-cN25u8vf-XWoTincF5M90_JjBGnedHvnfAcL8CUDwQ@mail.gmail.com>
     [not found]   ` <CA+z5DswHcgddrbYnwDH7o+58sASO5jZA5ZK+-NQ60Rw+bPRKkA@mail.gmail.com>
     [not found]     ` <CABZ+qq=+1SvbPhZOBCS0Be4Z=6LXsgck2AgK+o40BOU_BwDngQ@mail.gmail.com>
     [not found]       ` <CA+z5DsydfEGGzmKT+mf7BqpkwsD5CwL0hNMTW2OXFPeJjDWSGQ@mail.gmail.com>
     [not found]         ` <CABZ+qqmu2C4jdu50X2u38K9cb57c0=BdfKG0BhrjJfTQYOPcjg@mail.gmail.com>
     [not found]           ` <CA+z5DszC6EmYSPpZsj3O0Neb-qMhfmUDjjXWpfkgukrqTyZa6A@mail.gmail.com>
     [not found]             ` <CA+z5DszC6EmYSPpZsj3O0Neb-qMhfmUDjjXWpfkgukrqTyZa6A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-05-03 23:56               ` Intel power tuning - 30% throughput performance increase Brad Hubbard
2017-05-04  0:58                 ` [ceph-users] " Haomai Wang
2017-05-05  0:28                   ` Brad Hubbard
     [not found]                     ` <CA+z5DsyGBGZ2PyqnJtOZq3etixjx86e=WE0TCkdkRYbDvbBQaA@mail.gmail.com>
     [not found]                       ` <CA+z5DsyGBGZ2PyqnJtOZq3etixjx86e=WE0TCkdkRYbDvbBQaA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-05-05  1:25                         ` Blair Bethwaite
2017-05-19  7:23                     ` [ceph-users] " Xiaoxi Chen
     [not found]                       ` <CALJWc2tv8wRWYciRfMsGFV5wZk2eko5dNRdWm=A5jHOCVV1XGg@mail.gmail.com>
2017-05-20  0:14                         ` Brad Hubbard
