* Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
@ 2014-06-13 14:59 Stefan Priebe
       [not found] ` <539B11C4.1010400-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Priebe @ 2014-06-13 14:59 UTC (permalink / raw)
  To: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

Hi,

While testing Firefly I came into the situation where I had a client on
which the latest Dumpling packages were installed (0.67.9).

As my pool has hashpspool set to false and the tunables are set to their
defaults, it can talk to my Firefly Ceph storage.

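For reference, the pool flags and the CRUSH tunables can be checked
roughly like this (a sketch; exact commands and output vary by release):

ceph osd dump | grep '^pool'     # per-pool flags; hashpspool shows up here if set
ceph osd crush show-tunables     # dump the current CRUSH tunables
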
For random 4k writes using fio with librbd, 32 jobs, and an iodepth of
32, I get these results:

librbd / librados2 from dumpling:
   write: io=3020.9MB, bw=103083KB/s, iops=25770, runt= 30008msec
   WRITE: io=3020.9MB, aggrb=103082KB/s, minb=103082KB/s, 
maxb=103082KB/s, mint=30008msec, maxt=30008msec

librbd / librados2 from firefly:
   write: io=7344.3MB, bw=83537KB/s, iops=20884, runt= 90026msec
   WRITE: io=7344.3MB, aggrb=83537KB/s, minb=83537KB/s, maxb=83537KB/s, 
mint=90026msec, maxt=90026msec

Stefan


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found] ` <539B11C4.1010400-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
@ 2014-06-26  0:17   ` Gregory Farnum
  2014-06-26  2:53     ` [ceph-users] " Christian Balzer
       [not found]     ` <CAPYLRzjL1JrgsThUejDHQsG3qcK1ZTXgcO2mHEuoHN-FbwVwsQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 2 replies; 13+ messages in thread
From: Gregory Farnum @ 2014-06-26  0:17 UTC (permalink / raw)
  To: Stefan Priebe; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

Sorry we let this drop; we've all been busy traveling and things.

There have been a lot of changes to librados between Dumpling and
Firefly, but we have no idea what would have made it slower. Can you
provide more details about how you were running these tests?
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Fri, Jun 13, 2014 at 7:59 AM, Stefan Priebe <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
> Hi,
>
> While testing Firefly I came into the situation where I had a client on
> which the latest Dumpling packages were installed (0.67.9).
>
> As my pool has hashpspool set to false and the tunables are set to their
> defaults, it can talk to my Firefly Ceph storage.
>
> For random 4k writes using fio with librbd, 32 jobs, and an iodepth of
> 32, I get these results:
>
> librbd / librados2 from dumpling:
>   write: io=3020.9MB, bw=103083KB/s, iops=25770, runt= 30008msec
>   WRITE: io=3020.9MB, aggrb=103082KB/s, minb=103082KB/s, maxb=103082KB/s,
> mint=30008msec, maxt=30008msec
>
> librbd / librados2 from firefly:
>   write: io=7344.3MB, bw=83537KB/s, iops=20884, runt= 90026msec
>   WRITE: io=7344.3MB, aggrb=83537KB/s, minb=83537KB/s, maxb=83537KB/s,
> mint=90026msec, maxt=90026msec
>
> Stefan
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
  2014-06-26  0:17   ` Gregory Farnum
@ 2014-06-26  2:53     ` Christian Balzer
       [not found]     ` <CAPYLRzjL1JrgsThUejDHQsG3qcK1ZTXgcO2mHEuoHN-FbwVwsQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  1 sibling, 0 replies; 13+ messages in thread
From: Christian Balzer @ 2014-06-26  2:53 UTC (permalink / raw)
  Cc: ceph-devel, ceph-users

On Wed, 25 Jun 2014 17:17:02 -0700 Gregory Farnum wrote:

> Sorry we let this drop; we've all been busy traveling and things.
> 
> There have been a lot of changes to librados between Dumpling and
> Firefly, but we have no idea what would have made it slower. Can you
> provide more details about how you were running these tests?

This sounds a lot like what I saw when going from Emperor to Firefly;
see my post from a month ago:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg10095.html

Though here the decrease in IOPS is even higher.

Regards,

Christian


> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 
> 
> On Fri, Jun 13, 2014 at 7:59 AM, Stefan Priebe <s.priebe@profihost.ag>
> wrote:
> > Hi,
> >
> > While testing Firefly I came into the situation where I had a client
> > on which the latest Dumpling packages were installed (0.67.9).
> >
> > As my pool has hashpspool set to false and the tunables are set to
> > their defaults, it can talk to my Firefly Ceph storage.
> >
> > For random 4k writes using fio with librbd, 32 jobs, and an iodepth
> > of 32, I get these results:
> >
> > librbd / librados2 from dumpling:
> >   write: io=3020.9MB, bw=103083KB/s, iops=25770, runt= 30008msec
> >   WRITE: io=3020.9MB, aggrb=103082KB/s, minb=103082KB/s,
> > maxb=103082KB/s, mint=30008msec, maxt=30008msec
> >
> > librbd / librados2 from firefly:
> >   write: io=7344.3MB, bw=83537KB/s, iops=20884, runt= 90026msec
> >   WRITE: io=7344.3MB, aggrb=83537KB/s, minb=83537KB/s, maxb=83537KB/s,
> > mint=90026msec, maxt=90026msec
> >
> > Stefan
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@gol.com   	Global OnLine Japan/Fusion Communications
http://www.gol.com/


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found]     ` <CAPYLRzjL1JrgsThUejDHQsG3qcK1ZTXgcO2mHEuoHN-FbwVwsQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-06-27  6:49       ` Stefan Priebe - Profihost AG
  2014-07-01 22:51         ` [ceph-users] " Gregory Farnum
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Priebe - Profihost AG @ 2014-06-27  6:49 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

Hi Greg,

On 26.06.2014 02:17, Gregory Farnum wrote:
> Sorry we let this drop; we've all been busy traveling and things.
> 
> There have been a lot of changes to librados between Dumpling and
> Firefly, but we have no idea what would have made it slower. Can you
> provide more details about how you were running these tests?

It's just a normal fio run:
fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
--readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
--runtime=90 --numjobs=32 --direct=1 --group

Running once with the Firefly libs and once with the Dumpling libs. The
target is always the same pool on a Firefly Ceph storage cluster.

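For convenience, the same run expressed as a fio job file (a sketch with
identical parameters; --group is read here as group_reporting):

[global]
ioengine=rbd
pool=teststor
rbdname=fio_test2
bs=4k
rw=randwrite
iodepth=32
numjobs=32
runtime=90
direct=1
invalidate=0
group_reporting

[foo]
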
Stefan

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> 
> 
> On Fri, Jun 13, 2014 at 7:59 AM, Stefan Priebe <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>> Hi,
>>
>> While testing Firefly I came into the situation where I had a client
>> on which the latest Dumpling packages were installed (0.67.9).
>>
>> As my pool has hashpspool set to false and the tunables are set to
>> their defaults, it can talk to my Firefly Ceph storage.
>>
>> For random 4k writes using fio with librbd, 32 jobs, and an iodepth of
>> 32, I get these results:
>>
>> librbd / librados2 from dumpling:
>>   write: io=3020.9MB, bw=103083KB/s, iops=25770, runt= 30008msec
>>   WRITE: io=3020.9MB, aggrb=103082KB/s, minb=103082KB/s, maxb=103082KB/s,
>> mint=30008msec, maxt=30008msec
>>
>> librbd / librados2 from firefly:
>>   write: io=7344.3MB, bw=83537KB/s, iops=20884, runt= 90026msec
>>   WRITE: io=7344.3MB, aggrb=83537KB/s, minb=83537KB/s, maxb=83537KB/s,
>> mint=90026msec, maxt=90026msec
>>
>> Stefan
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
  2014-06-27  6:49       ` Stefan Priebe - Profihost AG
@ 2014-07-01 22:51         ` Gregory Farnum
       [not found]           ` <CAPYLRzhJ_3WkWSFUAx+w+gt5PTLD7SYmSyQ-xPRZ31ahN4a1Ag-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Gregory Farnum @ 2014-07-01 22:51 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG; +Cc: ceph-devel, ceph-users

On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
<s.priebe@profihost.ag> wrote:
> Hi Greg,
>
> On 26.06.2014 02:17, Gregory Farnum wrote:
>> Sorry we let this drop; we've all been busy traveling and things.
>>
>> There have been a lot of changes to librados between Dumpling and
>> Firefly, but we have no idea what would have made it slower. Can you
>> provide more details about how you were running these tests?
>
> It's just a normal fio run:
> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
> --runtime=90 --numjobs=32 --direct=1 --group
>
> Running once with the Firefly libs and once with the Dumpling libs. The
> target is always the same pool on a Firefly Ceph storage cluster.

What's the backing cluster you're running against? What kind of CPU
usage do you see with both? 25k IOPS is definitely getting up there,
but I'd like some guidance about whether we're looking for a reduction
in parallelism, or an increase in per-op costs, or something else.
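
(One rough way to capture client-side CPU usage during such a fio run,
assuming the sysstat package is installed:

pidstat -u 1 -p $(pgrep -d, fio)

This prints per-process CPU figures once a second for every fio process.)
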
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found]           ` <CAPYLRzhJ_3WkWSFUAx+w+gt5PTLD7SYmSyQ-xPRZ31ahN4a1Ag-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-07-02 13:01             ` Stefan Priebe - Profihost AG
       [not found]               ` <53B40292.2000108-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Priebe - Profihost AG @ 2014-07-02 13:01 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

On 02.07.2014 00:51, Gregory Farnum wrote:
> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
> <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>> Hi Greg,
>>
>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>> Sorry we let this drop; we've all been busy traveling and things.
>>>
>>> There have been a lot of changes to librados between Dumpling and
>>> Firefly, but we have no idea what would have made it slower. Can you
>>> provide more details about how you were running these tests?
>>
>> It's just a normal fio run:
>> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
>> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
>> --runtime=90 --numjobs=32 --direct=1 --group
>>
>> Running once with the Firefly libs and once with the Dumpling libs. The
>> target is always the same pool on a Firefly Ceph storage cluster.
> 
> What's the backing cluster you're running against? What kind of CPU
> usage do you see with both? 25k IOPS is definitely getting up there,
> but I'd like some guidance about whether we're looking for a reduction
> in parallelism, or an increase in per-op costs, or something else.

Hi Greg,

I don't have that test cluster anymore; it had to go into production
with Dumpling, so I can't tell you.

Sorry.

Stefan

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found]               ` <53B40292.2000108-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
@ 2014-07-02 13:07                 ` Haomai Wang
       [not found]                   ` <CACJqLyZYpEJYUn5m5B4LCGGfszY_xPjui5q7HShQuotBkVDt4Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Haomai Wang @ 2014-07-02 13:07 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

Could you give some perf counters from the rbd client side, such as op latency?

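A sketch of one way to collect those, assuming the fio client has an
admin socket configured in ceph.conf:

[client]
admin socket = /var/run/ceph/$cluster-$name.$pid.asok

Then, while the benchmark runs (the socket path is a placeholder):

ceph --admin-daemon /var/run/ceph/<client-socket>.asok perf dump

The objecter and librbd sections of the dump should include client-side
op counters and latencies.
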
On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG
<s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
> On 02.07.2014 00:51, Gregory Farnum wrote:
>> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
>> <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>>> Hi Greg,
>>>
>>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>>> Sorry we let this drop; we've all been busy traveling and things.
>>>>
>>>> There have been a lot of changes to librados between Dumpling and
>>>> Firefly, but we have no idea what would have made it slower. Can you
>>>> provide more details about how you were running these tests?
>>>
>>> It's just a normal fio run:
>>> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
>>> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
>>> --runtime=90 --numjobs=32 --direct=1 --group
>>>
>>> Running once with the Firefly libs and once with the Dumpling libs. The
>>> target is always the same pool on a Firefly Ceph storage cluster.
>>
>> What's the backing cluster you're running against? What kind of CPU
>> usage do you see with both? 25k IOPS is definitely getting up there,
>> but I'd like some guidance about whether we're looking for a reduction
>> in parallelism, or an increase in per-op costs, or something else.
>
> Hi Greg,
>
> I don't have that test cluster anymore; it had to go into production
> with Dumpling, so I can't tell you.
>
> Sorry.
>
> Stefan
>
>> -Greg
>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Best Regards,

Wheat


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found]                   ` <CACJqLyZYpEJYUn5m5B4LCGGfszY_xPjui5q7HShQuotBkVDt4Q-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-07-02 13:12                     ` Stefan Priebe - Profihost AG
  2014-07-02 14:00                       ` [ceph-users] " Gregory Farnum
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Priebe - Profihost AG @ 2014-07-02 13:12 UTC (permalink / raw)
  To: Haomai Wang; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users


On 02.07.2014 15:07, Haomai Wang wrote:
> Could you give some perf counters from the rbd client side, such as op latency?

Sorry, I don't have any counters. As this mail went unanswered for some
days, I thought nobody had an idea or could help.

Stefan

> On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG
> <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>> On 02.07.2014 00:51, Gregory Farnum wrote:
>>> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
>>> <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>>>> Hi Greg,
>>>>
>>>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>>>> Sorry we let this drop; we've all been busy traveling and things.
>>>>>
>>>>> There have been a lot of changes to librados between Dumpling and
>>>>> Firefly, but we have no idea what would have made it slower. Can you
>>>>> provide more details about how you were running these tests?
>>>>
>>>> It's just a normal fio run:
>>>> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
>>>> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
>>>> --runtime=90 --numjobs=32 --direct=1 --group
>>>>
>>>> Running once with the Firefly libs and once with the Dumpling libs. The
>>>> target is always the same pool on a Firefly Ceph storage cluster.
>>>
>>> What's the backing cluster you're running against? What kind of CPU
>>> usage do you see with both? 25k IOPS is definitely getting up there,
>>> but I'd like some guidance about whether we're looking for a reduction
>>> in parallelism, or an increase in per-op costs, or something else.
>>
>> Hi Greg,
>>
>> I don't have that test cluster anymore; it had to go into production
>> with Dumpling, so I can't tell you.
>>
>> Sorry.
>>
>> Stefan
>>
>>> -Greg
>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 


* Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
  2014-07-02 13:12                     ` Stefan Priebe - Profihost AG
@ 2014-07-02 14:00                       ` Gregory Farnum
       [not found]                         ` <CAPYLRziFqZYFn4E22oE+-HXXcRppzwY=h2TbDuCErSixzZ=eWQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Gregory Farnum @ 2014-07-02 14:00 UTC (permalink / raw)
  To: Stefan Priebe - Profihost AG; +Cc: Haomai Wang, ceph-devel, ceph-users

Yeah, it's fighting for attention with a lot of other urgent stuff. :(

Anyway, even if you can't look up any details or reproduce at this
time, I'm sure you know what shape the cluster was (number of OSDs,
running on SSDs or hard drives, etc), and that would be useful
guidance. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Wed, Jul 2, 2014 at 6:12 AM, Stefan Priebe - Profihost AG
<s.priebe@profihost.ag> wrote:
>
> On 02.07.2014 15:07, Haomai Wang wrote:
>> Could you give some perf counters from the rbd client side, such as op latency?
>
> Sorry, I don't have any counters. As this mail went unanswered for
> some days, I thought nobody had an idea or could help.
>
> Stefan
>
>> On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG
>> <s.priebe@profihost.ag> wrote:
>>> On 02.07.2014 00:51, Gregory Farnum wrote:
>>>> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
>>>> <s.priebe@profihost.ag> wrote:
>>>>> Hi Greg,
>>>>>
>>>>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>>>>> Sorry we let this drop; we've all been busy traveling and things.
>>>>>>
>>>>>> There have been a lot of changes to librados between Dumpling and
>>>>>> Firefly, but we have no idea what would have made it slower. Can you
>>>>>> provide more details about how you were running these tests?
>>>>>
>>>>> It's just a normal fio run:
>>>>> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
>>>>> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
>>>>> --runtime=90 --numjobs=32 --direct=1 --group
>>>>>
>>>>> Running once with the Firefly libs and once with the Dumpling libs. The
>>>>> target is always the same pool on a Firefly Ceph storage cluster.
>>>>
>>>> What's the backing cluster you're running against? What kind of CPU
>>>> usage do you see with both? 25k IOPS is definitely getting up there,
>>>> but I'd like some guidance about whether we're looking for a reduction
>>>> in parallelism, or an increase in per-op costs, or something else.
>>>
>>> Hi Greg,
>>>
>>> I don't have that test cluster anymore; it had to go into production
>>> with Dumpling, so I can't tell you.
>>>
>>> Sorry.
>>>
>>> Stefan
>>>
>>>> -Greg
>>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found]                         ` <CAPYLRziFqZYFn4E22oE+-HXXcRppzwY=h2TbDuCErSixzZ=eWQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2014-07-02 19:00                           ` Stefan Priebe
       [not found]                             ` <53B456E3.1060909-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Priebe @ 2014-07-02 19:00 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users


On 02.07.2014 16:00, Gregory Farnum wrote:
> Yeah, it's fighting for attention with a lot of other urgent stuff. :(
>
> Anyway, even if you can't look up any details or reproduce at this
> time, I'm sure you know what shape the cluster was (number of OSDs,
> running on SSDs or hard drives, etc), and that would be useful
> guidance. :)

Sure

Number of OSDs: 24
Each OSD has an SSD which, tested with fio before installing Ceph, was
capable of 70,000 IOPS for 4k writes and 580 MB/s for sequential 1 MB
writes.

Single Xeon E5-1620 v2 @ 3.70GHz

48GB RAM

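For context, a raw-device baseline of that shape is typically gathered
with something like the following (illustrative only; /dev/sdX is a
placeholder, and a raw write test destroys the data on the device):

fio --name=rand4k --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --bs=4k --rw=randwrite --iodepth=32 --runtime=60
fio --name=seq1m --filename=/dev/sdX --ioengine=libaio --direct=1 \
    --bs=1M --rw=write --iodepth=4 --runtime=60
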
Stefan

> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Wed, Jul 2, 2014 at 6:12 AM, Stefan Priebe - Profihost AG
> <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>>
>> On 02.07.2014 15:07, Haomai Wang wrote:
>>> Could you give some perf counters from the rbd client side, such as op latency?
>>
>> Sorry, I don't have any counters. As this mail went unanswered for
>> some days, I thought nobody had an idea or could help.
>>
>> Stefan
>>
>>> On Wed, Jul 2, 2014 at 9:01 PM, Stefan Priebe - Profihost AG
>>> <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>>>> On 02.07.2014 00:51, Gregory Farnum wrote:
>>>>> On Thu, Jun 26, 2014 at 11:49 PM, Stefan Priebe - Profihost AG
>>>>> <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>>>>>> Hi Greg,
>>>>>>
>>>>>> On 26.06.2014 02:17, Gregory Farnum wrote:
>>>>>>> Sorry we let this drop; we've all been busy traveling and things.
>>>>>>>
>>>>>>> There have been a lot of changes to librados between Dumpling and
>>>>>>> Firefly, but we have no idea what would have made it slower. Can you
>>>>>>> provide more details about how you were running these tests?
>>>>>>
>>>>>> It's just a normal fio run:
>>>>>> fio --ioengine=rbd --bs=4k --name=foo --invalidate=0
>>>>>> --readwrite=randwrite --iodepth=32 --rbdname=fio_test2 --pool=teststor
>>>>>> --runtime=90 --numjobs=32 --direct=1 --group
>>>>>>
>>>>>> Running once with the Firefly libs and once with the Dumpling libs. The
>>>>>> target is always the same pool on a Firefly Ceph storage cluster.
>>>>>
>>>>> What's the backing cluster you're running against? What kind of CPU
>>>>> usage do you see with both? 25k IOPS is definitely getting up there,
>>>>> but I'd like some guidance about whether we're looking for a reduction
>>>>> in parallelism, or an increase in per-op costs, or something else.
>>>>
>>>> Hi Greg,
>>>>
>>>> I don't have that test cluster anymore; it had to go into production
>>>> with Dumpling, so I can't tell you.
>>>>
>>>> Sorry.
>>>>
>>>> Stefan
>>>>
>>>>> -Greg
>>>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found]                             ` <53B456E3.1060909-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
@ 2014-07-02 19:36                               ` Gregory Farnum
  2014-07-02 19:44                                 ` [ceph-users] " Stefan Priebe
  0 siblings, 1 reply; 13+ messages in thread
From: Gregory Farnum @ 2014-07-02 19:36 UTC (permalink / raw)
  To: Stefan Priebe; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
>
> On 02.07.2014 16:00, Gregory Farnum wrote:
>
>> Yeah, it's fighting for attention with a lot of other urgent stuff. :(
>>
>> Anyway, even if you can't look up any details or reproduce at this
>> time, I'm sure you know what shape the cluster was (number of OSDs,
>> running on SSDs or hard drives, etc), and that would be useful
>> guidance. :)
>
>
> Sure
>
> Number of OSDs: 24
> Each OSD has an SSD which, tested with fio before installing Ceph, was
> capable of 70,000 IOPS for 4k writes and 580 MB/s for sequential 1 MB
> writes.
>
> Single Xeon E5-1620 v2 @ 3.70GHz
>
> 48GB RAM

Awesome, thanks.

I went through the changelogs on the librados/, osdc/, and msg/
directories to see if I could find any likely change candidates
between Dumpling and Firefly and couldn't see any issues. :( But I
suspect that the sharding changes coming will more than make up the
difference, so you might want to plan on checking that out when it
arrives, even if you don't want to deploy it to production.
-Greg


* Re: [ceph-users] Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
  2014-07-02 19:36                               ` Gregory Farnum
@ 2014-07-02 19:44                                 ` Stefan Priebe
       [not found]                                   ` <53B46106.7050306-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
  0 siblings, 1 reply; 13+ messages in thread
From: Stefan Priebe @ 2014-07-02 19:44 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: Haomai Wang, ceph-devel, ceph-users

Hi Greg,

On 02.07.2014 21:36, Gregory Farnum wrote:
> On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe <s.priebe@profihost.ag> wrote:
>>
>> On 02.07.2014 16:00, Gregory Farnum wrote:
>>
>>> Yeah, it's fighting for attention with a lot of other urgent stuff. :(
>>>
>>> Anyway, even if you can't look up any details or reproduce at this
>>> time, I'm sure you know what shape the cluster was (number of OSDs,
>>> running on SSDs or hard drives, etc), and that would be useful
>>> guidance. :)
>>
>>
>> Sure
>>
>> Number of OSDs: 24
>> Each OSD has an SSD which, tested with fio before installing Ceph, was
>> capable of 70,000 IOPS for 4k writes and 580 MB/s for sequential 1 MB
>> writes.
>>
>> Single Xeon E5-1620 v2 @ 3.70GHz
>>
>> 48GB RAM
>
> Awesome, thanks.
>
> I went through the changelogs on the librados/, osdc/, and msg/
> directories to see if I could find any likely change candidates
> between Dumpling and Firefly and couldn't see any issues. :( But I
> suspect that the sharding changes coming will more than make up the
> difference, so you might want to plan on checking that out when it
> arrives, even if you don't want to deploy it to production.

To which changes do you refer? Will they be part of Firefly, or
backported to it?

> -Greg
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>


* Re: Why is librbd1 / librados2 from Firefly 20% slower than the one from dumpling?
       [not found]                                   ` <53B46106.7050306-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
@ 2014-07-02 19:46                                     ` Gregory Farnum
  0 siblings, 0 replies; 13+ messages in thread
From: Gregory Farnum @ 2014-07-02 19:46 UTC (permalink / raw)
  To: Stefan Priebe; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users

On Wed, Jul 2, 2014 at 12:44 PM, Stefan Priebe <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org> wrote:
> Hi Greg,
>
> On 02.07.2014 21:36, Gregory Farnum wrote:
>>
>> On Wed, Jul 2, 2014 at 12:00 PM, Stefan Priebe <s.priebe-2Lf/h1ldwEHR5kwTpVNS9A@public.gmane.org>
>> wrote:
>>>
>>>
>>> On 02.07.2014 16:00, Gregory Farnum wrote:
>>>
>>>> Yeah, it's fighting for attention with a lot of other urgent stuff. :(
>>>>
>>>> Anyway, even if you can't look up any details or reproduce at this
>>>> time, I'm sure you know what shape the cluster was (number of OSDs,
>>>> running on SSDs or hard drives, etc), and that would be useful
>>>> guidance. :)
>>>
>>>
>>>
>>> Sure
>>>
>>> Number of OSDs: 24
>>> Each OSD has an SSD which, tested with fio before installing Ceph,
>>> was capable of 70,000 IOPS for 4k writes and 580 MB/s for sequential
>>> 1 MB writes.
>>>
>>> Single Xeon E5-1620 v2 @ 3.70GHz
>>>
>>> 48GB RAM
>>
>>
>> Awesome, thanks.
>>
>> I went through the changelogs on the librados/, osdc/, and msg/
>> directories to see if I could find any likely change candidates
>> between Dumpling and Firefly and couldn't see any issues. :( But I
>> suspect that the sharding changes coming will more than make up the
>> difference, so you might want to plan on checking that out when it
>> arrives, even if you don't want to deploy it to production.
>
>
> To which changes do you refer? Will they be part or backported of/to
> firefly?

Yehuda's got a pretty big patchset that is sharding up the "big
Objecter lock" into many smaller mutexes and RWLocks that will make it
much more parallel. He's on vacation just now but I understand it's
almost ready to merge; I don't think it'll be suitable for backport to
firefly, though (it's big).
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
