* ceph-osd cpu usage
@ 2012-11-15 10:56 Stefan Priebe - Profihost AG
2012-11-15 11:18 ` Alexandre DERUMIER
2012-11-15 15:14 ` Sage Weil
0 siblings, 2 replies; 12+ messages in thread
From: Stefan Priebe - Profihost AG @ 2012-11-15 10:56 UTC (permalink / raw)
To: ceph-devel
[-- Attachment #1: Type: text/plain, Size: 368 bytes --]
Hello list,
my main problem right now is that Ceph does not scale for me (with more
VMs using RBD). It does not scale because ceph-osd is using all of my
CPU cores (8 cores) all the time with just 4 SSDs, while the SSDs are
far from being fully loaded.
What is the best way to find out what the ceph-osd process is doing all
the time?
A gperf graph is attached.
Greets,
Stefan
[-- Attachment #2: out.pdf --]
[-- Type: application/pdf, Size: 19302 bytes --]
* Re: ceph-osd cpu usage
2012-11-15 10:56 ceph-osd cpu usage Stefan Priebe - Profihost AG
@ 2012-11-15 11:18 ` Alexandre DERUMIER
2012-11-15 12:19 ` Stefan Priebe - Profihost AG
2012-11-15 15:14 ` Sage Weil
1 sibling, 1 reply; 12+ messages in thread
From: Alexandre DERUMIER @ 2012-11-15 11:18 UTC (permalink / raw)
To: Stefan Priebe - Profihost AG; +Cc: ceph-devel
Is CPU usage the same for read and write?
----- Original message -----
From: "Stefan Priebe - Profihost AG" <s.priebe@profihost.ag>
To: ceph-devel@vger.kernel.org
Sent: Thursday, 15 November 2012 11:56:37
Subject: ceph-osd cpu usage
Hello list,
my main problem right now is that Ceph does not scale for me (with more
VMs using RBD). It does not scale because ceph-osd is using all of my
CPU cores (8 cores) all the time with just 4 SSDs, while the SSDs are
far from being fully loaded.
What is the best way to find out what the ceph-osd process is doing all
the time?
A gperf graph is attached.
Greets,
Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: ceph-osd cpu usage
2012-11-15 11:18 ` Alexandre DERUMIER
@ 2012-11-15 12:19 ` Stefan Priebe - Profihost AG
2012-11-15 15:12 ` Mark Nelson
0 siblings, 1 reply; 12+ messages in thread
From: Stefan Priebe - Profihost AG @ 2012-11-15 12:19 UTC (permalink / raw)
To: Alexandre DERUMIER; +Cc: ceph-devel
On 15.11.2012 12:18, Alexandre DERUMIER wrote:
> Is CPU usage the same for read and write?
No, for read it is just around 25%, and I get the "full" rate (limited
by rbd/librbd) of 23,000 IOPS per VM.
> ----- Original message -----
>
> From: "Stefan Priebe - Profihost AG" <s.priebe@profihost.ag>
> To: ceph-devel@vger.kernel.org
> Sent: Thursday, 15 November 2012 11:56:37
> Subject: ceph-osd cpu usage
>
> Hello list,
>
> my main problem right now is that Ceph does not scale for me (with more
> VMs using RBD). It does not scale because ceph-osd is using all of my
> CPU cores (8 cores) all the time with just 4 SSDs, while the SSDs are
> far from being fully loaded.
>
> What is the best way to find out what the ceph-osd process is doing all
> the time?
>
> A gperf graph is attached.
>
> Greets,
> Stefan
>
* Re: ceph-osd cpu usage
2012-11-15 12:19 ` Stefan Priebe - Profihost AG
@ 2012-11-15 15:12 ` Mark Nelson
2012-11-15 16:09 ` Stefan Priebe
2012-11-15 19:44 ` Stefan Priebe
0 siblings, 2 replies; 12+ messages in thread
From: Mark Nelson @ 2012-11-15 15:12 UTC (permalink / raw)
To: Stefan Priebe - Profihost AG; +Cc: Alexandre DERUMIER, ceph-devel
Out of curiosity, does it help much if you disable crc32c calculations?
Use the "nocrc" option in your ceph.conf file. I've had my eye on
crcutil as an alternative to how we do crc32c now.
http://code.google.com/p/crcutil/
Mark
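To illustrate what this option toggles: CRC-32C (Castagnoli) is the checksum Ceph's messenger computes over every message, and in pure software its cost adds up at high message rates. A minimal bit-at-a-time sketch of the polynomial (illustrative only; Ceph's real code is table-driven or uses the SSE4.2 hardware CRC instruction, which is what makes crcutil interesting):

```python
# Illustrative bit-at-a-time CRC-32C (Castagnoli). Real implementations
# use lookup tables or the SSE4.2 crc32 instruction; this sketch only
# shows the per-byte work that disabling the checksum avoids.
def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # 0x82F63B78 is the reflected Castagnoli polynomial.
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the "123456789" test vector.
assert crc32c(b"123456789") == 0xE3069283
```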
On 11/15/2012 06:19 AM, Stefan Priebe - Profihost AG wrote:
> On 15.11.2012 12:18, Alexandre DERUMIER wrote:
>> Is CPU usage the same for read and write?
>
> No, for read it is just around 25%, and I get the "full" rate (limited
> by rbd/librbd) of 23,000 IOPS per VM.
>
>
>> ----- Original message -----
>>
>> From: "Stefan Priebe - Profihost AG" <s.priebe@profihost.ag>
>> To: ceph-devel@vger.kernel.org
>> Sent: Thursday, 15 November 2012 11:56:37
>> Subject: ceph-osd cpu usage
>>
>> Hello list,
>>
>> my main problem right now is that Ceph does not scale for me (with more
>> VMs using RBD). It does not scale because ceph-osd is using all of my
>> CPU cores (8 cores) all the time with just 4 SSDs, while the SSDs are
>> far from being fully loaded.
>>
>> What is the best way to find out what the ceph-osd process is doing all
>> the time?
>>
>> A gperf graph is attached.
>>
>> Greets,
>> Stefan
>>
* Re: ceph-osd cpu usage
2012-11-15 10:56 ceph-osd cpu usage Stefan Priebe - Profihost AG
2012-11-15 11:18 ` Alexandre DERUMIER
@ 2012-11-15 15:14 ` Sage Weil
2012-11-15 15:30 ` Stefan Priebe - Profihost AG
2012-11-15 20:26 ` Stefan Priebe
1 sibling, 2 replies; 12+ messages in thread
From: Sage Weil @ 2012-11-15 15:14 UTC (permalink / raw)
To: Stefan Priebe - Profihost AG; +Cc: ceph-devel
On Thu, 15 Nov 2012, Stefan Priebe - Profihost AG wrote:
> Hello list,
>
> my main problem right now is that Ceph does not scale for me (with more VMs
> using RBD). It does not scale because ceph-osd is using all of my CPU cores
> (8 cores) all the time with just 4 SSDs, while the SSDs are far from being
> fully loaded.
>
> What is the best way to find out what the ceph-osd process is doing all the
> time?
>
> A gperf graph is attached.
Hmm, most significant time seems to be in the allocator and doing
fsetxattr(2) (10%!). Also some path traversal stuff.
Can you try the wip-fd-simple-cache branch, which tries to spend less time
closing and reopening files? I'm curious how much of a difference it will
make for you in both IOPS and CPU utilization.
It is also possible to use leveldb for most attrs. If you set
'filestore xattr use omap = true' it should put most attrs in leveldb.
sage
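As a sketch of where that setting would live, a minimal ceph.conf fragment (the [osd] section placement is an assumption; the thread does not spell it out):

```
[osd]
    filestore xattr use omap = true
```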
* Re: ceph-osd cpu usage
2012-11-15 15:14 ` Sage Weil
@ 2012-11-15 15:30 ` Stefan Priebe - Profihost AG
2012-11-15 20:26 ` Stefan Priebe
1 sibling, 0 replies; 12+ messages in thread
From: Stefan Priebe - Profihost AG @ 2012-11-15 15:30 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
On 15.11.2012 16:14, Sage Weil wrote:
> On Thu, 15 Nov 2012, Stefan Priebe - Profihost AG wrote:
>> Hello list,
>>
>> my main problem right now is that Ceph does not scale for me (with more VMs
>> using RBD). It does not scale because ceph-osd is using all of my CPU cores
>> (8 cores) all the time with just 4 SSDs, while the SSDs are far from being
>> fully loaded.
>>
>> What is the best way to find out what the ceph-osd process is doing all the
>> time?
>>
>> A gperf graph is attached.
>
> Hmm, most significant time seems to be in the allocator and doing
> fsetxattr(2) (10%!). Also some path traversal stuff.
>
> Can you try the wip-fd-simple-cache branch, which tries to spend less time
> closing and reopening files? I'm curious how much of a difference it will
> make for you in both IOPS and CPU utilization.
Will try that this evening.
> It is also possible to use leveldb for most attrs. If you set
> 'filestore xattr use omap = true' it should put most attrs in leveldb.
Do I have to recreate the cephfs?
Stefan
* Re: ceph-osd cpu usage
2012-11-15 15:12 ` Mark Nelson
@ 2012-11-15 16:09 ` Stefan Priebe
2012-11-15 16:52 ` Sage Weil
2012-11-15 19:44 ` Stefan Priebe
1 sibling, 1 reply; 12+ messages in thread
From: Stefan Priebe @ 2012-11-15 16:09 UTC (permalink / raw)
To: Mark Nelson; +Cc: Alexandre DERUMIER, ceph-devel
On 15.11.2012 16:12, Mark Nelson wrote:
> Out of curiosity, does it help much if you disable crc32c calculations?
> Use the "nocrc" option in your ceph.conf file. I've had my eye on
> crcutil as an alternative to how we do crc32c now.
>
> http://code.google.com/p/crcutil/
Will try that. How and where do I set the nocrc option?
Is it:
[global]
nocrc = true
Stefan
* Re: ceph-osd cpu usage
2012-11-15 16:09 ` Stefan Priebe
@ 2012-11-15 16:52 ` Sage Weil
0 siblings, 0 replies; 12+ messages in thread
From: Sage Weil @ 2012-11-15 16:52 UTC (permalink / raw)
To: Stefan Priebe; +Cc: Mark Nelson, Alexandre DERUMIER, ceph-devel
On Thu, 15 Nov 2012, Stefan Priebe wrote:
> On 15.11.2012 16:12, Mark Nelson wrote:
> > Out of curiosity, does it help much if you disable crc32c calculations?
> > Use the "nocrc" option in your ceph.conf file. I've had my eye on
> > crcutil as an alternative to how we do crc32c now.
> >
> > http://code.google.com/p/crcutil/
>
> Will try that. How and where do I set the nocrc option?
>
> Is it:
> [global]
> nocrc = true
ms nocrc = true
sage
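Putting Sage's correction together with Stefan's guess, the fragment would read as follows (a sketch; option names can vary between Ceph versions, so check your release's documentation):

```
[global]
    ms nocrc = true
```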
* Re: ceph-osd cpu usage
2012-11-15 15:12 ` Mark Nelson
2012-11-15 16:09 ` Stefan Priebe
@ 2012-11-15 19:44 ` Stefan Priebe
1 sibling, 0 replies; 12+ messages in thread
From: Stefan Priebe @ 2012-11-15 19:44 UTC (permalink / raw)
To: Mark Nelson; +Cc: Alexandre DERUMIER, ceph-devel
Hi Mark,
On 15.11.2012 16:12, Mark Nelson wrote:
> Out of curiosity, does it help much if you disable crc32c calculations?
> Use the "nocrc" option in your ceph.conf file. I've had my eye on
> crcutil as an alternative to how we do crc32c now.
>
> http://code.google.com/p/crcutil/
>
> Mark
This changes nothing; CPU load on the OSDs doesn't change. It was a bit
tricky ;-) I had to set nocrc on the KVM host as well, otherwise KVM
wasn't starting due to a bad crc.
Greets,
Stefan
* Re: ceph-osd cpu usage
2012-11-15 15:14 ` Sage Weil
2012-11-15 15:30 ` Stefan Priebe - Profihost AG
@ 2012-11-15 20:26 ` Stefan Priebe
2012-11-16 8:41 ` Alexandre DERUMIER
1 sibling, 1 reply; 12+ messages in thread
From: Stefan Priebe @ 2012-11-15 20:26 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
On 15.11.2012 16:14, Sage Weil wrote:
> On Thu, 15 Nov 2012, Stefan Priebe - Profihost AG wrote:
> Hmm, most significant time seems to be in the allocator and doing
> fsetxattr(2) (10%!). Also some path traversal stuff.
Yes, fsetxattr seems to be CPU-hungry.
> Can you try the wip-fd-simple-cache branch, which tries to spend less time
> closing and reopening files? I'm curious how much of a difference it will
> make for you in both IOPS and CPU utilization.
It seems to give me around 1,000 IOPS across 3 VMs.
> It is also possible to use leveldb for most attrs. If you set
> 'filestore xattr use omap = true' it should put most attrs in leveldb.
Tried this, but it raises CPU usage by 20%.
Any other ideas how to reduce ceph-osd CPU usage while doing randwrite?
Randread gives me 60,000 IOPS with 3 VMs; randwrite gives me 25,000 IOPS
with 3 VMs.
Stefan
* Re: ceph-osd cpu usage
2012-11-15 20:26 ` Stefan Priebe
@ 2012-11-16 8:41 ` Alexandre DERUMIER
2012-11-16 9:11 ` Stefan Priebe - Profihost AG
0 siblings, 1 reply; 12+ messages in thread
From: Alexandre DERUMIER @ 2012-11-16 8:41 UTC (permalink / raw)
To: Stefan Priebe; +Cc: ceph-devel, Sage Weil
>> Any other ideas how to reduce ceph-osd CPU usage while doing randwrite?
>>
>> Randread gives me 60,000 IOPS with 3 VMs; randwrite gives me 25,000 IOPS
>> with 3 VMs.
Great to see that read scales!
For randwrite, what is the bottleneck now with 'filestore xattr use omap = true'?
Still CPU?
----- Original message -----
From: "Stefan Priebe" <s.priebe@profihost.ag>
To: "Sage Weil" <sage@inktank.com>
Cc: ceph-devel@vger.kernel.org
Sent: Thursday, 15 November 2012 21:26:06
Subject: Re: ceph-osd cpu usage
On 15.11.2012 16:14, Sage Weil wrote:
> On Thu, 15 Nov 2012, Stefan Priebe - Profihost AG wrote:
> Hmm, most significant time seems to be in the allocator and doing
> fsetxattr(2) (10%!). Also some path traversal stuff.
Yes, fsetxattr seems to be CPU-hungry.
> Can you try the wip-fd-simple-cache branch, which tries to spend less time
> closing and reopening files? I'm curious how much of a difference it will
> make for you in both IOPS and CPU utilization.
It seems to give me around 1,000 IOPS across 3 VMs.
> It is also possible to use leveldb for most attrs. If you set
> 'filestore xattr use omap = true' it should put most attrs in leveldb.
Tried this, but it raises CPU usage by 20%.
Any other ideas how to reduce ceph-osd CPU usage while doing randwrite?
Randread gives me 60,000 IOPS with 3 VMs; randwrite gives me 25,000 IOPS
with 3 VMs.
Stefan
* Re: ceph-osd cpu usage
2012-11-16 8:41 ` Alexandre DERUMIER
@ 2012-11-16 9:11 ` Stefan Priebe - Profihost AG
0 siblings, 0 replies; 12+ messages in thread
From: Stefan Priebe - Profihost AG @ 2012-11-16 9:11 UTC (permalink / raw)
To: Alexandre DERUMIER; +Cc: ceph-devel, Sage Weil
On 16.11.2012 09:41, Alexandre DERUMIER wrote:
>>> Any other ideas how to reduce ceph-osd CPU usage while doing randwrite?
>>>
>>> Randread gives me 60,000 IOPS with 3 VMs; randwrite gives me 25,000 IOPS
>>> with 3 VMs.
>
> Great to see that read scales!
Yes that works fine.
> For randwrite, what is the bottleneck now with 'filestore xattr use omap = true'?
Oh no, the result was still with the default fsetxattr path. 'filestore
xattr use omap' added another 20% load and resulted in much lower IOPS.
It seems I wasn't clear enough about that.
Stefan
Thread overview: 12+ messages
2012-11-15 10:56 ceph-osd cpu usage Stefan Priebe - Profihost AG
2012-11-15 11:18 ` Alexandre DERUMIER
2012-11-15 12:19 ` Stefan Priebe - Profihost AG
2012-11-15 15:12 ` Mark Nelson
2012-11-15 16:09 ` Stefan Priebe
2012-11-15 16:52 ` Sage Weil
2012-11-15 19:44 ` Stefan Priebe
2012-11-15 15:14 ` Sage Weil
2012-11-15 15:30 ` Stefan Priebe - Profihost AG
2012-11-15 20:26 ` Stefan Priebe
2012-11-16 8:41 ` Alexandre DERUMIER
2012-11-16 9:11 ` Stefan Priebe - Profihost AG