* Scaling RBD module
@ 2013-09-17 22:30 Somnath Roy
  2013-09-19  1:10 ` [ceph-users] " Josh Durgin
  0 siblings, 1 reply; 12+ messages in thread
From: Somnath Roy @ 2013-09-17 22:30 UTC (permalink / raw)
  To: Sage Weil
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Anirban Ray,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw

Hi,
I am running Ceph on a 3-node cluster, and each server node runs 10 OSDs, one per disk. I have one admin node, and all the nodes are connected with 2 x 10G networks: one is the cluster network and the other is configured as the public network.

Here is the status of my cluster.

~/fio_test# ceph -s

  cluster b2e0b4db-6342-490e-9c28-0aadf0188023
   health HEALTH_WARN clock skew detected on mon. <server-name-2>, mon. <server-name-3>
   monmap e1: 3 mons at {<server-name-1>=xxx.xxx.xxx.xxx:6789/0, <server-name-2>=xxx.xxx.xxx.xxx:6789/0, <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64, quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
   osdmap e391: 30 osds: 30 up, 30 in
    pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB data, 27912 MB used, 11145 GB / 11172 GB avail
   mdsmap e1: 0/0/1 up


I started with the rados bench command to benchmark the read performance of this cluster on a large pool (~10K PGs) and found that each rados client has a limit: a single client can only drive throughput up to a certain mark. CPU utilization on each server node is around 85-90% idle, and the admin node (where the rados client runs) is around 80-85% idle. I am testing with a 4K object size.
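
A rados bench read test of this kind can be driven with something like the following (the pool name, run length, and thread count here are just illustrative values, not my exact settings):

rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup   # fill the pool with 4K objects and keep them
rados bench -p testpool 60 seq -t 16                          # sequential-read benchmark from a single rados client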

Next, I started running more clients on the admin node, and performance scales until it hits the client CPU limit. The servers still have 30-35% CPU idle. With small object sizes, I must say that the per-OSD CPU utilization of Ceph is not promising!

After this, I started testing the rados block interface with the kernel rbd module from my admin node.
I created 8 images in the pool with around 10K PGs, and I am not able to scale up performance by running fio (either by creating a software RAID or by running against the individual /dev/rbd* devices). For example, when running multiple fio instances (one on /dev/rbd1 and the other on /dev/rbd2), the per-instance performance I get is half of what I get when running a single instance. Here is my fio job script.

[random-reads]
ioengine=libaio
iodepth=32
filename=/dev/rbd1
rw=randread
bs=4k
direct=1
size=2G
numjobs=64
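
To exercise two devices at once I simply launch one fio process per device, roughly like this (the job files are copies of the one above, differing only in the filename= line):

fio random-reads-rbd1.fio &   # filename=/dev/rbd1
fio random-reads-rbd2.fio &   # filename=/dev/rbd2
wait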

Let me know if I am following the proper procedure or not.

But if my understanding is correct, the kernel rbd module acts as a single client to the cluster, and on one admin node I can run only one such kernel client instance.
If so, I am limited by the client bottleneck I described earlier. Server-side CPU utilization is around 85-90% idle, so it is clear the client is not driving the cluster hard enough.

My question is: is there any way to hit the cluster with more clients from a single box while testing the rbd module?

I would appreciate any help with this.

Thanks & Regards
Somnath




^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [ceph-users] Scaling RBD module
  2013-09-17 22:30 Scaling RBD module Somnath Roy
@ 2013-09-19  1:10 ` Josh Durgin
  2013-09-19 19:04   ` Somnath Roy
  0 siblings, 1 reply; 12+ messages in thread
From: Josh Durgin @ 2013-09-19  1:10 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, ceph-devel, Anirban Ray, ceph-users

On 09/17/2013 03:30 PM, Somnath Roy wrote:
> Hi,
> I am running Ceph on a 3 node cluster and each of my server node is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 X 10G network. One network is for cluster and other one configured as public network.
>
> Here is the status of my cluster.
>
> ~/fio_test# ceph -s
>
>    cluster b2e0b4db-6342-490e-9c28-0aadf0188023
>     health HEALTH_WARN clock skew detected on mon. <server-name-2>, mon. <server-name-3>
>     monmap e1: 3 mons at {<server-name-1>=xxx.xxx.xxx.xxx:6789/0, <server-name-2>=xxx.xxx.xxx.xxx:6789/0, <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64, quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
>     osdmap e391: 30 osds: 30 up, 30 in
>      pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB data, 27912 MB used, 11145 GB / 11172 GB avail
>     mdsmap e1: 0/0/1 up
>
>
> I started with rados bench command to benchmark the read performance of this Cluster on a large pool (~10K PGs) and found that each rados client has a limitation. Each client can only drive up to a certain mark. Each server  node cpu utilization shows it is  around 85-90% idle and the admin node (from where rados client is running) is around ~80-85% idle. I am trying with 4K object size.

Note that rados bench with 4k objects is different from rbd with
4k-sized I/Os - rados bench sends each request to a new object,
while rbd objects are 4M by default.

> Now, I started running more clients on the admin node and the performance is scaling till it hits the client cpu limit. Server still has the cpu of 30-35% idle. With small object size I must say that the ceph per osd cpu utilization is not promising!
>
> After this, I started testing the rados block interface with kernel rbd module from my admin node.
> I have created 8 images mapped on the pool having around 10K PGs and I am not able to scale up the performance by running fio (either by creating a software raid or running on individual /dev/rbd* instances). For example, running multiple fio instances (one in /dev/rbd1 and the other in /dev/rbd2)  the performance I am getting is half of what I am getting if running one instance. Here is my fio job script.
>
> [random-reads]
> ioengine=libaio
> iodepth=32
> filename=/dev/rbd1
> rw=randread
> bs=4k
> direct=1
> size=2G
> numjobs=64
>
> Let me know if I am following the proper procedure or not.
>
> But, If my understanding is correct, kernel rbd module is acting as a client to the cluster and in one admin node I can run only one of such kernel instance.
> If so, I am then limited to the client bottleneck that I stated earlier. The cpu utilization of the server side is around 85-90% idle, so, it is clear that client is not driving.
>
> My question is, is there any way to hit the cluster  with more client from a single box while testing the rbd module ?

You can run multiple librbd instances easily (for example with
multiple runs of the rbd bench-write command).
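
For instance, something like this runs two independent librbd clients in parallel (pool and image names are placeholders):

rbd bench-write image1 --pool testpool --io-size 4096 --io-threads 16 &
rbd bench-write image2 --pool testpool --io-size 4096 --io-threads 16 &
wait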

The kernel rbd driver uses the same rados client instance for multiple
block devices by default. There's an option (noshare) to use a new
rados client instance for a newly mapped device, but it's not exposed
by the rbd cli. You need to use the sysfs interface that 'rbd map' uses
instead.

Once you've used rbd map once on a machine, the kernel will already
have the auth key stored, and you can use:

echo '1.2.3.4:6789 name=admin,key=client.admin,noshare poolname 
imagename' > /sys/bus/rbd/add

Where 1.2.3.4:6789 is the address of a monitor, and you're connecting
as client.admin.

You can use 'rbd unmap' as usual.
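For example, 'rbd unmap /dev/rbd1' will tear down the device regardless of
whether it was mapped with the CLI or through the sysfs interface above.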

Josh

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [ceph-users] Scaling RBD module
  2013-09-19  1:10 ` [ceph-users] " Josh Durgin
@ 2013-09-19 19:04   ` Somnath Roy
  2013-09-19 19:23     ` Josh Durgin
  0 siblings, 1 reply; 12+ messages in thread
From: Somnath Roy @ 2013-09-19 19:04 UTC (permalink / raw)
  To: Josh Durgin; +Cc: Sage Weil, ceph-devel, Anirban Ray, ceph-users

Hi Josh,
Thanks for the information. I am trying to add the following, but I am hitting a permission issue.

root@emsclient:/etc# echo '<mon-1>:6789,<mon-2>:6789,<mon-3>:6789 name=admin,key=client.admin,noshare test_rbd ceph_block_test' > /sys/bus/rbd/add
-bash: echo: write error: Operation not permitted

Here are the contents of the rbd directory:

root@emsclient:/sys/bus/rbd# ll
total 0
drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
--w-------  1 root root 4096 Sep 19 11:59 add
drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
-rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
--w-------  1 root root 4096 Sep 19 12:03 drivers_probe
--w-------  1 root root 4096 Sep 19 12:03 remove
--w-------  1 root root 4096 Sep 19 11:59 uevent


I checked: even when I am logged in as root, I can't write anything under /sys.

Here is the Ubuntu version I am using..

root@emsclient:/etc# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 13.04
Release:        13.04
Codename:       raring

Here is the mount information....

root@emsclient:/etc# mount
/dev/mapper/emsclient--vg-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
/dev/sda1 on /boot type ext2 (rw)
/dev/mapper/emsclient--vg-home on /home type ext4 (rw)


Any idea what went wrong here?

Thanks & Regards
Somnath

-----Original Message-----
From: Josh Durgin [mailto:josh.durgin@inktank.com]
Sent: Wednesday, September 18, 2013 6:10 PM
To: Somnath Roy
Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Scaling RBD module

On 09/17/2013 03:30 PM, Somnath Roy wrote:
> Hi,
> I am running Ceph on a 3 node cluster and each of my server node is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 X 10G network. One network is for cluster and other one configured as public network.
>
> Here is the status of my cluster.
>
> ~/fio_test# ceph -s
>
>    cluster b2e0b4db-6342-490e-9c28-0aadf0188023
>     health HEALTH_WARN clock skew detected on mon. <server-name-2>, mon. <server-name-3>
>     monmap e1: 3 mons at {<server-name-1>=xxx.xxx.xxx.xxx:6789/0, <server-name-2>=xxx.xxx.xxx.xxx:6789/0, <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64, quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
>     osdmap e391: 30 osds: 30 up, 30 in
>      pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB data, 27912 MB used, 11145 GB / 11172 GB avail
>     mdsmap e1: 0/0/1 up
>
>
> I started with rados bench command to benchmark the read performance of this Cluster on a large pool (~10K PGs) and found that each rados client has a limitation. Each client can only drive up to a certain mark. Each server  node cpu utilization shows it is  around 85-90% idle and the admin node (from where rados client is running) is around ~80-85% idle. I am trying with 4K object size.

Note that rados bench with 4k objects is different from rbd with 4k-sized I/Os - rados bench sends each request to a new object, while rbd objects are 4M by default.

> Now, I started running more clients on the admin node and the performance is scaling till it hits the client cpu limit. Server still has the cpu of 30-35% idle. With small object size I must say that the ceph per osd cpu utilization is not promising!
>
> After this, I started testing the rados block interface with kernel rbd module from my admin node.
> I have created 8 images mapped on the pool having around 10K PGs and I am not able to scale up the performance by running fio (either by creating a software raid or running on individual /dev/rbd* instances). For example, running multiple fio instances (one in /dev/rbd1 and the other in /dev/rbd2)  the performance I am getting is half of what I am getting if running one instance. Here is my fio job script.
>
> [random-reads]
> ioengine=libaio
> iodepth=32
> filename=/dev/rbd1
> rw=randread
> bs=4k
> direct=1
> size=2G
> numjobs=64
>
> Let me know if I am following the proper procedure or not.
>
> But, If my understanding is correct, kernel rbd module is acting as a client to the cluster and in one admin node I can run only one of such kernel instance.
> If so, I am then limited to the client bottleneck that I stated earlier. The cpu utilization of the server side is around 85-90% idle, so, it is clear that client is not driving.
>
> My question is, is there any way to hit the cluster  with more client from a single box while testing the rbd module ?

You can run multiple librbd instances easily (for example with multiple runs of the rbd bench-write command).

The kernel rbd driver uses the same rados client instance for multiple block devices by default. There's an option (noshare) to use a new rados client instance for a newly mapped device, but it's not exposed by the rbd cli. You need to use the sysfs interface that 'rbd map' uses instead.

Once you've used rbd map once on a machine, the kernel will already have the auth key stored, and you can use:

echo '1.2.3.4:6789 name=admin,key=client.admin,noshare poolname imagename' > /sys/bus/rbd/add

Where 1.2.3.4:6789 is the address of a monitor, and you're connecting as client.admin.

You can use 'rbd unmap' as usual.

Josh





^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [ceph-users] Scaling RBD module
  2013-09-19 19:04   ` Somnath Roy
@ 2013-09-19 19:23     ` Josh Durgin
  2013-09-20  0:03       ` Somnath Roy
  0 siblings, 1 reply; 12+ messages in thread
From: Josh Durgin @ 2013-09-19 19:23 UTC (permalink / raw)
  To: Somnath Roy; +Cc: Sage Weil, ceph-devel, Anirban Ray, ceph-users

On 09/19/2013 12:04 PM, Somnath Roy wrote:
> Hi Josh,
> Thanks for the information. I am trying to add the following but hitting some permission issue.
>
> root@emsclient:/etc# echo <mon-1>:6789,<mon-2>:6789,<mon-3>:6789 name=admin,key=client.admin,noshare test_rbd ceph_block_test' > /sys/bus/rbd/add
> -bash: echo: write error: Operation not permitted

If you check dmesg, it will probably show an error trying to
authenticate to the cluster.

Instead of key=client.admin, you can pass the base64 secret value as
shown in 'ceph auth list' with the secret=XXXXXXXXXXXXXXXXXXXXX option.
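
You can get that base64 key with 'ceph auth get-key client.admin' (or copy it from 'ceph auth list'). The earlier command would then look roughly like this, with the secret value shown as a placeholder:

echo '<mon-1>:6789,<mon-2>:6789,<mon-3>:6789 name=admin,secret=AQBxxxxxxxxxxxxxxxxxxxxxxx==,noshare test_rbd ceph_block_test' > /sys/bus/rbd/add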

BTW, there's a ticket for adding the noshare option to rbd map so using
the sysfs interface like this is never necessary:

http://tracker.ceph.com/issues/6264

Josh

> Here is the contents of rbd directory..
>
> root@emsclient:/sys/bus/rbd# ll
> total 0
> drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
> drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
> --w-------  1 root root 4096 Sep 19 11:59 add
> drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
> drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
> -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
> --w-------  1 root root 4096 Sep 19 12:03 drivers_probe
> --w-------  1 root root 4096 Sep 19 12:03 remove
> --w-------  1 root root 4096 Sep 19 11:59 uevent
>
>
> I checked even if I am logged in as root , I can't write anything on /sys.
>
> Here is the Ubuntu version I am using..
>
> root@emsclient:/etc# lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu 13.04
> Release:        13.04
> Codename:       raring
>
> Here is the mount information....
>
> root@emsclient:/etc# mount
> /dev/mapper/emsclient--vg-root on / type ext4 (rw,errors=remount-ro)
> proc on /proc type proc (rw,noexec,nosuid,nodev)
> sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
> none on /sys/fs/cgroup type tmpfs (rw)
> none on /sys/fs/fuse/connections type fusectl (rw)
> none on /sys/kernel/debug type debugfs (rw)
> none on /sys/kernel/security type securityfs (rw)
> udev on /dev type devtmpfs (rw,mode=0755)
> devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
> tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
> none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
> none on /run/shm type tmpfs (rw,nosuid,nodev)
> none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
> /dev/sda1 on /boot type ext2 (rw)
> /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
>
>
> Any idea what went wrong here ?
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 18, 2013 6:10 PM
> To: Somnath Roy
> Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray; ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Scaling RBD module
>
> On 09/17/2013 03:30 PM, Somnath Roy wrote:
>> Hi,
>> I am running Ceph on a 3 node cluster and each of my server node is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 X 10G network. One network is for cluster and other one configured as public network.
>>
>> Here is the status of my cluster.
>>
>> ~/fio_test# ceph -s
>>
>>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023
>>      health HEALTH_WARN clock skew detected on mon. <server-name-2>, mon. <server-name-3>
>>      monmap e1: 3 mons at {<server-name-1>=xxx.xxx.xxx.xxx:6789/0, <server-name-2>=xxx.xxx.xxx.xxx:6789/0, <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64, quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
>>      osdmap e391: 30 osds: 30 up, 30 in
>>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB data, 27912 MB used, 11145 GB / 11172 GB avail
>>      mdsmap e1: 0/0/1 up
>>
>>
>> I started with rados bench command to benchmark the read performance of this Cluster on a large pool (~10K PGs) and found that each rados client has a limitation. Each client can only drive up to a certain mark. Each server  node cpu utilization shows it is  around 85-90% idle and the admin node (from where rados client is running) is around ~80-85% idle. I am trying with 4K object size.
>
> Note that rados bench with 4k objects is different from rbd with 4k-sized I/Os - rados bench sends each request to a new object, while rbd objects are 4M by default.
>
>> Now, I started running more clients on the admin node and the performance is scaling till it hits the client cpu limit. Server still has the cpu of 30-35% idle. With small object size I must say that the ceph per osd cpu utilization is not promising!
>>
>> After this, I started testing the rados block interface with kernel rbd module from my admin node.
>> I have created 8 images mapped on the pool having around 10K PGs and I am not able to scale up the performance by running fio (either by creating a software raid or running on individual /dev/rbd* instances). For example, running multiple fio instances (one in /dev/rbd1 and the other in /dev/rbd2)  the performance I am getting is half of what I am getting if running one instance. Here is my fio job script.
>>
>> [random-reads]
>> ioengine=libaio
>> iodepth=32
>> filename=/dev/rbd1
>> rw=randread
>> bs=4k
>> direct=1
>> size=2G
>> numjobs=64
>>
>> Let me know if I am following the proper procedure or not.
>>
>> But, If my understanding is correct, kernel rbd module is acting as a client to the cluster and in one admin node I can run only one of such kernel instance.
>> If so, I am then limited to the client bottleneck that I stated earlier. The cpu utilization of the server side is around 85-90% idle, so, it is clear that client is not driving.
>>
>> My question is, is there any way to hit the cluster  with more client from a single box while testing the rbd module ?
>
> You can run multiple librbd instances easily (for example with multiple runs of the rbd bench-write command).
>
> The kernel rbd driver uses the same rados client instance for multiple block devices by default. There's an option (noshare) to use a new rados client instance for a newly mapped device, but it's not exposed by the rbd cli. You need to use the sysfs interface that 'rbd map' uses instead.
>
> Once you've used rbd map once on a machine, the kernel will already have the auth key stored, and you can use:
>
> echo '1.2.3.4:6789 name=admin,key=client.admin,noshare poolname imagename' > /sys/bus/rbd/add
>
> Where 1.2.3.4:6789 is the address of a monitor, and you're connecting as client.admin.
>
> You can use 'rbd unmap' as usual.
>
> Josh
>
>


^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [ceph-users] Scaling RBD module
  2013-09-19 19:23     ` Josh Durgin
@ 2013-09-20  0:03       ` Somnath Roy
       [not found]         ` <755F6B91B3BE364F9BCA11EA3F9E0C6F0FC4A738-cXZ6iGhjG0il5HHZYNR2WTJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Somnath Roy @ 2013-09-20  0:03 UTC (permalink / raw)
  To: Josh Durgin; +Cc: Sage Weil, ceph-devel, Anirban Ray, ceph-users

Thanks Josh !
I was able to add the noshare option to the image mapping successfully. Looking at the dmesg output, I found it was indeed the secret key problem. Block performance is scaling now.

Regards
Somnath

-----Original Message-----
From: ceph-devel-owner@vger.kernel.org [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Josh Durgin
Sent: Thursday, September 19, 2013 12:24 PM
To: Somnath Roy
Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Scaling RBD module

On 09/19/2013 12:04 PM, Somnath Roy wrote:
> Hi Josh,
> Thanks for the information. I am trying to add the following but hitting some permission issue.
>
> root@emsclient:/etc# echo <mon-1>:6789,<mon-2>:6789,<mon-3>:6789 
> name=admin,key=client.admin,noshare test_rbd ceph_block_test' > 
> /sys/bus/rbd/add
> -bash: echo: write error: Operation not permitted

If you check dmesg, it will probably show an error trying to authenticate to the cluster.

Instead of key=client.admin, you can pass the base64 secret value as shown in 'ceph auth list' with the secret=XXXXXXXXXXXXXXXXXXXXX option.

BTW, there's a ticket for adding the noshare option to rbd map so using the sysfs interface like this is never necessary:

http://tracker.ceph.com/issues/6264

Josh

> Here is the contents of rbd directory..
>
> root@emsclient:/sys/bus/rbd# ll
> total 0
> drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
> drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
> --w-------  1 root root 4096 Sep 19 11:59 add
> drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
> drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
> -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
> --w-------  1 root root 4096 Sep 19 12:03 drivers_probe
> --w-------  1 root root 4096 Sep 19 12:03 remove
> --w-------  1 root root 4096 Sep 19 11:59 uevent
>
>
> I checked even if I am logged in as root , I can't write anything on /sys.
>
> Here is the Ubuntu version I am using..
>
> root@emsclient:/etc# lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description:    Ubuntu 13.04
> Release:        13.04
> Codename:       raring
>
> Here is the mount information....
>
> root@emsclient:/etc# mount
> /dev/mapper/emsclient--vg-root on / type ext4 (rw,errors=remount-ro) 
> proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type 
> sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type tmpfs (rw) 
> none on /sys/fs/fuse/connections type fusectl (rw) none on 
> /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type 
> securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on 
> /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
> tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
> none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
> none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type 
> tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
> /dev/sda1 on /boot type ext2 (rw)
> /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
>
>
> Any idea what went wrong here ?
>
> Thanks & Regards
> Somnath
>
> -----Original Message-----
> From: Josh Durgin [mailto:josh.durgin@inktank.com]
> Sent: Wednesday, September 18, 2013 6:10 PM
> To: Somnath Roy
> Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray; 
> ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Scaling RBD module
>
> On 09/17/2013 03:30 PM, Somnath Roy wrote:
>> Hi,
>> I am running Ceph on a 3 node cluster and each of my server node is running 10 OSDs, one for each disk. I have one admin node and all the nodes are connected with 2 X 10G network. One network is for cluster and other one configured as public network.
>>
>> Here is the status of my cluster.
>>
>> ~/fio_test# ceph -s
>>
>>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023
>>      health HEALTH_WARN clock skew detected on mon. <server-name-2>, mon. <server-name-3>
>>      monmap e1: 3 mons at {<server-name-1>=xxx.xxx.xxx.xxx:6789/0, <server-name-2>=xxx.xxx.xxx.xxx:6789/0, <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64, quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
>>      osdmap e391: 30 osds: 30 up, 30 in
>>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB data, 27912 MB used, 11145 GB / 11172 GB avail
>>      mdsmap e1: 0/0/1 up
>>
>>
>> I started with rados bench command to benchmark the read performance of this Cluster on a large pool (~10K PGs) and found that each rados client has a limitation. Each client can only drive up to a certain mark. Each server  node cpu utilization shows it is  around 85-90% idle and the admin node (from where rados client is running) is around ~80-85% idle. I am trying with 4K object size.
>
> Note that rados bench with 4k objects is different from rbd with 4k-sized I/Os - rados bench sends each request to a new object, while rbd objects are 4M by default.
>
>> Now, I started running more clients on the admin node and the performance is scaling till it hits the client cpu limit. Server still has the cpu of 30-35% idle. With small object size I must say that the ceph per osd cpu utilization is not promising!
>>
>> After this, I started testing the rados block interface with kernel rbd module from my admin node.
>> I have created 8 images mapped on the pool having around 10K PGs and I am not able to scale up the performance by running fio (either by creating a software raid or running on individual /dev/rbd* instances). For example, running multiple fio instances (one in /dev/rbd1 and the other in /dev/rbd2)  the performance I am getting is half of what I am getting if running one instance. Here is my fio job script.
>>
>> [random-reads]
>> ioengine=libaio
>> iodepth=32
>> filename=/dev/rbd1
>> rw=randread
>> bs=4k
>> direct=1
>> size=2G
>> numjobs=64
>>
>> Let me know if I am following the proper procedure or not.
>>
>> But, If my understanding is correct, kernel rbd module is acting as a client to the cluster and in one admin node I can run only one of such kernel instance.
>> If so, I am then limited to the client bottleneck that I stated earlier. The cpu utilization of the server side is around 85-90% idle, so, it is clear that client is not driving.
>>
>> My question is, is there any way to hit the cluster  with more client from a single box while testing the rbd module ?
>
> You can run multiple librbd instances easily (for example with multiple runs of the rbd bench-write command).
>
> The kernel rbd driver uses the same rados client instance for multiple block devices by default. There's an option (noshare) to use a new rados client instance for a newly mapped device, but it's not exposed by the rbd cli. You need to use the sysfs interface that 'rbd map' uses instead.
>
> Once you've used rbd map once on a machine, the kernel will already have the auth key stored, and you can use:
>
> echo '1.2.3.4:6789 name=admin,key=client.admin,noshare poolname 
> imagename' > /sys/bus/rbd/add
>
> Where 1.2.3.4:6789 is the address of a monitor, and you're connecting as client.admin.
>
> You can use 'rbd unmap' as usual.
>
> Josh
>
>

--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in the body of a message to majordomo@vger.kernel.org More majordomo info at  http://vger.kernel.org/majordomo-info.html



^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Scaling RBD module
       [not found]         ` <755F6B91B3BE364F9BCA11EA3F9E0C6F0FC4A738-cXZ6iGhjG0il5HHZYNR2WTJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
@ 2013-09-24 21:09           ` Travis Rhoden
       [not found]             ` <CACkq2mrfO+eFCYaEdoTQpJ2tOoDyVCkedSMAAztnQVYPBsv7gw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Travis Rhoden @ 2013-09-24 21:09 UTC (permalink / raw)
  To: Josh Durgin
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Anirban Ray,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw



This "noshare" option may have just helped me a ton -- I sure wish I would
have asked similar questions sooner, because I have seen the same failure
to scale.  =)

One question -- when using the "noshare" option (or really, even without
it), are there any practical limits on the number of RBDs that can be
mapped?  I have servers with ~100 RBDs on them each, and am wondering,
if I switch them all over to using "noshare", whether anything is going
to blow up, use a ton more memory, etc.  Even without noshare, are there
any known limits to how many RBDs can be mapped?

Thanks!

 - Travis


On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>wrote:

> Thanks Josh !
> I am able to successfully add this noshare option in the image mapping
> now. Looking at dmesg output, I found that was indeed the secret key
> problem. Block performance is scaling now.
>
> Regards
> Somnath
>
> -----Original Message-----
> From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org [mailto:
> ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Josh Durgin
> Sent: Thursday, September 19, 2013 12:24 PM
> To: Somnath Roy
> Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> Subject: Re: [ceph-users] Scaling RBD module
>
> On 09/19/2013 12:04 PM, Somnath Roy wrote:
> > Hi Josh,
> > Thanks for the information. I am trying to add the following but hitting
> some permission issue.
> >
> > root@emsclient:/etc# echo <mon-1>:6789,<mon-2>:6789,<mon-3>:6789
> > name=admin,key=client.admin,noshare test_rbd ceph_block_test' >
> > /sys/bus/rbd/add
> > -bash: echo: write error: Operation not permitted
>
> If you check dmesg, it will probably show an error trying to authenticate
> to the cluster.
>
> Instead of key=client.admin, you can pass the base64 secret value as shown
> in 'ceph auth list' with the secret=XXXXXXXXXXXXXXXXXXXXX option.
>
> BTW, there's a ticket for adding the noshare option to rbd map so using
> the sysfs interface like this is never necessary:
>
> http://tracker.ceph.com/issues/6264
>
> Josh
>
> > Here is the contents of rbd directory..
> >
> > root@emsclient:/sys/bus/rbd# ll
> > total 0
> > drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
> > drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
> > --w-------  1 root root 4096 Sep 19 11:59 add
> > drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
> > drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
> > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
> > --w-------  1 root root 4096 Sep 19 12:03 drivers_probe
> > --w-------  1 root root 4096 Sep 19 12:03 remove
> > --w-------  1 root root 4096 Sep 19 11:59 uevent
> >
> >
> > I checked even if I am logged in as root , I can't write anything on
> /sys.
> >
> > Here is the Ubuntu version I am using..
> >
> > root@emsclient:/etc# lsb_release -a
> > No LSB modules are available.
> > Distributor ID: Ubuntu
> > Description:    Ubuntu 13.04
> > Release:        13.04
> > Codename:       raring
> >
> > Here is the mount information....
> >
> > root@emsclient:/etc# mount
> > /dev/mapper/emsclient--vg-root on / type ext4 (rw,errors=remount-ro)
> > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type
> > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type tmpfs (rw)
> > none on /sys/fs/fuse/connections type fusectl (rw) none on
> > /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type
> > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on
> > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
> > tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
> > none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
> > none on /run/shm type tmpfs (rw,nosuid,nodev) none on /run/user type
> > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
> > /dev/sda1 on /boot type ext2 (rw)
> > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
> >
> >
> > Any idea what went wrong here ?
> >
> > Thanks & Regards
> > Somnath
> >
> > -----Original Message-----
> > From: Josh Durgin [mailto:josh.durgin-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
> > Sent: Wednesday, September 18, 2013 6:10 PM
> > To: Somnath Roy
> > Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
> > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> > Subject: Re: [ceph-users] Scaling RBD module
> >
> > On 09/17/2013 03:30 PM, Somnath Roy wrote:
> >> Hi,
> >> I am running Ceph on a 3 node cluster and each of my server node is
> running 10 OSDs, one for each disk. I have one admin node and all the nodes
> are connected with 2 X 10G network. One network is for cluster and other
> one configured as public network.
> >>
> >> Here is the status of my cluster.
> >>
> >> ~/fio_test# ceph -s
> >>
> >>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023
> >>      health HEALTH_WARN clock skew detected on mon. <server-name-2>,
> mon. <server-name-3>
> >>      monmap e1: 3 mons at {<server-name-1>=xxx.xxx.xxx.xxx:6789/0,
> <server-name-2>=xxx.xxx.xxx.xxx:6789/0,
> <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64, quorum 0,1,2
> <server-name-1>,<server-name-2>,<server-name-3>
> >>      osdmap e391: 30 osds: 30 up, 30 in
> >>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB data, 27912
> MB used, 11145 GB / 11172 GB avail
> >>      mdsmap e1: 0/0/1 up
> >>
> >>
> >> I started with rados bench command to benchmark the read performance of
> this Cluster on a large pool (~10K PGs) and found that each rados client
> has a limitation. Each client can only drive up to a certain mark. Each
> server  node cpu utilization shows it is  around 85-90% idle and the admin
> node (from where rados client is running) is around ~80-85% idle. I am
> trying with 4K object size.
> >
> > Note that rados bench with 4k objects is different from rbd with
> 4k-sized I/Os - rados bench sends each request to a new object, while rbd
> objects are 4M by default.
> >
> >> Now, I started running more clients on the admin node and the
> performance is scaling till it hits the client cpu limit. Server still has
> the cpu of 30-35% idle. With small object size I must say that the ceph per
> osd cpu utilization is not promising!
> >>
> >> After this, I started testing the rados block interface with kernel rbd
> module from my admin node.
> >> I have created 8 images mapped on the pool having around 10K PGs and I
> am not able to scale up the performance by running fio (either by creating
> a software raid or running on individual /dev/rbd* instances). For example,
> running multiple fio instances (one in /dev/rbd1 and the other in
> /dev/rbd2)  the performance I am getting is half of what I am getting if
> running one instance. Here is my fio job script.
> >>
> >> [random-reads]
> >> ioengine=libaio
> >> iodepth=32
> >> filename=/dev/rbd1
> >> rw=randread
> >> bs=4k
> >> direct=1
> >> size=2G
> >> numjobs=64
> >>
> >> Let me know if I am following the proper procedure or not.
> >>
> >> But, If my understanding is correct, kernel rbd module is acting as a
> client to the cluster and in one admin node I can run only one of such
> kernel instance.
> >> If so, I am then limited to the client bottleneck that I stated
> earlier. The cpu utilization of the server side is around 85-90% idle, so,
> it is clear that client is not driving.
> >>
> >> My question is, is there any way to hit the cluster  with more client
> from a single box while testing the rbd module ?
> >
> > You can run multiple librbd instances easily (for example with multiple
> runs of the rbd bench-write command).
> >
> > The kernel rbd driver uses the same rados client instance for multiple
> block devices by default. There's an option (noshare) to use a new rados
> client instance for a newly mapped device, but it's not exposed by the rbd
> cli. You need to use the sysfs interface that 'rbd map' uses instead.
> >
> > Once you've used rbd map once on a machine, the kernel will already have
> the auth key stored, and you can use:
> >
> > echo '1.2.3.4:6789 name=admin,key=client.admin,noshare poolname
> > imagename' > /sys/bus/rbd/add
> >
> > Where 1.2.3.4:6789 is the address of a monitor, and you're connecting
> as client.admin.
> >
> > You can use 'rbd unmap' as usual.
> >
> > Josh
> >
> >
> >
> >
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo info at
>  http://vger.kernel.org/majordomo-info.html
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Scaling RBD module
       [not found]             ` <CACkq2mrfO+eFCYaEdoTQpJ2tOoDyVCkedSMAAztnQVYPBsv7gw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-09-24 21:16               ` Sage Weil
       [not found]                 ` <alpine.DEB.2.00.1309241413280.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
  2013-09-24 22:23                 ` Somnath Roy
  0 siblings, 2 replies; 12+ messages in thread
From: Sage Weil @ 2013-09-24 21:16 UTC (permalink / raw)
  To: Travis Rhoden
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Anirban Ray,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw


On Tue, 24 Sep 2013, Travis Rhoden wrote:
> This "noshare" option may have just helped me a ton -- I sure wish I would
> have asked similar questions sooner, because I have seen the same failure to
> scale.  =)
> 
> One question -- when using the "noshare" option (or really, even without it)
> are there any practical limits on the number of RBDs that can be mounted?  I
> have servers with ~100 RBDs on them each, and am wondering if I switch them
> all over to using "noshare" if anything is going to blow up, use a ton more
> memory, etc.  Even without noshare, are there any known limits to how many
> RBDs can be mapped?

With noshare each mapped image will appear as a separate client instance,
which means it will have its own session with the monitors and its own
TCP connections to the OSDs.  It may be a viable workaround for now, but
in general I would not recommend it.

I'm very curious what the scaling issue is with the shared client.  Do you 
have a working perf that can capture callgraph information on this 
machine?
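
Something like this, run while the fio workload is active, would be a
good starting point (the 30-second window is arbitrary):

perf record -a -g -- sleep 30   # system-wide samples with call graphs
perf report                     # see where the client spends its time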

sage

> 
> Thanks!
> 
>  - Travis
> 
> 
> On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
> wrote:
>       Thanks Josh !
>       I am able to successfully add this noshare option in the image
>       mapping now. Looking at dmesg output, I found that was indeed
>       the secret key problem. Block performance is scaling now.
> 
>       Regards
>       Somnath
> 
>       -----Original Message-----
>       From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>       [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Josh
>       Durgin
>       Sent: Thursday, September 19, 2013 12:24 PM
>       To: Somnath Roy
>       Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
>       ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>       Subject: Re: [ceph-users] Scaling RBD module
> 
>       On 09/19/2013 12:04 PM, Somnath Roy wrote:
>       > Hi Josh,
>       > Thanks for the information. I am trying to add the following
>       but hitting some permission issue.
>       >
>       > root@emsclient:/etc# echo
>       <mon-1>:6789,<mon-2>:6789,<mon-3>:6789
>       > name=admin,key=client.admin,noshare test_rbd ceph_block_test'
>       >
>       > /sys/bus/rbd/add
>       > -bash: echo: write error: Operation not permitted
> 
>       If you check dmesg, it will probably show an error trying to
>       authenticate to the cluster.
> 
>       Instead of key=client.admin, you can pass the base64 secret
>       value as shown in 'ceph auth list' with the
>       secret=XXXXXXXXXXXXXXXXXXXXX option.
> 
>       BTW, there's a ticket for adding the noshare option to rbd map
>       so using the sysfs interface like this is never necessary:
> 
>       http://tracker.ceph.com/issues/6264
> 
>       Josh
> 
>       > Here is the contents of rbd directory..
>       >
>       > root@emsclient:/sys/bus/rbd# ll
>       > total 0
>       > drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
>       > drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
>       > --w-------  1 root root 4096 Sep 19 11:59 add
>       > drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
>       > drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
>       > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
>       > --w-------  1 root root 4096 Sep 19 12:03 drivers_probe
>       > --w-------  1 root root 4096 Sep 19 12:03 remove
>       > --w-------  1 root root 4096 Sep 19 11:59 uevent
>       >
>       >
>       > I checked even if I am logged in as root , I can't write
>       anything on /sys.
>       >
>       > Here is the Ubuntu version I am using..
>       >
>       > root@emsclient:/etc# lsb_release -a
>       > No LSB modules are available.
>       > Distributor ID: Ubuntu
>       > Description:    Ubuntu 13.04
>       > Release:        13.04
>       > Codename:       raring
>       >
>       > Here is the mount information....
>       >
>       > root@emsclient:/etc# mount
>       > /dev/mapper/emsclient--vg-root on / type ext4
>       (rw,errors=remount-ro)
>       > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys
>       type
>       > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type
>       tmpfs (rw)
>       > none on /sys/fs/fuse/connections type fusectl (rw) none on
>       > /sys/kernel/debug type debugfs (rw) none on
>       /sys/kernel/security type
>       > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755)
>       devpts on
>       > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
>       > tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
>       > none on /run/lock type tmpfs
>       (rw,noexec,nosuid,nodev,size=5242880)
>       > none on /run/shm type tmpfs (rw,nosuid,nodev) none on
>       /run/user type
>       > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
>       > /dev/sda1 on /boot type ext2 (rw)
>       > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
>       >
>       >
>       > Any idea what went wrong here ?
>       >
>       > Thanks & Regards
>       > Somnath
>       >
>       > -----Original Message-----
>       > From: Josh Durgin [mailto:josh.durgin-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
>       > Sent: Wednesday, September 18, 2013 6:10 PM
>       > To: Somnath Roy
>       > Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
>       > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>       > Subject: Re: [ceph-users] Scaling RBD module
>       >
>       > On 09/17/2013 03:30 PM, Somnath Roy wrote:
>       >> Hi,
>       >> I am running Ceph on a 3 node cluster and each of my server
>       node is running 10 OSDs, one for each disk. I have one admin
>       node and all the nodes are connected with 2 X 10G network. One
>       network is for cluster and other one configured as public
>       network.
>       >>
>       >> Here is the status of my cluster.
>       >>
>       >> ~/fio_test# ceph -s
>       >>
>       >>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023
>       >>      health HEALTH_WARN clock skew detected on mon.
>       <server-name-2>, mon. <server-name-3>
>       >>      monmap e1: 3 mons at
>       {<server-name-1>=xxx.xxx.xxx.xxx:6789/0,
>       <server-name-2>=xxx.xxx.xxx.xxx:6789/0,
>       <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64,
>       quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
>       >>      osdmap e391: 30 osds: 30 up, 30 in
>       >>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB
>       data, 27912 MB used, 11145 GB / 11172 GB avail
>       >>      mdsmap e1: 0/0/1 up
>       >>
>       >>
>       >> I started with rados bench command to benchmark the read
>       performance of this Cluster on a large pool (~10K PGs) and found
>       that each rados client has a limitation. Each client can only
>       drive up to a certain mark. Each server  node cpu utilization
>       shows it is  around 85-90% idle and the admin node (from where
>       rados client is running) is around ~80-85% idle. I am trying
>       with 4K object size.
>       >
>       > Note that rados bench with 4k objects is different from rbd
>       with 4k-sized I/Os - rados bench sends each request to a new
>       object, while rbd objects are 4M by default.
>       >
>       >> Now, I started running more clients on the admin node and the
>       performance is scaling till it hits the client cpu limit. Server
>       still has the cpu of 30-35% idle. With small object size I must
>       say that the ceph per osd cpu utilization is not promising!
>       >>
>       >> After this, I started testing the rados block interface with
>       kernel rbd module from my admin node.
>       >> I have created 8 images mapped on the pool having around 10K
>       PGs and I am not able to scale up the performance by running fio
>       (either by creating a software raid or running on individual
>       /dev/rbd* instances). For example, running multiple fio
>       instances (one in /dev/rbd1 and the other in /dev/rbd2)  the
>       performance I am getting is half of what I am getting if running
>       one instance. Here is my fio job script.
>       >>
>       >> [random-reads]
>       >> ioengine=libaio
>       >> iodepth=32
>       >> filename=/dev/rbd1
>       >> rw=randread
>       >> bs=4k
>       >> direct=1
>       >> size=2G
>       >> numjobs=64
>       >>
>       >> Let me know if I am following the proper procedure or not.
>       >>
>       >> But, If my understanding is correct, kernel rbd module is
>       acting as a client to the cluster and in one admin node I can
>       run only one of such kernel instance.
>       >> If so, I am then limited to the client bottleneck that I
>       stated earlier. The cpu utilization of the server side is around
>       85-90% idle, so, it is clear that client is not driving.
>       >>
>       >> My question is, is there any way to hit the cluster  with
>       more client from a single box while testing the rbd module ?
>       >
>       > You can run multiple librbd instances easily (for example with
>       multiple runs of the rbd bench-write command).
>       >
>       > The kernel rbd driver uses the same rados client instance for
>       multiple block devices by default. There's an option (noshare)
>       to use a new rados client instance for a newly mapped device,
>       but it's not exposed by the rbd cli. You need to use the sysfs
>       interface that 'rbd map' uses instead.
>       >
>       > Once you've used rbd map once on a machine, the kernel will
>       already have the auth key stored, and you can use:
>       >
>       > echo '1.2.3.4:6789 name=admin,key=client.admin,noshare
>       poolname
>       > imagename' > /sys/bus/rbd/add
>       >
>       > Where 1.2.3.4:6789 is the address of a monitor, and you're
>       connecting as client.admin.
>       >
>       > You can use 'rbd unmap' as usual.
>       >
>       > Josh
>       >
>       >
>       >
>       >
> 
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo
> info at  http://vger.kernel.org/majordomo-info.html
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> 


_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Scaling RBD module
       [not found]                 ` <alpine.DEB.2.00.1309241413280.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
@ 2013-09-24 21:24                   ` Travis Rhoden
  2013-09-24 22:17                   ` Somnath Roy
  1 sibling, 0 replies; 12+ messages in thread
From: Travis Rhoden @ 2013-09-24 21:24 UTC (permalink / raw)
  To: Sage Weil
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Anirban Ray,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw

On Tue, Sep 24, 2013 at 5:16 PM, Sage Weil <sage-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org> wrote:
> On Tue, 24 Sep 2013, Travis Rhoden wrote:
>> This "noshare" option may have just helped me a ton -- I sure wish I would
>> have asked similar questions sooner, because I have seen the same failure to
>> scale.  =)
>>
>> One question -- when using the "noshare" option (or really, even without it)
>> are there any practical limits on the number of RBDs that can be mounted?  I
>> have servers with ~100 RBDs on them each, and am wondering if I switch them
>> all over to using "noshare" if anything is going to blow up, use a ton more
>> memory, etc.  Even without noshare, are there any known limits to how many
>> RBDs can be mapped?
>
> With noshare each mapped image will appear as a separate client instance,
> which means it will have it's own session with teh monitors and own TCP
> connections to the OSDs.  It may be a viable workaround for now but in
> general I would not recommend it.

Good to know.  We are still playing with CephFS as our ultimate
solution, but in the meantime this may indeed be a good workaround for
me.

>
> I'm very curious what the scaling issue is with the shared client.  Do you
> have a working perf that can capture callgraph information on this
> machine?

Not currently, but I could certainly work on it.  The issue that we
see is basically what the OP showed -- that there seems to be a finite
amount of bandwidth that I can read/write from a machine, regardless
of how many RBDs are involved.  i.e., if I can get 1GB/sec writes on
one RBD when everything else is idle, running the same test on two
RBDs in parallel *from the same machine* ends up with the sum of the
two at ~1GB/sec, split fairly evenly. However, if I do the same thing
and run the same test on two RBDs, each hosted on a separate machine,
I definitely see increased bandwidth.  Monitoring network traffic and
the Ceph OSD nodes seems to imply that they are not overloaded --
there is more bandwidth to be had, the clients just aren't able to
push the data fast enough.  That's why I'm hoping creating a "new"
client for each RBD will improve things.

I'm not going to enable this everywhere just yet, we will test things
on a few RBDs and test, and perhaps enable on some RBDs that are
particularly heavily loaded.

I'll work on the perf capture!

Thanks for the feedback, as always.

 - Travis
>
> sage
>
>>
>> Thanks!
>>
>>  - Travis
>>
>>
>> On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
>> wrote:
>>       Thanks Josh !
>>       I am able to successfully add this noshare option in the image
>>       mapping now. Looking at dmesg output, I found that was indeed
>>       the secret key problem. Block performance is scaling now.
>>
>>       Regards
>>       Somnath
>>
>>       -----Original Message-----
>>       From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>>       [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Josh
>>       Durgin
>>       Sent: Thursday, September 19, 2013 12:24 PM
>>       To: Somnath Roy
>>       Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
>>       ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>>       Subject: Re: [ceph-users] Scaling RBD module
>>
>>       On 09/19/2013 12:04 PM, Somnath Roy wrote:
>>       > Hi Josh,
>>       > Thanks for the information. I am trying to add the following
>>       but hitting some permission issue.
>>       >
>>       > root@emsclient:/etc# echo
>>       <mon-1>:6789,<mon-2>:6789,<mon-3>:6789
>>       > name=admin,key=client.admin,noshare test_rbd ceph_block_test'
>>       >
>>       > /sys/bus/rbd/add
>>       > -bash: echo: write error: Operation not permitted
>>
>>       If you check dmesg, it will probably show an error trying to
>>       authenticate to the cluster.
>>
>>       Instead of key=client.admin, you can pass the base64 secret
>>       value as shown in 'ceph auth list' with the
>>       secret=XXXXXXXXXXXXXXXXXXXXX option.
>>
>>       BTW, there's a ticket for adding the noshare option to rbd map
>>       so using the sysfs interface like this is never necessary:
>>
>>       http://tracker.ceph.com/issues/6264
>>
>>       Josh
>>
>>       > Here is the contents of rbd directory..
>>       >
>>       > root@emsclient:/sys/bus/rbd# ll
>>       > total 0
>>       > drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
>>       > drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
>>       > --w-------  1 root root 4096 Sep 19 11:59 add
>>       > drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
>>       > drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
>>       > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
>>       > --w-------  1 root root 4096 Sep 19 12:03 drivers_probe
>>       > --w-------  1 root root 4096 Sep 19 12:03 remove
>>       > --w-------  1 root root 4096 Sep 19 11:59 uevent
>>       >
>>       >
>>       > I checked even if I am logged in as root , I can't write
>>       anything on /sys.
>>       >
>>       > Here is the Ubuntu version I am using..
>>       >
>>       > root@emsclient:/etc# lsb_release -a
>>       > No LSB modules are available.
>>       > Distributor ID: Ubuntu
>>       > Description:    Ubuntu 13.04
>>       > Release:        13.04
>>       > Codename:       raring
>>       >
>>       > Here is the mount information....
>>       >
>>       > root@emsclient:/etc# mount
>>       > /dev/mapper/emsclient--vg-root on / type ext4
>>       (rw,errors=remount-ro)
>>       > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys
>>       type
>>       > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type
>>       tmpfs (rw)
>>       > none on /sys/fs/fuse/connections type fusectl (rw) none on
>>       > /sys/kernel/debug type debugfs (rw) none on
>>       /sys/kernel/security type
>>       > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755)
>>       devpts on
>>       > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
>>       > tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
>>       > none on /run/lock type tmpfs
>>       (rw,noexec,nosuid,nodev,size=5242880)
>>       > none on /run/shm type tmpfs (rw,nosuid,nodev) none on
>>       /run/user type
>>       > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
>>       > /dev/sda1 on /boot type ext2 (rw)
>>       > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
>>       >
>>       >
>>       > Any idea what went wrong here ?
>>       >
>>       > Thanks & Regards
>>       > Somnath
>>       >
>>       > -----Original Message-----
>>       > From: Josh Durgin [mailto:josh.durgin-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
>>       > Sent: Wednesday, September 18, 2013 6:10 PM
>>       > To: Somnath Roy
>>       > Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
>>       > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>>       > Subject: Re: [ceph-users] Scaling RBD module
>>       >
>>       > On 09/17/2013 03:30 PM, Somnath Roy wrote:
>>       >> Hi,
>>       >> I am running Ceph on a 3 node cluster and each of my server
>>       node is running 10 OSDs, one for each disk. I have one admin
>>       node and all the nodes are connected with 2 X 10G network. One
>>       network is for cluster and other one configured as public
>>       network.
>>       >>
>>       >> Here is the status of my cluster.
>>       >>
>>       >> ~/fio_test# ceph -s
>>       >>
>>       >>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023
>>       >>      health HEALTH_WARN clock skew detected on mon.
>>       <server-name-2>, mon. <server-name-3>
>>       >>      monmap e1: 3 mons at
>>       {<server-name-1>=xxx.xxx.xxx.xxx:6789/0,
>>       <server-name-2>=xxx.xxx.xxx.xxx:6789/0,
>>       <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64,
>>       quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
>>       >>      osdmap e391: 30 osds: 30 up, 30 in
>>       >>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB
>>       data, 27912 MB used, 11145 GB / 11172 GB avail
>>       >>      mdsmap e1: 0/0/1 up
>>       >>
>>       >>
>>       >> I started with rados bench command to benchmark the read
>>       performance of this Cluster on a large pool (~10K PGs) and found
>>       that each rados client has a limitation. Each client can only
>>       drive up to a certain mark. Each server  node cpu utilization
>>       shows it is  around 85-90% idle and the admin node (from where
>>       rados client is running) is around ~80-85% idle. I am trying
>>       with 4K object size.
>>       >
>>       > Note that rados bench with 4k objects is different from rbd
>>       with 4k-sized I/Os - rados bench sends each request to a new
>>       object, while rbd objects are 4M by default.
>>       >
>>       >> Now, I started running more clients on the admin node and the
>>       performance is scaling till it hits the client cpu limit. Server
>>       still has the cpu of 30-35% idle. With small object size I must
>>       say that the ceph per osd cpu utilization is not promising!
>>       >>
>>       >> After this, I started testing the rados block interface with
>>       kernel rbd module from my admin node.
>>       >> I have created 8 images mapped on the pool having around 10K
>>       PGs and I am not able to scale up the performance by running fio
>>       (either by creating a software raid or running on individual
>>       /dev/rbd* instances). For example, running multiple fio
>>       instances (one in /dev/rbd1 and the other in /dev/rbd2)  the
>>       performance I am getting is half of what I am getting if running
>>       one instance. Here is my fio job script.
>>       >>
>>       >> [random-reads]
>>       >> ioengine=libaio
>>       >> iodepth=32
>>       >> filename=/dev/rbd1
>>       >> rw=randread
>>       >> bs=4k
>>       >> direct=1
>>       >> size=2G
>>       >> numjobs=64
>>       >>
>>       >> Let me know if I am following the proper procedure or not.
>>       >>
>>       >> But, If my understanding is correct, kernel rbd module is
>>       acting as a client to the cluster and in one admin node I can
>>       run only one of such kernel instance.
>>       >> If so, I am then limited to the client bottleneck that I
>>       stated earlier. The cpu utilization of the server side is around
>>       85-90% idle, so, it is clear that client is not driving.
>>       >>
>>       >> My question is, is there any way to hit the cluster  with
>>       more client from a single box while testing the rbd module ?
>>       >
>>       > You can run multiple librbd instances easily (for example with
>>       multiple runs of the rbd bench-write command).
>>       >
>>       > The kernel rbd driver uses the same rados client instance for
>>       multiple block devices by default. There's an option (noshare)
>>       to use a new rados client instance for a newly mapped device,
>>       but it's not exposed by the rbd cli. You need to use the sysfs
>>       interface that 'rbd map' uses instead.
>>       >
>>       > Once you've used rbd map once on a machine, the kernel will
>>       already have the auth key stored, and you can use:
>>       >
>>       > echo '1.2.3.4:6789 name=admin,key=client.admin,noshare
>>       poolname
>>       > imagename' > /sys/bus/rbd/add
>>       >
>>       > Where 1.2.3.4:6789 is the address of a monitor, and you're
>>       connecting as client.admin.
>>       >
>>       > You can use 'rbd unmap' as usual.
>>       >
>>       > Josh
>>       >
>>       >
>>       > ________________________________
>>       >
>>       > PLEASE NOTE: The information contained in this electronic mail
>>       message is intended only for the use of the designated
>>       recipient(s) named above. If the reader of this message is not
>>       the intended recipient, you are hereby notified that you have
>>       received this message in error and that any review,
>>       dissemination, distribution, or copying of this message is
>>       strictly prohibited. If you have received this communication in
>>       error, please notify the sender by telephone or e-mail (as shown
>>       above) immediately and destroy any and all copies of this
>>       message in your possession (whether hard copies or
>>       electronically stored copies).
>>       >
>>       >
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel"
>> in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org More majordomo
>> info at  http://vger.kernel.org/majordomo-info.html
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Scaling RBD module
       [not found]                 ` <alpine.DEB.2.00.1309241413280.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
  2013-09-24 21:24                   ` Travis Rhoden
@ 2013-09-24 22:17                   ` Somnath Roy
  2013-09-24 22:47                     ` [ceph-users] " Sage Weil
  1 sibling, 1 reply; 12+ messages in thread
From: Somnath Roy @ 2013-09-24 22:17 UTC (permalink / raw)
  To: Sage Weil, Travis Rhoden
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Anirban Ray,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw


[-- Attachment #1.1: Type: text/plain, Size: 16056 bytes --]

Hi Sage,

We did quite a few experiments to see how far Ceph read performance can scale. Here is the summary.



1. First we tried to see how far a single-node cluster with one OSD can scale. We started with the Cuttlefish release, with the entire OSD file system on an SSD. What we saw is that with 4K-sized objects and a single rados client over a dedicated 10G network, throughput can't go beyond a certain point.
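
The single-client runs were roughly along these lines (pool name, run length and concurrency below are placeholders, and the exact flags may differ by release; 'seq' reads back the objects left in place by the write phase):

  rados -p testpool bench 60 write -b 4096 -t 32 --no-cleanup
  rados -p testpool bench 60 seq -t 32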

We dug through the code and found that SimpleMessenger opens a single socket connection (per client) to talk to the OSD. We also saw there is only one dispatch queue (Dispatch thread) per SimpleMessenger to carry these requests to the OSD. We tried adding more dispatcher threads to the dispatch queue and rearranging several locks in Pipe.cc to identify the bottleneck. What we ended up discovering is that there are bottlenecks both upstream and downstream at the OSD level, and changing the locking scheme in the IO path would affect a lot of other code (that we don't even know about).

So, we stopped that activity and started working around the upstream bottleneck by introducing more clients to the single OSD. What we saw is that a single OSD does scale that way, but with a lot of CPU utilization: to produce ~40K IOPS (4K) it is taking almost 12 cores of CPU.
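
"More clients" here just means several rados bench processes pointed at the same pool in parallel, each process being its own rados client instance (again, pool name and process count are placeholders):

  for i in $(seq 1 8); do
      rados -p testpool bench 60 write -b 4096 -t 32 &
  done
  wait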

Another point: I didn't see this single OSD scale with the Dumpling release, even with multiple clients! Something has changed.



2. After that, we set up a proper cluster with 3 high-performing nodes and 30 OSDs in total. Here also, we are seeing that a single rados bench client, as well as a single rbd client instance, does not scale beyond a certain limit. It is not able to generate much load, as node CPU utilization remains very low. But running multiple client instances, the performance scales until it hits the client CPU limit.
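
On the rbd side, "multiple client instances" means one fio process per noshare-mapped device, e.g. (device names and job counts are from my test setup):

  fio --name=r1 --filename=/dev/rbd1 --ioengine=libaio --iodepth=32 \
      --rw=randread --bs=4k --direct=1 --size=2G --numjobs=8 &
  fio --name=r2 --filename=/dev/rbd2 --ioengine=libaio --iodepth=32 \
      --rw=randread --bs=4k --direct=1 --size=2G --numjobs=8 &
  wait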



So, it is pretty clear we are not able to saturate anything with a single client, which is why the 'noshare' option was very helpful for benchmarking rbd performance. I have attached callgrind data from a single-OSD/single-client run.
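
The attached profile was collected more or less as follows (single-OSD test box, osd.0 run in the foreground under callgrind; the pid suffix of the output file depends on the run):

  valgrind --tool=callgrind ceph-osd -i 0 -f
  # after stopping the osd:
  callgrind_annotate callgrind.out.<pid> > callgrind_overall.txt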



Now, I am benchmarking radosgw and I think I am hitting a similar bottleneck there. Could you please confirm whether radosgw also opens a single client instance to the cluster?

If so, is there any option similar to 'noshare' in this case? Here also, with multiple radosgw instances created on separate nodes, the performance scales.

BTW, is there a way to run multiple radosgw instances on a single node, or does it have to be one per node?





Thanks & Regards

Somnath







-----Original Message-----
From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org [mailto:ceph-devel-owner-u79uwXL29TZUIDd8j+nm9g@public.gmane.org.org] On Behalf Of Sage Weil
Sent: Tuesday, September 24, 2013 2:16 PM
To: Travis Rhoden
Cc: Josh Durgin; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Scaling RBD module

On Tue, 24 Sep 2013, Travis Rhoden wrote:
> This "noshare" option may have just helped me a ton -- I sure wish I
> would have asked similar questions sooner, because I have seen the
> same failure to scale.  =)
>
> One question -- when using the "noshare" option (or really, even
> without it) are there any practical limits on the number of RBDs that
> can be mounted?  I have servers with ~100 RBDs on them each, and am
> wondering if I switch them all over to using "noshare" if anything is
> going to blow up, use a ton more memory, etc.  Even without noshare,
> are there any known limits to how many RBDs can be mapped?

With noshare each mapped image will appear as a separate client instance, which means it will have its own session with the monitors and its own TCP connections to the OSDs.  It may be a viable workaround for now but in general I would not recommend it.

I'm very curious what the scaling issue is with the shared client.  Do you have a working perf that can capture callgraph information on this machine?

sage

[-- Attachment #1.2: Type: text/html, Size: 35108 bytes --]

[-- Attachment #2: callgrind_overall.txt --]
[-- Type: text/plain, Size: 205518 bytes --]

--------------------------------------------------------------------------------
Profile data file 'callgrind.out.11853' (creator: callgrind-3.8.1)
--------------------------------------------------------------------------------
I1 cache: 
D1 cache: 
LL cache: 
Timerange: Basic block 0 - 554439227
Trigger: Program termination
Profiled target:  ceph-osd -i 0 -f (PID 11853, part 1)
Events recorded:  Ir
Events shown:     Ir
Event sort order: Ir
Thresholds:       99
Include dirs:     
User annotated:   
Auto-annotation:  off

--------------------------------------------------------------------------------
           Ir 
--------------------------------------------------------------------------------
2,197,545,623  PROGRAM TOTALS

--------------------------------------------------------------------------------
         Ir  file:function
--------------------------------------------------------------------------------
227,402,660  ???:operator new(unsigned long) [???]
185,363,992  ???:operator delete(void*) [???]
182,534,184  /root/xfs1/ceph-0.61.7/src/common/sctp_crc32.c:ceph_crc32c_le [/usr/local/bin/ceph-osd]
 94,321,090  /usr/include/cryptopp/misc.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
 59,950,844  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/multiarch/memcpy-ssse3-back.S:__memcpy_ssse3_back [/lib/x86_64-linux-gnu/libc-2.17.so]
 57,608,567  common/buffer.cc:ceph::buffer::list::iterator::copy(unsigned int, char*) [/usr/local/bin/ceph-osd]
 44,255,360  ???:std::string::_M_mutate(unsigned long, unsigned long, unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 43,386,143  ???:std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 39,201,084  ???:std::string::append(std::string const&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 38,204,416  /build/buildd/eglibc-2.17/stdio-common/vfprintf.c:vfprintf [/lib/x86_64-linux-gnu/libc-2.17.so]
 35,513,695  ???:std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 31,647,755  /build/buildd/eglibc-2.17/nptl/pthread_mutex_trylock.c:pthread_mutex_trylock [/lib/x86_64-linux-gnu/libpthread-2.17.so]
 30,314,940  /build/buildd/eglibc-2.17/nptl/pthread_mutex_unlock.c:__pthread_mutex_unlock_usercnt [/lib/x86_64-linux-gnu/libpthread-2.17.so]
 29,163,356  ???:char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 28,795,178  common/buffer.cc:ceph::buffer::list::iterator::advance(int) [/usr/local/bin/ceph-osd]
 28,545,698  ???:std::string::replace(unsigned long, unsigned long, char const*, unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 28,433,788  ???:std::string::append(char const*, unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 25,397,312  ???:std::string::_Rep::_M_clone(std::allocator<char> const&, unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 22,052,466  ???:free [???]
 21,954,546  ???:std::string::_M_replace_safe(unsigned long, unsigned long, char const*, unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 21,527,934  common/buffer.cc:ceph::buffer::list::append(char const*, unsigned int) [/usr/local/bin/ceph-osd]
 20,243,599  ???:std::string::reserve(unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 19,864,523  /build/buildd/eglibc-2.17/libio/genops.c:_IO_default_xsputn [/lib/x86_64-linux-gnu/libc-2.17.so]
 18,803,977  ./common/Mutex.h:Mutex::Lock(bool)
 17,880,597  common/Mutex.cc:Mutex::Lock(bool) [/usr/local/bin/ceph-osd]
 16,184,904  ???:0x00000000000bb218 [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 14,634,118  ???:std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
 14,050,870  ./include/buffer.h:ceph::buffer::list::iterator::copy(unsigned int, char*)
 11,776,118  os/LFNIndex.cc:append_escaped(__gnu_cxx::__normal_iterator<char const*, std::string>, __gnu_cxx::__normal_iterator<char const*, std::string>, std::string*) [/usr/local/bin/ceph-osd]
 11,329,938  common/buffer.cc:ceph::buffer::ptr::append(char const*, unsigned int) [/usr/local/bin/ceph-osd]
 11,301,006  common/buffer.cc:ceph::buffer::list::append(ceph::buffer::ptr const&, unsigned int, unsigned int) [/usr/local/bin/ceph-osd]
 10,697,297  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/multiarch/strlen-sse2-pminub.S:__strlen_sse2_pminub [/lib/x86_64-linux-gnu/libc-2.17.so]
  9,692,350  ???:0x0000000000014f80 [/usr/lib/libtcmalloc.so.4.1.0]
  9,229,395  ./common/Mutex.h:Mutex::_pre_unlock() [/usr/local/bin/ceph-osd]
  9,227,560  ???:0x00000000000bc226 [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  9,175,422  common/buffer.cc:ceph::buffer::ptr::c_str() const [/usr/local/bin/ceph-osd]
  9,074,655  ???:tcmalloc::AlignmentForSize(unsigned long) [/usr/lib/libtcmalloc.so.4.1.0]
  8,846,366  common/buffer.cc:ceph::buffer::ptr::release() [/usr/local/bin/ceph-osd]
  8,530,344  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/multiarch/memcmp-sse4.S:__memcmp_sse4_1 [/lib/x86_64-linux-gnu/libc-2.17.so]
  8,508,176  ???:0x00000000000bb228 [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  8,388,572  common/buffer.cc:ceph::buffer::ptr::unused_tail_length() const [/usr/local/bin/ceph-osd]
  8,123,450  /usr/include/c++/4.7/bits/basic_string.h:LFNIndex::get_full_path_subdir(std::vector<std::string, std::allocator<std::string> > const&)
  6,931,120  /usr/include/c++/4.7/bits/basic_string.h:std::basic_string<char, std::char_traits<char>, std::allocator<char> > std::operator+<char, std::char_traits<char>, std::allocator<char> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) [/usr/local/bin/ceph-osd]
  6,685,470  /usr/include/c++/4.7/bits/stl_uninitialized.h:std::string* std::__uninitialized_copy<false>::__uninit_copy<std::string*, std::string*>(std::string*, std::string*, std::string*) [/usr/local/bin/ceph-osd]
  6,483,890  ???:malloc [???]
  6,437,860  /build/buildd/eglibc-2.17/debug/vsnprintf_chk.c:__vsnprintf_chk [/lib/x86_64-linux-gnu/libc-2.17.so]
  6,314,055  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/strchrnul.S:strchrnul [/lib/x86_64-linux-gnu/libc-2.17.so]
  6,143,990  os/LFNIndex.cc:LFNIndex::get_full_path_subdir(std::vector<std::string, std::allocator<std::string> > const&) [/usr/local/bin/ceph-osd]
  5,898,638  ???:__cxxabiv1::__vmi_class_type_info::__do_dyncast(long, __cxxabiv1::__class_type_info::__sub_kind, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info::__dyncast_result&) const [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  5,841,122  ./common/Mutex.h:Mutex::Unlock() [/usr/local/bin/ceph-osd]
  5,830,090  /usr/include/c++/4.7/bits/vector.tcc:std::vector<std::string, std::allocator<std::string> >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, std::string const&) [/usr/local/bin/ceph-osd]
  5,741,874  /usr/include/c++/4.7/bits/basic_string.h:append_escaped(__gnu_cxx::__normal_iterator<char const*, std::string>, __gnu_cxx::__normal_iterator<char const*, std::string>, std::string*)
  5,550,455  common/perf_counters.cc:PerfCounters::inc(int, unsigned long) [/usr/local/bin/ceph-osd]
  5,507,707  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_clear() [/usr/local/bin/ceph-osd]
  5,447,662  /build/buildd/libcrypto++-5.6.1/rijndael.cpp:CryptoPP::Rijndael::Enc::AdvancedProcessBlocks(unsigned char const*, unsigned char const*, unsigned char*, unsigned long, unsigned int) const [/usr/lib/libcrypto++.so.9.0.0]
  5,380,607  ???:__cxxabiv1::__si_class_type_info::__do_dyncast(long, __cxxabiv1::__class_type_info::__sub_kind, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info::__dyncast_result&) const'2 [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  5,359,060  ./common/Mutex.h:PerfCounters::inc(int, unsigned long)
  5,088,044  /build/buildd/libcrypto++-5.6.1/rijndael.cpp:CryptoPP::Rijndael::Base::UncheckedSetKey(unsigned char const*, unsigned int, CryptoPP::NameValuePairs const&) [/usr/lib/libcrypto++.so.9.0.0]
  5,020,487  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/multiarch/strcmp-sse42.S:__strcmp_sse42 [/lib/x86_64-linux-gnu/libc-2.17.so]
  4,996,260  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.222] [/usr/local/bin/ceph-osd]
  4,895,925  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() [/usr/local/bin/ceph-osd]
  4,501,840  /usr/include/c++/4.7/bits/basic_string.h:void std::_Destroy_aux<false>::__destroy<std::string*>(std::string*, std::string*)
  4,456,980  os/chain_xattr.cc:get_raw_xattr_name(char const*, int, char*, int) [clone .constprop.8] [/usr/local/bin/ceph-osd]
  4,434,884  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::PutMaybeModifiable(unsigned char*, unsigned long, int, bool, bool) [/usr/lib/libcrypto++.so.9.0.0]
  4,333,175  /build/buildd/eglibc-2.17/libio/strops.c:_IO_str_init_static_internal [/lib/x86_64-linux-gnu/libc-2.17.so]
  4,322,440  common/Clock.cc:ceph_clock_now(CephContext*) [/usr/local/bin/ceph-osd]
  4,276,050  /build/buildd/eglibc-2.17/stdio-common/_itoa.c:_itoa_word [/lib/x86_64-linux-gnu/libc-2.17.so]
  4,215,261  /usr/include/x86_64-linux-gnu/bits/string3.h:ceph::buffer::list::iterator::copy(unsigned int, char*)
  4,210,194  /usr/include/c++/4.7/bits/stl_construct.h:void std::_Destroy_aux<false>::__destroy<std::string*>(std::string*, std::string*) [/usr/local/bin/ceph-osd]
  4,182,768  ???:std::_Rb_tree_insert_and_rebalance(bool, std::_Rb_tree_node_base*, std::_Rb_tree_node_base*, std::_Rb_tree_node_base&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  4,074,310  osd/ReplicatedPG.cc:ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
  3,981,986  /build/buildd/eglibc-2.17/nptl/pthread_self.c:pthread_self [/lib/x86_64-linux-gnu/libpthread-2.17.so]
  3,957,057  ???:posix_memalign [???]
  3,704,618  ???:std::_Rb_tree_rebalance_for_erase(std::_Rb_tree_node_base*, std::_Rb_tree_node_base&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  3,674,074  ./include/utime.h:ceph_clock_now(CephContext*)
  3,644,844  ???:operator new[](unsigned long) [???]
  3,432,775  /usr/include/c++/4.7/ext/atomicity.h:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()
  3,389,298  common/buffer.cc:ceph::buffer::ptr::c_str() [/usr/local/bin/ceph-osd]
  3,379,280  ./include/buffer.h:ceph::buffer::list::list() [/usr/local/bin/ceph-osd]
  3,269,286  common/perf_counters.cc:PerfCounters::set(int, unsigned long) [/usr/local/bin/ceph-osd]
  3,241,584  ???:std::string::compare(std::string const&) const [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  3,207,751  common/buffer.cc:ceph::buffer::list::iterator::copy(unsigned int, std::string&) [/usr/local/bin/ceph-osd]
  3,156,552  ./common/Mutex.h:PerfCounters::set(int, unsigned long)
  3,115,219  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::iterator::advance(int)
  2,984,298  ???:operator delete[](void*) [???]
  2,975,490  msg/Pipe.cc:Pipe::tcp_read(char*, int) [/usr/local/bin/ceph-osd]
  2,946,814  osd/PG.cc:intrusive_ptr_release(PG*) [/usr/local/bin/ceph-osd]
  2,920,566  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.3043] [/usr/local/bin/ceph-osd]
  2,840,964  common/buffer.cc:ceph::buffer::create_page_aligned(unsigned int) [/usr/local/bin/ceph-osd]
  2,814,973  /build/buildd/eglibc-2.17/math/../sysdeps/ieee754/dbl-64/wordsize-64/s_trunc.c:trunc [/lib/x86_64-linux-gnu/libm-2.17.so]
  2,720,136  /usr/include/c++/4.7/bits/basic_string.h:intrusive_ptr_release(PG*)
  2,720,136  /usr/include/c++/4.7/bits/basic_string.h:intrusive_ptr_add_ref(PG*)
  2,720,136  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.2856] [/usr/local/bin/ceph-osd]
  2,705,352  common/buffer.cc:ceph::buffer::ptr::operator=(ceph::buffer::ptr const&) [/usr/local/bin/ceph-osd]
  2,599,905  /build/buildd/eglibc-2.17/libio/genops.c:_IO_old_init [/lib/x86_64-linux-gnu/libc-2.17.so]
  2,599,905  /build/buildd/eglibc-2.17/libio/genops.c:_IO_setb [/lib/x86_64-linux-gnu/libc-2.17.so]
  2,476,100  /build/buildd/eglibc-2.17/libio/genops.c:_IO_no_init [/lib/x86_64-linux-gnu/libc-2.17.so]
  2,366,252  ???:std::__detail::_List_node_base::_M_hook(std::__detail::_List_node_base*) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  2,363,550  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<std::string, std::allocator<std::string> >::_M_check_len(unsigned long, char const*) const [/usr/local/bin/ceph-osd]
  2,341,040  ???:std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, unsigned long, std::allocator<char> const&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  2,266,780  osd/PG.cc:intrusive_ptr_add_ref(PG*) [/usr/local/bin/ceph-osd]
  2,211,936  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.119] [/usr/local/bin/ceph-osd]
  2,183,470  os/HashIndex.cc:HashIndex::get_path_components(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*) [/usr/local/bin/ceph-osd]
  2,138,090  os/LFNIndex.cc:LFNIndex::path_exists(std::vector<std::string, std::allocator<std::string> > const&, int*) [/usr/local/bin/ceph-osd]
  2,117,403  common/WorkQueue.cc:ThreadPool::worker(ThreadPool::WorkThread*)
  2,093,767  ???:std::string::assign(std::string const&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  2,093,571  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::BlockQueue::Put(unsigned char const*, unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
  2,037,800  common/Throttle.cc:Throttle::put(long) [/usr/local/bin/ceph-osd]
  2,007,192  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/cancellation.S:__pthread_disable_asynccancel [/lib/x86_64-linux-gnu/libpthread-2.17.so]
  2,003,150  os/HashIndex.cc:HashIndex::_lookup(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*, std::string*, int*) [/usr/local/bin/ceph-osd]
  1,980,880  osd/OpRequest.cc:OpTracker::_mark_event(OpRequest*, std::string const&, utime_t) [/usr/local/bin/ceph-osd]
  1,979,420  /build/buildd/eglibc-2.17/nptl/pthread_mutex_unlock.c:pthread_mutex_unlock [/lib/x86_64-linux-gnu/libpthread-2.17.so]
  1,961,096  /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/sysv/linux/x86_64/recv.c:recv [/lib/x86_64-linux-gnu/libpthread-2.17.so]
  1,958,370  /usr/include/c++/4.7/bits/basic_string.h:LFNIndex::lfn_generate_object_name(hobject_t const&)
  1,839,926  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/cancellation.S:__pthread_enable_asynccancel [/lib/x86_64-linux-gnu/libpthread-2.17.so]
  1,836,785  msg/Pipe.cc:Pipe::read_message(Message**) [/usr/local/bin/ceph-osd]
  1,825,848  msg/Pipe.cc:Pipe::tcp_read_nonblocking(char*, int) [/usr/local/bin/ceph-osd]
  1,808,612  msg/Pipe.cc:Pipe::write_message(ceph_msg_header&, ceph_msg_footer&, ceph::buffer::list&) [/usr/local/bin/ceph-osd]
  1,803,672  common/buffer.cc:ceph::buffer::ptr::ptr(ceph::buffer::ptr const&) [/usr/local/bin/ceph-osd]
  1,800,800  /usr/include/c++/4.7/bits/basic_string.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
  1,800,800  ./include/encoding.h:object_info_t::decode(ceph::buffer::list::iterator&)
  1,795,379  ???:std::string::_Rep::_M_destroy(std::allocator<char> const&) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  1,733,270  /build/buildd/eglibc-2.17/debug/snprintf_chk.c:__snprintf_chk [/lib/x86_64-linux-gnu/libc-2.17.so]
  1,722,015  osd/ReplicatedPG.cc:ReplicatedPG::do_osd_ops(ReplicatedPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&) [/usr/local/bin/ceph-osd]
  1,710,760  osd/osd_types.cc:object_info_t::decode(ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
  1,710,760  /usr/include/c++/4.7/bits/basic_string.h:hobject_t::~hobject_t()
  1,688,812  common/buffer.cc:ceph::buffer::list::append(ceph::buffer::list const&) [/usr/local/bin/ceph-osd]
  1,665,540  /build/buildd/eglibc-2.17/io/../sysdeps/unix/sysv/linux/wordsize-64/xstat.c:_xstat [/lib/x86_64-linux-gnu/libc-2.17.so]
  1,654,485  osd/PG.cc:PG::publish_stats_to_osd() [/usr/local/bin/ceph-osd]
  1,620,720  ./include/encoding.h:osd_reqid_t::decode(ceph::buffer::list::iterator&)
  1,594,014  ./include/buffer.h:ceph::buffer::list::append(ceph::buffer::ptr const&, unsigned int, unsigned int)
  1,575,700  ./os/hobject.h:hobject_t::hobject_t(object_t, std::string const&, snapid_t, unsigned long, long) [/usr/local/bin/ceph-osd]
  1,530,680  ./msg/Message.h:Message::get_source_inst() const [/usr/local/bin/ceph-osd]
  1,521,945  common/buffer.cc:ceph::buffer::raw_posix_aligned::~raw_posix_aligned() [/usr/local/bin/ceph-osd]
  1,508,560  ./common/Mutex.h:Throttle::put(long)
  1,508,170  osd/osd_types.h:object_info_t::object_info_t(object_info_t const&) [/usr/local/bin/ceph-osd]
  1,489,107  ./include/buffer.h:ceph::buffer::list::crc32c(unsigned int) [/usr/local/bin/ceph-osd]
  1,486,320  common/Throttle.cc:Throttle::get(long, long) [/usr/local/bin/ceph-osd]
  1,485,858  /build/buildd/libcrypto++-5.6.1/cryptlib.cpp:CryptoPP::Algorithm::Algorithm(bool) [/usr/lib/libcrypto++.so.9.0.0]
  1,463,150  os/chain_xattr.cc:chain_fgetxattr(int, char const*, void*, unsigned long) [/usr/local/bin/ceph-osd]
  1,463,150  ./include/encoding.h:hobject_t::decode(ceph::buffer::list::iterator&)
  1,440,704  /build/buildd/libcrypto++-5.6.1/modes.cpp:CryptoPP::CBC_Encryption::ProcessData(unsigned char*, unsigned char const*, unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
  1,440,640  /usr/include/c++/4.7/bits/basic_string.h:HashIndex::get_path_components(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*)
  1,420,866  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:pthread_cond_timedwait@@GLIBC_2.3.2 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
  1,420,524  common/buffer.cc:ceph::buffer::ptr::ptr(ceph::buffer::ptr const&, unsigned int, unsigned int) [/usr/local/bin/ceph-osd]
  1,418,130  /usr/include/c++/4.7/bits/stl_uninitialized.h:std::vector<std::string, std::allocator<std::string> >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, std::string const&)
  1,406,875  osd/OSD.cc:OSD::handle_op(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
  1,405,087  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::iterator::copy(unsigned int, char*)
  1,350,780  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::BlockQueue::ResetQueue(unsigned long, unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
  1,350,655  auth/Crypto.cc:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const [/usr/local/bin/ceph-osd]
  1,350,600  os/LFNIndex.cc:LFNIndex::lfn_get_name(std::vector<std::string, std::allocator<std::string> > const&, hobject_t const&, std::string*, std::string*, int*) [/usr/local/bin/ceph-osd]
  1,350,600  osd/osd_types.cc:osd_reqid_t::decode(ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
  1,325,292  ./include/utime.h:OSD::sched_scrub()
  1,316,835  osd/osd_types.cc:object_locator_t::decode(ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
  1,305,580  os/LFNIndex.cc:LFNIndex::lfn_generate_object_name(hobject_t const&) [/usr/local/bin/ceph-osd]
  1,289,601  ./include/buffer.h:ceph::buffer::list::append(char const*, unsigned int)
  1,286,128  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::find(coll_t const&) const [/usr/local/bin/ceph-osd]
  1,284,857  msg/Pipe.cc:Pipe::reader() [/usr/local/bin/ceph-osd]
  1,283,264  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::StreamTransformationFilter::LastPut(unsigned char const*, unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
  1,264,100  msg/Pipe.cc:Pipe::writer() [/usr/local/bin/ceph-osd]
  1,260,320  /usr/include/c++/4.7/bits/basic_string.h:LFNIndex::path_exists(std::vector<std::string, std::allocator<std::string> > const&, int*)
  1,226,795  /build/buildd/eglibc-2.17/stdio-common/printf-parse.h:vfprintf
  1,216,116  ./log/SubsystemMap.h:ceph::log::SubsystemMap::should_gather(unsigned int, int)
  1,215,540  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.251] [/usr/local/bin/ceph-osd]
  1,215,540  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.423] [/usr/local/bin/ceph-osd]
  1,193,030  os/LFNIndex.cc:LFNIndex::lookup(hobject_t const&, std::tr1::shared_ptr<CollectionIndex::Path>*, int*) [/usr/local/bin/ceph-osd]
  1,193,030  os/FileStore.cc:FileStore::lfn_open(coll_t, hobject_t const&, int, unsigned int, std::tr1::shared_ptr<CollectionIndex::Path>*, std::tr1::shared_ptr<CollectionIndex>*) [/usr/local/bin/ceph-osd]
  1,188,789  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_broadcast.S:pthread_cond_broadcast@@GLIBC_2.3.2 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
  1,181,775  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<long, std::pair<long const, pg_pool_t>, std::_Select1st<std::pair<long const, pg_pool_t> >, std::less<long>, std::allocator<std::pair<long const, pg_pool_t> > >::find(long const&) const
  1,170,520  /usr/include/c++/4.7/bits/stl_vector.h:HashIndex::get_path_components(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*)
  1,149,790  /usr/include/atomic_ops/sysdeps/gcc/x86.h:ceph::buffer::ptr::release()
  1,149,623  msg/Pipe.cc:Pipe::tcp_read_wait() [/usr/local/bin/ceph-osd]
  1,140,544  ???:tcmalloc::CentralFreeList::ReleaseToSpans(void*) [/usr/lib/libtcmalloc.so.4.1.0]
  1,125,500  /usr/include/c++/4.7/bits/basic_string.h:LFNIndex::get_full_path(std::vector<std::string, std::allocator<std::string> > const&, std::string const&)
  1,125,500  osd/OpRequest.cc:OpRequest::mark_event(std::string const&) [/usr/local/bin/ceph-osd]
  1,125,500  ./msg/msg_types.h:Message::get_source_inst() const
  1,125,500  /usr/include/c++/4.7/bits/stl_construct.h:std::string* std::__uninitialized_copy<false>::__uninit_copy<std::string*, std::string*>(std::string*, std::string*, std::string*)
  1,108,220  ???:tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::FreeList*, unsigned long, int) [/usr/lib/libtcmalloc.so.4.1.0]
  1,080,904  ???:__cxxabiv1::__si_class_type_info::__do_dyncast(long, __cxxabiv1::__class_type_info::__sub_kind, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info::__dyncast_result&) const [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
  1,080,624  /build/buildd/libcrypto++-5.6.1/secblock.h:CryptoPP::Rijndael::Base::UncheckedSetKey(unsigned char const*, unsigned int, CryptoPP::NameValuePairs const&)
  1,080,480  ./include/encoding.h:object_locator_t::decode(ceph::buffer::list::iterator&)
  1,058,111  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::StreamTransformationFilter::InitializeDerivedAndReturnNewSizes(CryptoPP::NameValuePairs const&, unsigned long&, unsigned long&, unsigned long&) [/usr/lib/libcrypto++.so.9.0.0]
  1,048,036  msg/Message.cc:decode_message(CephContext*, ceph_msg_header&, ceph_msg_footer&, ceph::buffer::list&, ceph::buffer::list&, ceph::buffer::list&) [/usr/local/bin/ceph-osd]
  1,035,598  common/Mutex.cc:Mutex::Mutex(char const*, bool, bool, bool, CephContext*) [/usr/local/bin/ceph-osd]
  1,014,428  /build/buildd/eglibc-2.17/io/../sysdeps/unix/syscall-template.S:poll [/lib/x86_64-linux-gnu/libc-2.17.so]
  1,012,950  ./include/buffer.h:ceph::buffer::list::clear() [/usr/local/bin/ceph-osd]
    991,481  ???:std::_Rb_tree_decrement(std::_Rb_tree_node_base*) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    990,440  os/hobject.cc:hobject_t::decode(ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
    985,212  /usr/include/x86_64-linux-gnu/bits/string3.h:ceph::buffer::ptr::append(char const*, unsigned int)
    979,185  osd/ReplicatedPG.cc:ReplicatedPG::get_object_context(hobject_t const&, object_locator_t const&, bool) [/usr/local/bin/ceph-osd]
    946,050  ./include/buffer.h:ceph::buffer::list::claim_append(ceph::buffer::list&)
    925,042  ./osd/osd_types.h:OSD::sched_scrub()
    923,051  /build/buildd/libcrypto++-5.6.1/cpu.h:CryptoPP::Rijndael::Base::UncheckedSetKey(unsigned char const*, unsigned int, CryptoPP::NameValuePairs const&)
    921,728  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::equal_range(coll_t const&) [/usr/local/bin/ceph-osd]
    911,655  /usr/include/c++/4.7/bits/stl_pair.h:std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >::pair(std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > const&) [/usr/local/bin/ceph-osd]
    911,655  osd/osd_types.h:pg_stat_t::operator=(pg_stat_t const&) [/usr/local/bin/ceph-osd]
    900,520  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::StreamTransformationFilter::StreamTransformationFilter(CryptoPP::StreamTransformation&, CryptoPP::BufferedTransformation*, CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme, bool) [/usr/lib/libcrypto++.so.9.0.0]
    900,434  /usr/include/cryptopp/cryptlib.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    900,400  os/LFNIndex.cc:LFNIndex::get_full_path(std::vector<std::string, std::allocator<std::string> > const&, std::string const&) [/usr/local/bin/ceph-osd]
    900,400  /usr/include/c++/4.7/ext/new_allocator.h:std::vector<std::string, std::allocator<std::string> >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, std::string const&)
    878,280  ./common/Mutex.h:Throttle::get(long, long)
    877,890  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.2548] [/usr/local/bin/ceph-osd]
    866,635  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()'2 [/usr/local/bin/ceph-osd]
    855,380  osd/OpRequest.cc:OpTracker::create_request(Message*) [/usr/local/bin/ceph-osd]
    855,380  osd/OpRequest.cc:OpTracker::mark_event(OpRequest*, std::string const&) [/usr/local/bin/ceph-osd]
    855,280  /usr/include/c++/4.7/bits/stl_vector.h:HashIndex::_lookup(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*, std::string*, int*)
    844,125  osd/OSD.cc:OSD::OpWQ::_enqueue(std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >) [/usr/local/bin/ceph-osd]
    833,485  ???:__dynamic_cast [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    832,870  os/IndexManager.cc:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*) [/usr/local/bin/ceph-osd]
    832,870  os/IndexManager.cc:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*) [/usr/local/bin/ceph-osd]
    832,870  osd/ReplicatedPG.cc:ReplicatedPG::log_op_stats(ReplicatedPG::OpContext*) [/usr/local/bin/ceph-osd]
    832,870  /usr/include/c++/4.7/tr1/shared_ptr.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    831,199  common/HeartbeatMap.cc:ceph::HeartbeatMap::reset_timeout(ceph::heartbeat_handle_d*, long, long) [/usr/local/bin/ceph-osd]
    817,190  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_lower_bound(std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*, std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*, coll_t const&) [clone .isra.91] [/usr/local/bin/ceph-osd]
    811,728  /usr/include/c++/4.7/ext/new_allocator.h:ceph::buffer::list::append(ceph::buffer::ptr const&, unsigned int, unsigned int)
    811,560  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/cancellation.S:__libc_disable_asynccancel [/lib/x86_64-linux-gnu/libc-2.17.so]
    810,958  osd/OSD.cc:OSD::_dispatch(Message*) [/usr/local/bin/ceph-osd]
    810,816  common/buffer.cc:ceph::buffer::create(unsigned int) [/usr/local/bin/ceph-osd]
    810,468  /build/buildd/libcrypto++-5.6.1/algparam.h:CryptoPP::AlgorithmParameters CryptoPP::MakeParameters<CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme>(char const*, CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme const&, bool) [/usr/lib/libcrypto++.so.9.0.0]
    810,468  /build/buildd/libcrypto++-5.6.1/misc.cpp:CryptoPP::UnalignedAllocate(unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
    810,360  ./os/hobject.h:hobject_t::hobject_t(hobject_t const&) [/usr/local/bin/ceph-osd]
    810,360  ./osd/osd_types.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    810,360  os/FileStore.cc:FileStore::get_index(coll_t, std::tr1::shared_ptr<CollectionIndex>*) [/usr/local/bin/ceph-osd]
    799,105  osd/ReplicatedPG.cc:ReplicatedPG::find_object_context(hobject_t const&, object_locator_t const&, ObjectContext**, bool, snapid_t*) [/usr/local/bin/ceph-osd]
    787,955  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::Filter::Output(int, unsigned char const*, unsigned long, int, bool, std::string const&) [/usr/lib/libcrypto++.so.9.0.0]
    787,850  os/LFNIndex.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    787,850  osd/osd_types.cc:object_stat_sum_t::add(object_stat_sum_t const&) [/usr/local/bin/ceph-osd]
    787,850  /usr/include/c++/4.7/bits/stl_pair.h:std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >::~pair() [/usr/local/bin/ceph-osd]
    787,850  /usr/include/c++/4.7/ext/new_allocator.h:std::_Vector_base<std::string, std::allocator<std::string> >::_M_allocate(unsigned long) [clone .isra.1812]
    776,595  ./include/encoding.h:MOSDOp::decode_payload()
    766,360  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:DispatchQueue::enqueue(Message*, int, unsigned long)
    765,375  auth/Crypto.cc:CryptoKey::encrypt(CephContext*, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const [/usr/local/bin/ceph-osd]
    765,340  /usr/include/c++/4.7/bits/basic_string.h:LFNIndex::~LFNIndex()
    759,384  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.550] [/usr/local/bin/ceph-osd]
    755,145  /usr/include/c++/4.7/bits/stl_map.h:std::map<PG*, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::operator[](PG* const&) [/usr/local/bin/ceph-osd]
    754,085  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_erase(std::_Rb_tree_node<std::pair<hobject_t const, ObjectContext*> >*) [/usr/local/bin/ceph-osd]
    747,834  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:pthread_cond_wait@@GLIBC_2.3.2 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    745,932  osd/OSD.cc:OSD::do_waiters() [/usr/local/bin/ceph-osd]
    743,930  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/cancellation.S:__libc_enable_asynccancel [/lib/x86_64-linux-gnu/libc-2.17.so]
    743,490  ./include/buffer.h:ceph::buffer::list::claim(ceph::buffer::list&)
    743,138  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<Context*, std::allocator<Context*> >::_M_clear() [/usr/local/bin/ceph-osd]
    743,094  /usr/include/c++/4.7/bits/stl_iterator.h:append_escaped(__gnu_cxx::__normal_iterator<char const*, std::string>, __gnu_cxx::__normal_iterator<char const*, std::string>, std::string*)
    742,929  /build/buildd/libcrypto++-5.6.1/algparam.cpp:CryptoPP::AlgorithmParametersBase::GetVoidValue(char const*, std::type_info const&, void*) const [/usr/lib/libcrypto++.so.9.0.0]
    736,596  msg/Pipe.cc:Pipe::do_sendmsg(msghdr*, int, bool) [/usr/local/bin/ceph-osd]
    731,575  os/FileStore.cc:FileStore::_fgetattr(int, char const*, ceph::buffer::ptr&) [/usr/local/bin/ceph-osd]
    728,220  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::_Rb_tree_const_iterator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >) [/usr/local/bin/ceph-osd]
    721,280  ./msg/Message.h:Message::Message(int, int, int) [/usr/local/bin/ceph-osd]
    720,440  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::Filter::AttachedTransformation() [/usr/lib/libcrypto++.so.9.0.0]
    720,416  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::BlockQueue::GetContigousBlocks(unsigned long&)
    720,352  /build/buildd/libcrypto++-5.6.1/modes.h:CryptoPP::CipherModeBase::SetCipherWithIV(CryptoPP::BlockCipher&, unsigned char const*, int) [/usr/lib/libcrypto++.so.9.0.0]
    720,320  os/FileStore.cc:FileStore::getattr(coll_t, hobject_t const&, char const*, ceph::buffer::ptr&) [/usr/local/bin/ceph-osd]
    720,320  ./os/hobject.h:hobject_t::get_filestore_key_u32() const [clone .isra.38] [/usr/local/bin/ceph-osd]
    720,320  /usr/include/c++/4.7/bits/basic_string.h:OpRequest::mark_event(std::string const&)
    712,596  /usr/include/c++/4.7/bits/vector.tcc:std::vector<int, std::allocator<int> >::operator=(std::vector<int, std::allocator<int> > const&) [/usr/local/bin/ceph-osd]
    709,065  auth/cephx/CephxSessionHandler.cc:CephxSessionHandler::sign_message(Message*) [/usr/local/bin/ceph-osd]
    709,065  /usr/include/c++/4.7/tr1/shared_ptr.h:std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >::pair(std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > const&)
    709,065  os/FileStore.cc:FileStore::read(coll_t, hobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, bool) [/usr/local/bin/ceph-osd]
    709,065  ./include/encoding.h:MOSDOpReply::encode_payload(unsigned long)
    709,065  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::__shared_ptr<OpRequest, (__gnu_cxx::_Lock_policy)2>::__shared_ptr(std::tr1::__shared_ptr<OpRequest, (__gnu_cxx::_Lock_policy)2> const&) [/usr/local/bin/ceph-osd]
    698,740  /usr/include/x86_64-linux-gnu/bits/string3.h:Message::Message(int, int, int)
    697,903  /build/buildd/libcrypto++-5.6.1/secblock.h:CryptoPP::FilterWithBufferedInput::BlockQueue::ResetQueue(unsigned long, unsigned long)
    697,810  osd/PG.cc:PG::op_has_sufficient_caps(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    697,810  osd/PG.cc:PG::do_request(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    697,810  common/perf_counters.cc:PerfCounters::tinc(int, utime_t) [/usr/local/bin/ceph-osd]
    695,756  /build/buildd/eglibc-2.17/nptl/../nptl/pthread_mutex_lock.c:__pthread_mutex_cond_lock [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    695,688  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::append(ceph::buffer::ptr const&, unsigned int, unsigned int)
    688,014  /usr/include/c++/4.7/backward/hashtable.h:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::count(pg_t const&) const
    684,255  osd/PG.cc:PG::put(std::string const&)
    676,377  /usr/include/atomic_ops/sysdeps/gcc/x86.h:ceph::buffer::ptr::ptr(ceph::buffer::ptr const&)
    675,620  /usr/include/c++/4.7/bits/stl_vector.h:ceph::log::SubsystemMap::should_gather(unsigned int, int)
    675,600  common/Throttle.cc:Throttle::_wait(long) [/usr/local/bin/ceph-osd]
    675,414  /build/buildd/libcrypto++-5.6.1/filters.h:CryptoPP::StringSinkTemplate<std::string>::Put2(unsigned char const*, unsigned long, int, bool) [/usr/lib/libcrypto++.so.9.0.0]
    675,390  /build/buildd/eglibc-2.17/nptl/pthread_mutex_init.c:pthread_mutex_init [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    675,320  ./include/encoding.h:decode(std::string&, ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
    675,300  auth/cephx/CephxProtocol.h:int encode_encrypt<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&) [/usr/local/bin/ceph-osd]
    675,300  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_lower_bound(std::_Rb_tree_node<std::pair<hobject_t const, ObjectContext*> >*, std::_Rb_tree_node<std::pair<hobject_t const, ObjectContext*> >*, hobject_t const&) [clone .isra.2103] [/usr/local/bin/ceph-osd]
    675,300  /usr/include/c++/4.7/bits/basic_string.h:std::basic_string<char, std::char_traits<char>, std::allocator<char> > std::operator+<char, std::char_traits<char>, std::allocator<char> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*) [/usr/local/bin/ceph-osd]
    675,300  /usr/include/c++/4.7/tr1/shared_ptr.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    675,300  osd/ReplicatedPG.cc:ReplicatedPG::prepare_transaction(ReplicatedPG::OpContext*) [/usr/local/bin/ceph-osd]
    668,460  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > const&) [/usr/local/bin/ceph-osd]
    657,143  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > const&) [/usr/local/bin/ceph-osd]
    652,819  /usr/include/cryptopp/secblock.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    652,819  /usr/include/cryptopp/secblock.h:CryptoPP::BlockOrientedCipherModeBase::ResizeBuffers()
    652,790  /usr/include/c++/4.7/bits/basic_string.h:LFNIndex::lookup(hobject_t const&, std::tr1::shared_ptr<CollectionIndex::Path>*, int*)
    652,790  /usr/include/c++/4.7/bits/stl_construct.h:void std::_Destroy_aux<false>::__destroy<OSDOp*>(OSDOp*, OSDOp*) [/usr/local/bin/ceph-osd]
    648,366  ???:0x0000000038055d23 [/usr/lib/valgrind/callgrind-amd64-linux]
    644,021  /usr/include/c++/4.7/backward/hashtable.h:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::find_or_insert(std::pair<pg_t const, PG*> const&) [/usr/local/bin/ceph-osd]
    641,535  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, pg_missing_t::item>, std::_Select1st<std::pair<hobject_t const, pg_missing_t::item> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, pg_missing_t::item> > >::find(hobject_t const&) const [/usr/local/bin/ceph-osd]
    641,535  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2>::operator=(std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2> const&) [/usr/local/bin/ceph-osd]
    640,537  /usr/include/c++/4.7/bits/stl_tree.h:std::map<PG*, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::operator[](PG* const&)
    630,308  /build/buildd/libcrypto++-5.6.1/modes.h:CryptoPP::CipherModeFinalTemplate_ExternalCipher<CryptoPP::CBC_Encryption>::CipherModeFinalTemplate_ExternalCipher(CryptoPP::BlockCipher&, unsigned char const*, int) [/usr/lib/libcrypto++.so.9.0.0]
    630,280  ./common/Mutex.h:PerfCounters::tinc(int, utime_t)
    630,280  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/memcpy.S:__GI_mempcpy [/lib/x86_64-linux-gnu/libc-2.17.so]
    619,025  /usr/include/c++/4.7/bits/vector.tcc:std::vector<OSDOp, std::allocator<OSDOp> >::_M_fill_insert(__gnu_cxx::__normal_iterator<OSDOp*, std::vector<OSDOp, std::allocator<OSDOp> > >, unsigned long, OSDOp const&) [/usr/local/bin/ceph-osd]
    619,025  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<unsigned long, entity_name_t>, std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t>, std::_Select1st<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> >, std::less<std::pair<unsigned long, entity_name_t> >, std::allocator<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> > >::_M_erase(std::_Rb_tree_node<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> >*) [/usr/local/bin/ceph-osd]
    607,851  /build/buildd/libcrypto++-5.6.1/algparam.h:CryptoPP::AlgorithmParametersTemplate<CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme>::AssignValue(char const*, std::type_info const&, void*) const [/usr/lib/libcrypto++.so.9.0.0]
    607,782  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.842] [/usr/local/bin/ceph-osd]
    607,770  ./messages/MOSDOpReply.h:MOSDOpReply::MOSDOpReply(MOSDOp*, int, unsigned int, int) [/usr/local/bin/ceph-osd]
    607,770  /usr/include/boost/tuple/tuple_comparison.hpp:bool boost::tuples::detail::lt<boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > >, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > >(boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > const&, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > const&) [/usr/local/bin/ceph-osd]
    607,770  /usr/include/c++/4.7/ext/atomicity.h:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release()'2
    607,770  /usr/include/boost/tuple/tuple_comparison.hpp:bool boost::tuples::detail::lt<boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > >, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > >(boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > const&) [/usr/local/bin/ceph-osd]
    607,770  osd/ReplicatedPG.cc:ReplicatedPG::do_osd_op_effects(ReplicatedPG::OpContext*) [/usr/local/bin/ceph-osd]
    607,770  auth/cephx/CephxProtocol.h:void encode_encrypt_enc_bl<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&) [/usr/local/bin/ceph-osd]
    605,455  ???:std::_Rb_tree_increment(std::_Rb_tree_node_base const*) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    603,540  osd/OSD.cc:OSD::sched_scrub() [/usr/local/bin/ceph-osd]
    585,286  /build/buildd/libcrypto++-5.6.1/cpu.h:CryptoPP::Rijndael::Enc::AdvancedProcessBlocks(unsigned char const*, unsigned char const*, unsigned char*, unsigned long, unsigned int) const
    585,260  ./os/hobject.h:operator<(hobject_t const&, hobject_t const&) [/usr/local/bin/ceph-osd]
    574,328  common/buffer.cc:ceph::buffer::raw_char::~raw_char() [/usr/local/bin/ceph-osd]
    574,005  /usr/include/c++/4.7/bits/basic_string.h:OpTracker::create_request(Message*)
    574,005  osd/ReplicatedPG.h:ReplicatedPG::OpContext::OpContext(std::tr1::shared_ptr<OpRequest>, osd_reqid_t, std::vector<OSDOp, std::allocator<OSDOp> >&, ObjectState*, SnapSetContext*, ReplicatedPG*) [/usr/local/bin/ceph-osd]
    570,650  ???:tcmalloc::CentralFreeList::RemoveRange(void**, void**, int) [/usr/lib/libtcmalloc.so.4.1.0]
    568,308  /usr/include/c++/4.7/bits/basic_string.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::find(coll_t const&) const
    562,750  /usr/include/c++/4.7/tr1/shared_ptr.h:LFNIndex::lookup(hobject_t const&, std::tr1::shared_ptr<CollectionIndex::Path>*, int*)
    562,750  auth/cephx/CephxSessionHandler.cc:CephxSessionHandler::check_message_signature(Message*) [/usr/local/bin/ceph-osd]
    562,750  ???:std::string::compare(char const*) const [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    562,750  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<snapid_t, std::allocator<snapid_t> >::vector(std::vector<snapid_t, std::allocator<snapid_t> > const&) [/usr/local/bin/ceph-osd]
    553,632  common/HeartbeatMap.cc:ceph::HeartbeatMap::_check(ceph::heartbeat_handle_d*, char const*, long) [/usr/local/bin/ceph-osd]
    552,316  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::dequeue()
    551,567  ???:tcmalloc::CentralFreeList::FetchFromSpans() [/usr/lib/libtcmalloc.so.4.1.0]
    551,495  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::shared_ptr<OpRequest>::shared_ptr(std::tr1::shared_ptr<OpRequest> const&) [/usr/local/bin/ceph-osd]
    551,495  ./messages/MOSDOp.h:MOSDOp::decode_payload() [/usr/local/bin/ceph-osd]
    551,446  osd/OSD.cc:OSD::OpWQ::_process(boost::intrusive_ptr<PG>) [/usr/local/bin/ceph-osd]
    540,600  common/buffer.cc:ceph::buffer::list::claim_append(ceph::buffer::list&) [/usr/local/bin/ceph-osd]
    540,322  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::StreamTransformationFilter::LastBlockSize(CryptoPP::StreamTransformation&, CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme) [/usr/lib/libcrypto++.so.9.0.0]
    540,312  /build/buildd/libcrypto++-5.6.1/cryptlib.cpp:CryptoPP::SimpleKeyingInterface::ThrowIfInvalidIVLength(int) [/usr/lib/libcrypto++.so.9.0.0]
    540,248  /usr/include/c++/4.7/ext/new_allocator.h:ceph::buffer::list::append(ceph::buffer::list const&)
    540,240  ./common/Mutex.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    540,240  /usr/include/c++/4.7/bits/basic_string.h:std::_List_base<std::pair<utime_t, std::string>, std::allocator<std::pair<utime_t, std::string> > >::_M_clear()
    540,240  ./log/SubsystemMap.h:OpTracker::_mark_event(OpRequest*, std::string const&, utime_t)
    540,240  osd/OSDCap.cc:OSDCap::is_capable(std::string const&, long, std::string const&, bool, bool, bool, bool) const [/usr/local/bin/ceph-osd]
    540,240  ./common/Mutex.h:IndexManager::put_index(coll_t)
    539,644  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_unique(std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&) [/usr/local/bin/ceph-osd]
    537,888  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >::_M_clear() [/usr/local/bin/ceph-osd]
    535,386  ./common/Mutex.h:Pipe::writer() [/usr/local/bin/ceph-osd]
    518,332  ./msg/Message.h:Message::~Message() [/usr/local/bin/ceph-osd]
    517,799  /build/buildd/libcrypto++-5.6.1/cryptlib.cpp:CryptoPP::SimpleKeyingInterface::SetKey(unsigned char const*, unsigned long, CryptoPP::NameValuePairs const&) [/usr/lib/libcrypto++.so.9.0.0]
    517,799  ???:__cxxabiv1::__class_type_info::__do_dyncast(long, __cxxabiv1::__class_type_info::__sub_kind, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info const*, void const*, __cxxabiv1::__class_type_info::__dyncast_result&) const [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    517,799  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::BlockQueue::GetAll(unsigned char*) [/usr/lib/libcrypto++.so.9.0.0]
    517,730  /usr/include/c++/4.7/bits/basic_string.h:LFNIndex::lfn_get_name(std::vector<std::string, std::allocator<std::string> > const&, hobject_t const&, std::string*, std::string*, int*)
    517,730  /usr/include/boost/tuple/tuple_comparison.hpp:bool boost::tuples::detail::lt<boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> >, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > >(boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > const&) [/usr/local/bin/ceph-osd]
    515,047  ???:std::_Rb_tree_increment(std::_Rb_tree_node_base*) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    506,475  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >::pair(std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > const&)
    506,475  osd/OSD.cc:OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    506,475  /usr/include/c++/4.7/tr1/shared_ptr.h:PG::get_osdmap() const
    506,475  /usr/include/c++/4.7/ext/atomicity.h:std::tr1::__shared_ptr<OpRequest, (__gnu_cxx::_Lock_policy)2>::__shared_ptr(std::tr1::__shared_ptr<OpRequest, (__gnu_cxx::_Lock_policy)2> const&)
    506,475  ./os/hobject.h:hobject_t::~hobject_t()
    506,436  common/WorkQueue.cc:ThreadPool::join_old_threads() [/usr/local/bin/ceph-osd]
    504,504  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_insert_unique(std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > const&) [/usr/local/bin/ceph-osd]
    503,224  common/Mutex.h:ThreadPool::worker(ThreadPool::WorkThread*) [/usr/local/bin/ceph-osd]
    500,492  ./include/utime.h:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_unique(std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&)
    497,992  ./common/WorkQueue.h:ThreadPool::BatchWorkQueue<PG>::_void_dequeue() [/usr/local/bin/ceph-osd]
    495,964  common/buffer.cc:ceph::buffer::ptr::ptr(ceph::buffer::raw*) [/usr/local/bin/ceph-osd]
    495,330  ???:memalign [???]
    495,318  /build/buildd/libcrypto++-5.6.1/misc.h:CryptoPP::FilterWithBufferedInput::BlockQueue::ResetQueue(unsigned long, unsigned long)
    495,242  /usr/include/c++/4.7/bits/basic_string.tcc:char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag) [/usr/local/bin/ceph-osd]
    495,220  os/FileStore.cc:FileStore::lfn_open(coll_t, hobject_t const&, int, unsigned int) [/usr/local/bin/ceph-osd]
    495,220  ./include/encoding.h:void decode<std::pair<unsigned long, entity_name_t>, watch_info_t>(std::map<std::pair<unsigned long, entity_name_t>, watch_info_t, std::less<std::pair<unsigned long, entity_name_t> >, std::allocator<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> > >&, ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
    495,220  /usr/include/c++/4.7/bits/stl_pair.h:OpRequest::mark_event(std::string const&)
    495,220  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<entity_name_t, std::pair<entity_name_t const, watch_info_t>, std::_Select1st<std::pair<entity_name_t const, watch_info_t> >, std::less<entity_name_t>, std::allocator<std::pair<entity_name_t const, watch_info_t> > >::_M_erase(std::_Rb_tree_node<std::pair<entity_name_t const, watch_info_t> >*) [/usr/local/bin/ceph-osd]
    495,220  ./os/hobject.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::equal_range(hobject_t const&)
    495,220  /usr/include/c++/4.7/bits/stl_uninitialized.h:OSDOp* std::__uninitialized_copy<false>::__uninit_copy<OSDOp*, OSDOp*>(OSDOp*, OSDOp*, OSDOp*) [/usr/local/bin/ceph-osd]
    492,602  osd/PG.cc:PG::lock(bool) [/usr/local/bin/ceph-osd]
    489,219  ./common/PrioritizedQueue.h:PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::dequeue() [/usr/local/bin/ceph-osd]
    483,965  ./common/PrioritizedQueue.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::dequeue() [/usr/local/bin/ceph-osd]
    474,386  ./common/PrioritizedQueue.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::length() [/usr/local/bin/ceph-osd]
    473,130  common/buffer.cc:ceph::buffer::list::claim(ceph::buffer::list&) [/usr/local/bin/ceph-osd]
    472,773  /build/buildd/libcrypto++-5.6.1/algparam.h:CryptoPP::AlgorithmParametersTemplate<CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme>::~AlgorithmParametersTemplate() [/usr/lib/libcrypto++.so.9.0.0]
    472,731  /usr/include/cryptopp/filters.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    472,710  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::equal_range(hobject_t const&) [/usr/local/bin/ceph-osd]
    472,710  ???:std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(std::string const&, unsigned long, unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    472,710  /usr/include/c++/4.7/bits/stl_iterator.h:std::vector<std::string, std::allocator<std::string> >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, std::string const&)
    472,710  osd/OSD.cc:OSD::require_same_or_newer_map(std::tr1::shared_ptr<OpRequest>, unsigned int) [/usr/local/bin/ceph-osd]
    472,710  /usr/include/c++/4.7/bits/stl_construct.h:std::vector<std::string, std::allocator<std::string> >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, std::string const&)
    472,710  ./common/PrioritizedQueue.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::enqueue_strict(entity_inst_t, unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >) [/usr/local/bin/ceph-osd]
    456,920  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:__lll_unlock_wake [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    456,170  /usr/include/atomic_ops/sysdeps/gcc/x86.h:PG::put(std::string const&) [/usr/local/bin/ceph-osd]
    453,356  /usr/include/atomic_ops/sysdeps/gcc/x86.h:intrusive_ptr_add_ref(PG*)
    452,720  osd/OSD.h:OSD::PeeringWQ::_dequeue(std::list<PG*, std::allocator<PG*> >*) [/usr/local/bin/ceph-osd]
    451,027  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<double, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_unique(std::pair<double, std::tr1::shared_ptr<OpRequest> > const&) [/usr/local/bin/ceph-osd]
    450,906  /usr/include/c++/4.7/ext/new_allocator.h:std::_List_base<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_clear()
    450,820  ./common/Mutex.h:Pipe::reader()
    450,720  osd/OSD.cc:OSD::ms_dispatch(Message*) [/usr/local/bin/ceph-osd]
    450,710  ./log/SubsystemMap.h:Pipe::read_message(Message**)
    450,260  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::IsolatedInitialize(CryptoPP::NameValuePairs const&) [/usr/lib/libcrypto++.so.9.0.0]
    450,200  ./include/encoding.h:void decode<entity_name_t, watch_info_t>(std::map<entity_name_t, watch_info_t, std::less<entity_name_t>, std::allocator<std::pair<entity_name_t const, watch_info_t> > >&, ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
    450,200  ./common/Mutex.h:OpTracker::_mark_event(OpRequest*, std::string const&, utime_t)
    450,200  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >::~pair()
    450,200  /usr/include/c++/4.7/bits/basic_string.h:CollectionIndex::Path::~Path()
    450,200  os/FileStore.cc:FileStore::lfn_open(coll_t, hobject_t const&, int) [/usr/local/bin/ceph-osd]
    450,200  osd/OSD.cc:OSD::_share_map_incoming(entity_name_t, Connection*, unsigned int, OSD::Session*) [/usr/local/bin/ceph-osd]
    450,200  /usr/include/c++/4.7/ext/new_allocator.h:HashIndex::get_path_components(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*)
    450,200  /usr/include/c++/4.7/tr1/shared_ptr.h:std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >::~pair()
    450,200  /usr/include/c++/4.7/bits/stl_list.h:void std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_initialize_dispatch<std::_List_const_iterator<ceph::buffer::ptr> >(std::_List_const_iterator<ceph::buffer::ptr>, std::_List_const_iterator<ceph::buffer::ptr>, std::__false_type) [clone .isra.1502] [/usr/local/bin/ceph-osd]
    450,120  /usr/include/x86_64-linux-gnu/sys/stat.h:LFNIndex::path_exists(std::vector<std::string, std::allocator<std::string> > const&, int*)
    444,901  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<Message*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > >, std::_Select1st<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > >, std::less<Message*>, std::allocator<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > > >::_M_insert_unique(std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > const&) [/usr/local/bin/ceph-osd]
    443,592  /usr/include/c++/4.7/bits/basic_string.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::equal_range(coll_t const&)
    439,530  msg/SimpleMessenger.cc:SimpleMessenger::submit_message(Message*, Connection*, entity_addr_t const&, int, bool) [/usr/local/bin/ceph-osd]
    439,153  ???:std::__detail::_List_node_base::_M_transfer(std::__detail::_List_node_base*, std::__detail::_List_node_base*) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    438,958  ./msg/Message.h:Connection::get_priv() [/usr/local/bin/ceph-osd]
    438,945  osd/OSD.cc:OSD::init_op_flags(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    437,398  ./osd/osd_types.h:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::count(pg_t const&) const
    428,260  msg/SimpleMessenger.cc:SimpleMessenger::_send_message(Message*, Connection*, bool) [/usr/local/bin/ceph-osd]
    428,054  /usr/include/c++/4.7/bits/stl_algobase.h:__gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > > std::copy<__gnu_cxx::__normal_iterator<int const*, std::vector<int, std::allocator<int> > >, __gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > > >(__gnu_cxx::__normal_iterator<int const*, std::vector<int, std::allocator<int> > >, __gnu_cxx::__normal_iterator<int const*, std::vector<int, std::allocator<int> > >, __gnu_cxx::__normal_iterator<int*, std::vector<int, std::allocator<int> > >) [/usr/local/bin/ceph-osd]
    427,747  /build/buildd/libcrypto++-5.6.1/misc.h:CryptoPP::Rijndael::Base::UncheckedSetKey(unsigned char const*, unsigned int, CryptoPP::NameValuePairs const&)
    427,690  /usr/include/c++/4.7/bits/stl_uninitialized.h:OSDOp* std::__uninitialized_copy<false>::__uninit_copy<__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*>(__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, __gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*) [/usr/local/bin/ceph-osd]
    427,690  ./include/encoding.h:void encode_encrypt_enc_bl<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&)
    427,690  osd/OpRequest.h:OpRequest::OpRequest(Message*, OpTracker*) [/usr/local/bin/ceph-osd]
    427,550  /usr/include/c++/4.7/bits/stl_iterator.h:LFNIndex::get_full_path_subdir(std::vector<std::string, std::allocator<std::string> > const&)
    418,206  ???:tcmalloc::ThreadCache::FetchFromCentralCache(unsigned long, unsigned long) [/usr/lib/libtcmalloc.so.4.1.0]
    416,435  osd/ReplicatedPG.cc:ReplicatedPG::put_object_context(ObjectContext*) [/usr/local/bin/ceph-osd]
    416,435  osd/OpRequest.cc:OpHistory::cleanup(utime_t) [/usr/local/bin/ceph-osd]
    416,435  ./include/buffer.h:CephxSessionHandler::check_message_signature(Message*)
    416,435  ./include/buffer.h:CephxSessionHandler::sign_message(Message*)
    406,330  ./include/utime.h:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::equal_range(std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&)
    405,741  common/buffer.cc:ceph::buffer::inc_total_alloc(unsigned int) [/usr/local/bin/ceph-osd]
    405,741  common/buffer.cc:ceph::buffer::dec_total_alloc(unsigned int) [/usr/local/bin/ceph-osd]
    405,327  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/multiarch/../memset.S:__GI_memset [/lib/x86_64-linux-gnu/libc-2.17.so]
    405,234  /build/buildd/libcrypto++-5.6.1/cryptlib.cpp:CryptoPP::BufferedTransformation::ChannelPut2(std::string const&, unsigned char const*, unsigned long, int, bool) [/usr/lib/libcrypto++.so.9.0.0]
    405,207  ???:std::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string() [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    405,180  /build/buildd/libcrypto++-5.6.1/secblock.h:CryptoPP::StreamTransformationFilter::LastPut(unsigned char const*, unsigned long)
    405,180  /usr/include/x86_64-linux-gnu/bits/stdio2.h:LFNIndex::lfn_generate_object_name(hobject_t const&)
    405,180  ./messages/MOSDOpReply.h:MOSDOpReply::encode_payload(unsigned long) [/usr/local/bin/ceph-osd]
    405,180  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::shared_ptr<OpRequest>::~shared_ptr() [/usr/local/bin/ceph-osd]
    405,180  ./messages/MOSDOp.h:MOSDOp::get_reqid() const [/usr/local/bin/ceph-osd]
    405,180  /usr/include/c++/4.7/bits/basic_string.h:hobject_t::decode(ceph::buffer::list::iterator&)
    405,180  ./os/ObjectStore.h:ObjectStore::getattr(coll_t, hobject_t const&, char const*, ceph::buffer::list&) [/usr/local/bin/ceph-osd]
    405,180  /usr/include/c++/4.7/ext/atomicity.h:std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >::pair(std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > const&)
    393,925  /usr/include/c++/4.7/ext/atomicity.h:std::tr1::shared_ptr<OpRequest>::shared_ptr(std::tr1::shared_ptr<OpRequest> const&)
    393,925  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_insert_unique(std::pair<hobject_t const, ObjectContext*> const&) [/usr/local/bin/ceph-osd]
    393,925  osd/OSD.cc:OSD::enqueue_op(PG*, std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    390,748  /usr/include/c++/4.7/bits/basic_string.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    384,812  /usr/include/c++/4.7/bits/stl_tree.h:void std::_Rb_tree<PG*, PG*, std::_Identity<PG*>, std::less<PG*>, std::allocator<PG*> >::_M_insert_unique<std::_Rb_tree_const_iterator<PG*> >(std::_Rb_tree_const_iterator<PG*>, std::_Rb_tree_const_iterator<PG*>) [/usr/local/bin/ceph-osd]
    383,155  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<Message*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > >, std::_Select1st<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > >, std::less<Message*>, std::allocator<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > const&) [/usr/local/bin/ceph-osd]
    382,790  /usr/include/c++/4.7/bits/stl_vector.h:PerfCounters::inc(int, unsigned long)
    382,740  /build/buildd/libcrypto++-5.6.1/misc.h:CryptoPP::FilterWithBufferedInput::PutMaybeModifiable(unsigned char*, unsigned long, int, bool, bool)
    382,670  ./common/WorkQueue.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process(void*, ThreadPool::TPHandle&) [/usr/local/bin/ceph-osd]
    382,670  /usr/include/boost/tuple/tuple_comparison.hpp:bool boost::tuples::detail::lt<boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > >, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > >(boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > const&) [/usr/local/bin/ceph-osd]
    382,670  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<unsigned long, entity_name_t>, std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t>, std::_Select1st<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> >, std::less<std::pair<unsigned long, entity_name_t> >, std::allocator<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> > >::_Rb_tree(std::_Rb_tree<std::pair<unsigned long, entity_name_t>, std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t>, std::_Select1st<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> >, std::less<std::pair<unsigned long, entity_name_t> >, std::allocator<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> > > const&) [/usr/local/bin/ceph-osd]
    382,670  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<CollectionIndex::Path*>(CollectionIndex::Path*) [/usr/local/bin/ceph-osd]
    382,670  osd/osd_types.h:ObjectContext::ObjectContext(object_info_t const&, bool, SnapSetContext*) [/usr/local/bin/ceph-osd]
    382,670  osd/PG.cc:PG::can_discard_op(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    382,670  ???:std::string::assign(char const*, unsigned long) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    378,129  ./include/encoding.h:pg_stat_t::encode(ceph::buffer::list&) const
    377,958  ./log/SubsystemMap.h:Pipe::writer()
    377,064  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:__lll_lock_wait [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    372,668  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<double, Message*>, std::pair<double, Message*>, std::_Identity<std::pair<double, Message*> >, std::less<std::pair<double, Message*> >, std::allocator<std::pair<double, Message*> > >::_M_insert_unique(std::pair<double, Message*> const&) [/usr/local/bin/ceph-osd]
    371,973  /usr/include/atomic_ops/sysdeps/gcc/x86.h:ceph::buffer::ptr::ptr(ceph::buffer::raw*)
    371,448  /build/buildd/eglibc-2.17/nptl/pthread_cond_destroy.c:pthread_cond_destroy@@GLIBC_2.3.2 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    360,192  common/Mutex.cc:Mutex::~Mutex() [/usr/local/bin/ceph-osd]
    360,184  /usr/include/x86_64-linux-gnu/bits/string3.h:CryptoPP::FilterWithBufferedInput::BlockQueue::Put(unsigned char const*, unsigned long)
    360,184  /usr/include/c++/4.7/bits/stl_list.h:void std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_initialize_dispatch<std::_List_const_iterator<ceph::buffer::ptr> >(std::_List_const_iterator<ceph::buffer::ptr>, std::_List_const_iterator<ceph::buffer::ptr>, std::__false_type) [clone .isra.602] [/usr/local/bin/ceph-osd]
    360,176  /usr/include/c++/4.7/bits/basic_string.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    360,160  ./os/hobject.h:hobject_t::get_filestore_key_u32() const [clone .isra.14] [/usr/local/bin/ceph-osd]
    360,160  ???:hobject_t::hobject_t(hobject_t const&)
    360,160  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<std::pair<utime_t, std::string>, std::allocator<std::pair<utime_t, std::string> > >::_M_clear() [/usr/local/bin/ceph-osd]
    360,160  osd/OSD.cc:OSD::OpWQ::_dequeue() [/usr/local/bin/ceph-osd]
    360,160  /usr/include/c++/4.7/bits/stl_uninitialized.h:void std::__uninitialized_fill_n<false>::__uninit_fill_n<OSDOp*, unsigned long, OSDOp>(OSDOp*, unsigned long, OSDOp const&) [/usr/local/bin/ceph-osd]
    360,160  /usr/include/c++/4.7/bits/basic_string.h:FileStore::get_index(coll_t, std::tr1::shared_ptr<CollectionIndex>*)
    360,160  osd/ReplicatedPG.cc:ReplicatedPG::populate_obc_watchers(ObjectContext*) [/usr/local/bin/ceph-osd]
    360,160  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&) [/usr/local/bin/ceph-osd]
    355,961  /build/buildd/eglibc-2.17/string/../sysdeps/x86_64/multiarch/memcpy-ssse3-back.S:__memmove_ssse3_back [/lib/x86_64-linux-gnu/libc-2.17.so]
    351,614  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<double, Message*>, std::pair<double, Message*>, std::_Identity<std::pair<double, Message*> >, std::less<std::pair<double, Message*> >, std::allocator<std::pair<double, Message*> > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<double, Message*> const&) [/usr/local/bin/ceph-osd]
    351,560  /usr/include/c++/4.7/bits/stl_vector.h:ThreadPool::worker(ThreadPool::WorkThread*)
    350,970  /usr/include/c++/4.7/bits/char_traits.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::find(coll_t const&) const
    349,216  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > const&) [/usr/local/bin/ceph-osd]
    348,905  osd/OpRequest.h:ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)
    348,905  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<hobject_t const, ObjectContext*> const&) [/usr/local/bin/ceph-osd]
    348,905  osd/osd_types.cc:OSDOp::merge_osd_op_vector_out_data(std::vector<OSDOp, std::allocator<OSDOp> >&, ceph::buffer::list&) [/usr/local/bin/ceph-osd]
    348,905  /usr/include/c++/4.7/tr1/shared_ptr.h:OpHistory::insert(utime_t, std::tr1::shared_ptr<OpRequest>)
    339,288  ./common/WorkQueue.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_dequeue() [/usr/local/bin/ceph-osd]
    338,494  ./common/RefCountedObj.h:intrusive_ptr_release(RefCountedObject*)
    338,120  /usr/include/x86_64-linux-gnu/bits/socket2.h:Pipe::tcp_read_nonblocking(char*, int)
    338,118  ./log/SubsystemMap.h:Pipe::reader()
    338,080  mon/MonClient.cc:MonClient::ms_dispatch(Message*) [/usr/local/bin/ceph-osd]
    338,058  /usr/include/atomic_ops/sysdeps/gcc/x86.h:intrusive_ptr_add_ref(RefCountedObject*) [/usr/local/bin/ceph-osd]
    337,928  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::list()
    337,844  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<double, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<double, std::tr1::shared_ptr<OpRequest> > const&) [/usr/local/bin/ceph-osd]
    337,815  ???:std::__detail::_List_node_base::_M_unhook() [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    337,740  ./log/SubsystemMap.h:Throttle::put(long)
    337,734  /usr/include/c++/4.7/bits/stl_tree.h:OSD::sched_scrub()
    337,710  /usr/include/cryptopp/modes.h:CryptoPP::BlockOrientedCipherModeBase::MandatoryBlockSize() const
    337,695  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::Filter::Filter(CryptoPP::BufferedTransformation*) [/usr/lib/libcrypto++.so.9.0.0]
    337,695  /build/buildd/libcrypto++-5.6.1/cryptlib.cpp:CryptoPP::SimpleKeyingInterface::ThrowIfInvalidKeyLength(unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
    337,665  /usr/include/cryptopp/modes.h:CryptoPP::BlockOrientedCipherModeBase::ResizeBuffers() [/usr/local/bin/ceph-osd]
    337,665  /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/syscall-template.S:close [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    337,650  osd/ReplicatedPG.h:ReplicatedPG::register_object_context(ObjectContext*) [/usr/local/bin/ceph-osd]
    337,650  ./include/buffer.h:OSDOp* std::__uninitialized_copy<false>::__uninit_copy<__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*>(__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, __gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*)
    337,650  /usr/include/c++/4.7/bits/basic_string.h:hobject_t::hobject_t(object_t, std::string const&, snapid_t, unsigned long, long)
    337,650  /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/syscall-template.S:open [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    337,650  /usr/include/c++/4.7/tr1/shared_ptr.h:CollectionIndex::Path::Path(std::string, std::tr1::weak_ptr<CollectionIndex>)
    337,650  ./os/ObjectStore.h:ObjectStore::Transaction::Transaction() [/usr/local/bin/ceph-osd]
    337,650  osd/osd_types.cc:OSDOp::split_osd_op_vector_in_data(std::vector<OSDOp, std::allocator<OSDOp> >&, ceph::buffer::list&) [/usr/local/bin/ceph-osd]
    337,650  /usr/include/c++/4.7/bits/vector.tcc:std::vector<OSDOp, std::allocator<OSDOp> >::operator=(std::vector<OSDOp, std::allocator<OSDOp> > const&) [/usr/local/bin/ceph-osd]
    337,650  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::StreamTransformationFilter::NextPutModifiable(unsigned char*, unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
    337,650  osd/osd_types.h:ReplicatedPG::OpContext::OpContext(std::tr1::shared_ptr<OpRequest>, osd_reqid_t, std::vector<OSDOp, std::allocator<OSDOp> >&, ObjectState*, SnapSetContext*, ReplicatedPG*)
    337,650  osd/PG.h:PG::get_osdmap() const [/usr/local/bin/ceph-osd]
    337,650  osd/osd_types.h:object_locator_t::object_locator_t(object_locator_t const&) [/usr/local/bin/ceph-osd]
    337,650  ???:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_destroy() [/usr/local/bin/ceph-osd]
    337,503  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:OSD::OpWQ::_process(boost::intrusive_ptr<PG>)
    335,180  ./common/Mutex.h:DispatchQueue::entry() [/usr/local/bin/ceph-osd]
    332,301  msg/DispatchQueue.cc:DispatchQueue::entry()
    329,165  /build/buildd/eglibc-2.17/nptl/../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_signal.S:pthread_cond_signal@@GLIBC_2.3.2 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    326,395  /root/xfs1/ceph-0.61.7/src/common/safe_io.c:safe_pread [/usr/local/bin/ceph-osd]
    326,395  /usr/include/c++/4.7/bits/stl_map.h:std::map<entity_inst_t, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::operator[](entity_inst_t const&) [/usr/local/bin/ceph-osd]
    322,944  osd/PG.cc:PG::sched_scrub() [/usr/local/bin/ceph-osd]
    317,798  msg/DispatchQueue.cc:DispatchQueue::enqueue(Message*, int, unsigned long) [/usr/local/bin/ceph-osd]
    315,154  /usr/lib/gcc/x86_64-linux-gnu/4.7/include/emmintrin.h:CryptoPP::Rijndael::Enc::AdvancedProcessBlocks(unsigned char const*, unsigned char const*, unsigned char*, unsigned long, unsigned int) const
    315,140  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<std::string, std::allocator<std::string> >::_M_insert_aux(__gnu_cxx::__normal_iterator<std::string*, std::vector<std::string, std::allocator<std::string> > >, std::string const&)
    315,140  /usr/include/c++/4.7/bits/stl_vector.h:std::_Vector_base<std::string, std::allocator<std::string> >::_M_allocate(unsigned long) [clone .isra.1812] [/usr/local/bin/ceph-osd]
    315,140  /usr/include/c++/4.7/ext/atomicity.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    315,140  osd/OSD.cc:OSD::op_is_discardable(MOSDOp*) [/usr/local/bin/ceph-osd]
    315,140  ./include/buffer.h:int encode_encrypt<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&)
    315,140  /usr/include/c++/4.7/bits/basic_string.h:object_locator_t::~object_locator_t()
    315,140  ./include/buffer.h:void encode_encrypt_enc_bl<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&)
    315,140  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::__weak_count<(__gnu_cxx::_Lock_policy)2>::operator=(std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2> const&) [/usr/local/bin/ceph-osd]
    313,474  /usr/include/c++/4.7/backward/hashtable.h:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::resize(unsigned long) [/usr/local/bin/ceph-osd]
    311,384  /usr/include/c++/4.7/bits/char_traits.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::equal_range(coll_t const&)
    307,776  ./osd/osd_types.h:object_stat_collection_t::operator=(object_stat_collection_t const&)
    307,021  ./include/xlist.h:xlist<OpRequest*>::remove(xlist<OpRequest*>::item*) [/usr/local/bin/ceph-osd]
    305,307  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::erase(PG* const&) [/usr/local/bin/ceph-osd]
    304,290  ./common/Mutex.h:DispatchQueue::enqueue(Message*, int, unsigned long)
    304,020  ./common/Throttle.h:Throttle::_wait(long)
    303,990  /usr/include/atomic_ops/sysdeps/gcc/../aligned_atomic_load_store.h:Throttle::put(long)
    303,885  ./os/hobject.h:operator==(hobject_t const&, hobject_t const&) [/usr/local/bin/ceph-osd]
    303,885  ???:hobject_t::~hobject_t() [/usr/local/bin/ceph-osd]
    303,885  ./include/buffer.h:void std::__uninitialized_fill_n<false>::__uninit_fill_n<OSDOp*, unsigned long, OSDOp>(OSDOp*, unsigned long, OSDOp const&)
    303,885  os/chain_xattr.h:chain_fgetxattr(int, char const*, void*, unsigned long)
    303,569  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::equal_range(std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&) [/usr/local/bin/ceph-osd]
    302,465  ./osd/osd_types.h:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::find_or_insert(std::pair<pg_t const, PG*> const&)
    300,190  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > const&) [/usr/local/bin/ceph-osd]
    296,852  osd/OSD.h:OSD::pg_stat_queue_enqueue(PG*) [/usr/local/bin/ceph-osd]
    294,294  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_erase(std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*) [/usr/local/bin/ceph-osd]
    293,020  msg/SimpleMessenger.cc:SimpleMessenger::dispatch_throttle_release(unsigned long) [/usr/local/bin/ceph-osd]
    292,669  /build/buildd/libcrypto++-5.6.1/misc.h:CryptoPP::FilterWithBufferedInput::BlockQueue::GetContigousBlocks(unsigned long&)
    292,669  /build/buildd/libcrypto++-5.6.1/algparam.cpp:CryptoPP::AlgorithmParameters::AlgorithmParameters(CryptoPP::AlgorithmParameters const&) [/usr/lib/libcrypto++.so.9.0.0]
    292,669  /build/buildd/libcrypto++-5.6.1/cryptlib.cpp:CryptoPP::BufferedTransformation::ChannelCreatePutSpace(std::string const&, unsigned long&) [/usr/lib/libcrypto++.so.9.0.0]
    292,648  /usr/include/c++/4.7/bits/stl_tree.h:OpHistory::cleanup(utime_t)
    292,630  osd/OpRequest.cc:OpTracker::unregister_inflight_op(OpRequest*) [/usr/local/bin/ceph-osd]
    292,630  osd/PG.cc:PG::can_discard_request(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    292,630  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<hobject_t const, ObjectContext*> >, std::pair<hobject_t const, ObjectContext*> const&) [/usr/local/bin/ceph-osd]
    292,630  ./messages/MOSDOpReply.h:MOSDOpReply::claim_op_out_data(std::vector<OSDOp, std::allocator<OSDOp> >&) [/usr/local/bin/ceph-osd]
    292,630  osd/osd_types.h:ObjectContext::ondisk_read_lock() [/usr/local/bin/ceph-osd]
    292,630  /usr/include/x86_64-linux-gnu/bits/string3.h:OSDOp::OSDOp()
    292,630  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::BlockQueue::GetBlock() [/usr/lib/libcrypto++.so.9.0.0]
    292,630  os/CollectionIndex.h:CollectionIndex::Path::Path(std::string, std::tr1::weak_ptr<CollectionIndex>) [/usr/local/bin/ceph-osd]
    289,536  osd/OSD.cc:OSD::_lookup_lock_pg(pg_t) [/usr/local/bin/ceph-osd]
    282,793  /build/buildd/eglibc-2.17/nptl/pthread_mutex_unlock.c:_L_unlock_571 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    282,128  ./include/encoding.h:pg_t::encode(ceph::buffer::list&) const
    281,750  msg/Pipe.cc:Pipe::handle_ack(unsigned long) [/usr/local/bin/ceph-osd]
    281,375  /usr/include/c++/4.7/tr1/shared_ptr.h:OpHistory::cleanup(utime_t)
    281,375  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<hobject_t const, ObjectContext*> >, std::_Rb_tree_const_iterator<std::pair<hobject_t const, ObjectContext*> >) [/usr/local/bin/ceph-osd]
    281,375  ./msg/Message.h:Message::get_source_addr() const [/usr/local/bin/ceph-osd]
    280,569  ???:tcmalloc::CentralFreeList::InsertRange(void*, void*, int) [/usr/lib/libtcmalloc.so.4.1.0]
    280,316  /usr/include/c++/4.7/bits/stl_map.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    271,509  /usr/include/c++/4.7/bits/stl_list.h:void std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_initialize_dispatch<std::_List_const_iterator<ceph::buffer::ptr> >(std::_List_const_iterator<ceph::buffer::ptr>, std::_List_const_iterator<ceph::buffer::ptr>, std::__false_type) [clone .isra.216] [/usr/local/bin/ceph-osd]
    271,248  ./log/SubsystemMap.h:OSD::do_waiters()
    270,500  /usr/include/x86_64-linux-gnu/bits/poll2.h:Pipe::tcp_read_wait()
    270,480  msg/Message.h:decode_message(CephContext*, ceph_msg_header&, ceph_msg_footer&, ceph::buffer::list&, ceph::buffer::list&, ceph::buffer::list&)
    270,124  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::append(ceph::buffer::list const&)
    270,120  msg/Message.h:Message::get_source_inst() const
    270,120  /usr/include/c++/4.7/tr1/shared_ptr.h:FileStore::lfn_open(coll_t, hobject_t const&, int, unsigned int, std::tr1::shared_ptr<CollectionIndex::Path>*, std::tr1::shared_ptr<CollectionIndex>*)
    270,120  /usr/include/c++/4.7/bits/basic_string.h:object_info_t::~object_info_t()
    270,120  ./include/encoding.h:eversion_t::decode(ceph::buffer::list::iterator&)
    270,120  ./include/encoding.h:void decode<snapid_t>(std::vector<snapid_t, std::allocator<snapid_t> >&, ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
    270,120  ./common/WorkQueue.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::queue(std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >) [/usr/local/bin/ceph-osd]
    270,120  /usr/include/c++/4.7/bits/basic_string.h:object_t::~object_t()
    270,120  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::_Sp_counted_base_impl<HashIndex*, IndexManager::RemoveOnDelete, (__gnu_cxx::_Lock_policy)2>::~_Sp_counted_base_impl() [/usr/local/bin/ceph-osd]
    270,120  /usr/include/c++/4.7/bits/basic_string.h:coll_t::~coll_t()
    270,120  /usr/include/c++/4.7/bits/stl_tree.h:ReplicatedPG::OpContext::OpContext(std::tr1::shared_ptr<OpRequest>, osd_reqid_t, std::vector<OSDOp, std::allocator<OSDOp> >&, ObjectState*, SnapSetContext*, ReplicatedPG*)
    270,120  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.161] [/usr/local/bin/ceph-osd]
    270,120  /build/buildd/libcrypto++-5.6.1/cryptlib.h:CryptoPP::StreamTransformationFilter::NextPutModifiable(unsigned char*, unsigned long)
    270,081  ./include/utime.h:ThreadPool::worker(ThreadPool::WorkThread*)
    269,304  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::string, std::pair<std::string const, object_stat_sum_t>, std::_Select1st<std::pair<std::string const, object_stat_sum_t> >, std::less<std::string>, std::allocator<std::pair<std::string const, object_stat_sum_t> > >::operator=(std::_Rb_tree<std::string, std::pair<std::string const, object_stat_sum_t>, std::_Select1st<std::pair<std::string const, object_stat_sum_t> >, std::less<std::string>, std::allocator<std::pair<std::string const, object_stat_sum_t> > > const&) [/usr/local/bin/ceph-osd]
    267,409  ./common/Mutex.h:OSD::sched_scrub()
    263,070  /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/syscall-template.S:sendmsg [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    262,965  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::string, std::pair<std::string const, object_stat_sum_t>, std::_Select1st<std::pair<std::string const, object_stat_sum_t> >, std::less<std::string>, std::allocator<std::pair<std::string const, object_stat_sum_t> > >::_M_erase(std::_Rb_tree_node<std::pair<std::string const, object_stat_sum_t> >*) [/usr/local/bin/ceph-osd]
    260,064  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >::_M_clear() [/usr/local/bin/ceph-osd]
    259,667  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::_Rb_tree_const_iterator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >) [/usr/local/bin/ceph-osd]
    259,210  ./common/Mutex.h:SimpleMessenger::_send_message(Message*, Connection*, bool)
    258,865  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process_finish(void*)
    258,865  osd/OSD.cc:OSD::dispatch_op(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    258,865  ./common/WorkQueue.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process_finish(void*) [/usr/local/bin/ceph-osd]
    258,865  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process(void*, ThreadPool::TPHandle&)
    258,865  osd/osd_types.h:ObjectContext::ondisk_read_unlock() [/usr/local/bin/ceph-osd]
    258,865  osd/OpRequest.cc:OpTracker::RemoveOnDelete::operator()(OpRequest*) [/usr/local/bin/ceph-osd]
    258,865  ./include/utime.h:OpHistory::cleanup(utime_t)
    252,627  /usr/include/c++/4.7/bits/stl_tree.h:DispatchQueue::enqueue(Message*, int, unsigned long)
    252,581  msg/Pipe.h:Pipe::writer()
    251,345  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<Message*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > >, std::_Select1st<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > >, std::less<Message*>, std::allocator<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > > >::find(Message* const&)
    248,996  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, PG*, std::_Identity<PG*>, std::less<PG*>, std::allocator<PG*> >::_M_erase(std::_Rb_tree_node<PG*>*) [/usr/local/bin/ceph-osd]
    248,996  /usr/include/c++/4.7/bits/stl_tree.h:OSD::PeeringWQ::_dequeue(std::list<PG*, std::allocator<PG*> >*)
    248,996  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<PG*, std::allocator<PG*> >::_M_clear() [/usr/local/bin/ceph-osd]
    247,687  /build/buildd/eglibc-2.17/elf/dl-tls.c:__tls_get_addr [/lib/x86_64-linux-gnu/ld-2.17.so]
    247,643  /usr/include/cryptopp/modes.h:CryptoPP::BlockOrientedCipherModeBase::Resynchronize(unsigned char const*, int) [/usr/local/bin/ceph-osd]
    247,643  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::StreamTransformationFilter::FirstPut(unsigned char const*) [/usr/lib/libcrypto++.so.9.0.0]
    247,622  /build/buildd/libcrypto++-5.6.1/filters.h:CryptoPP::StreamTransformationFilter::LastPut(unsigned char const*, unsigned long)
    247,621  /usr/include/cryptopp/seckey.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    247,610  ./include/encoding.h:pg_t::decode(ceph::buffer::list::iterator&)
    247,610  os/IndexManager.cc:IndexManager::put_index(coll_t) [/usr/local/bin/ceph-osd]
    247,610  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<OpRequest*, OpTracker::RemoveOnDelete>(OpRequest*, OpTracker::RemoveOnDelete) [/usr/local/bin/ceph-osd]
    247,610  /build/buildd/eglibc-2.17/misc/../sysdeps/unix/syscall-template.S:fgetxattr [/lib/x86_64-linux-gnu/libc-2.17.so]
    247,610  /usr/include/c++/4.7/ext/atomicity.h:OpHistory::cleanup(utime_t)
    247,610  /usr/include/c++/4.7/ext/atomicity.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    247,610  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<watch_info_t, std::allocator<watch_info_t> >::_M_clear() [/usr/local/bin/ceph-osd]
    247,610  /usr/include/c++/4.7/bits/stl_tree.h:IndexManager::put_index(coll_t)
    247,610  ./common/PrioritizedQueue.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue::pop_front() [/usr/local/bin/ceph-osd]
    247,610  ./include/encoding.h:CephxSessionHandler::check_message_signature(Message*)
    247,191  ./common/PrioritizedQueue.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue::enqueue(entity_inst_t, unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >) [/usr/local/bin/ceph-osd]
    239,296  /usr/include/c++/4.7/bits/stl_tree.h:Pipe::writer()
    236,460  /usr/include/atomic_ops/sysdeps/gcc/../aligned_atomic_load_store.h:Throttle::_wait(long)
    236,355  /usr/include/c++/4.7/tr1/shared_ptr.h:PG::do_request(std::tr1::shared_ptr<OpRequest>)
    236,355  osd/OSD.cc:OSD::_lookup_pg(pg_t) [/usr/local/bin/ceph-osd]
    236,355  osd/osd_types.h:object_locator_t::~object_locator_t()
    236,355  /usr/include/c++/4.7/bits/stl_map.h:std::map<hobject_t, ObjectContext*, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::operator[](hobject_t const&) [/usr/local/bin/ceph-osd]
    236,355  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::operator=(std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > > const&) [/usr/local/bin/ceph-osd]
    234,204  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<int, std::allocator<int> >::operator=(std::vector<int, std::allocator<int> > const&)
    225,486  /usr/include/atomic_ops/sysdeps/gcc/../aligned_atomic_load_store.h:ceph::HeartbeatMap::_check(ceph::heartbeat_handle_d*, char const*, long)
    225,468  /usr/include/c++/4.7/bits/stl_vector.h:PerfCounters::set(int, unsigned long)
    225,453  ./include/buffer.h:std::_List_base<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_clear()
    225,446  /usr/include/atomic_ops/sysdeps/gcc/x86.h:ceph::buffer::ptr::operator=(ceph::buffer::ptr const&)
    225,420  /usr/include/c++/4.7/bits/list.tcc:std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::operator=(std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> > const&) [/usr/local/bin/ceph-osd]
    225,372  /usr/include/atomic_ops/sysdeps/gcc/x86.h:intrusive_ptr_release(RefCountedObject*) [/usr/local/bin/ceph-osd]
    225,150  /build/buildd/libcrypto++-5.6.1/misc.cpp:CryptoPP::AlignedAllocate(unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
    225,140  /usr/include/cryptopp/modes.h:non-virtual thunk to CryptoPP::BlockOrientedCipherModeBase::MandatoryBlockSize() const [/usr/local/bin/ceph-osd]
    225,130  /usr/include/c++/4.7/bits/basic_string.h:CryptoPP::StringSinkTemplate<std::string>::Put2(unsigned char const*, unsigned long, int, bool)
    225,130  /build/buildd/libcrypto++-5.6.1/filters.cpp:CryptoPP::FilterWithBufferedInput::FilterWithBufferedInput(CryptoPP::BufferedTransformation*) [/usr/lib/libcrypto++.so.9.0.0]
    225,130  /usr/include/cryptopp/modes.h:CryptoPP::BlockOrientedCipherModeBase::IsForwardTransformation() const [/usr/local/bin/ceph-osd]
    225,130  /build/buildd/libcrypto++-5.6.1/secblock.h:CryptoPP::FilterWithBufferedInput::PutMaybeModifiable(unsigned char*, unsigned long, int, bool, bool)
    225,110  /usr/include/cryptopp/modes.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    225,100  osd/osd_types.h:object_info_t::object_info_t(ceph::buffer::list&)
    225,100  /usr/include/c++/4.7/ext/new_allocator.h:OpRequest::mark_event(std::string const&)
    225,100  ./osd/OSDMap.h:OSDMap::raw_pg_to_pg(pg_t) const [/usr/local/bin/ceph-osd]
    225,100  ./common/Mutex.h:OpRequest::mark_event(std::string const&)
    225,100  /usr/include/x86_64-linux-gnu/bits/stdio2.h:FileStore::get_cdir(coll_t, char*, int)
    225,100  ./include/encoding.h:CephxSessionHandler::sign_message(Message*)
    225,100  osd/ReplicatedPG.h:ReplicatedPG::OpContext::~OpContext() [/usr/local/bin/ceph-osd]
    225,100  /usr/include/x86_64-linux-gnu/bits/stdio2.h:HashIndex::get_path_components(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*)
    225,100  os/CollectionIndex.h:CollectionIndex::Path::~Path() [/usr/local/bin/ceph-osd]
    225,100  osd/OpRequest.h:OpRequest::mark_started() [/usr/local/bin/ceph-osd]
    225,100  /usr/include/c++/4.7/ext/atomicity.h:PG::get_osdmap() const
    225,100  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:ReplicatedPG::do_osd_op_effects(ReplicatedPG::OpContext*)
    225,100  ./osd/OpRequest.h:OpRequest::mark_reached_pg() [/usr/local/bin/ceph-osd]
    224,224  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_erase(std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*)'2 [/usr/local/bin/ceph-osd]
    222,778  osd/OSD.h:OSD::sched_scrub()
    219,135  msg/Pipe.cc:Pipe::write_ack(unsigned long) [/usr/local/bin/ceph-osd]
    216,809  ./include/utime.h:operator-(utime_t const&, utime_t const&) [/usr/local/bin/ceph-osd]
    214,175  msg/Message.cc:Message::encode(unsigned long, bool) [/usr/local/bin/ceph-osd]
    214,147  msg/Messenger.h:DispatchQueue::entry()
    214,130  /usr/include/x86_64-linux-gnu/bits/string3.h:MonClient::ms_dispatch(Message*)
    213,845  /usr/include/boost/tuple/tuple_comparison.hpp:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::equal_range(hobject_t const&)
    213,845  /usr/include/c++/4.7/tr1/shared_ptr.h:OSD::OpWQ::_process(boost::intrusive_ptr<PG>)
    213,845  osd/OpRequest.cc:OpHistory::insert(utime_t, std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    213,845  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::find(hobject_t const&) const [/usr/local/bin/ceph-osd]
    213,845  ./log/SubsystemMap.h:ReplicatedPG::do_osd_ops(ReplicatedPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)
    213,845  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::find(hobject_t const&) [/usr/local/bin/ceph-osd]
    212,448  ???:0x0000000000074bb0 [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    212,226  ./log/SubsystemMap.h:ThreadPool::worker(ThreadPool::WorkThread*)
    206,380  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::find(PG* const&) const
    206,192  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::_M_insert_unique(std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > const&) [/usr/local/bin/ceph-osd]
    202,932  /usr/include/atomic_ops/sysdeps/gcc/x86.h:ceph::buffer::ptr::ptr(ceph::buffer::ptr const&, unsigned int, unsigned int)
    202,860  msg/Message.h:Message::set_middle(ceph::buffer::list&) [/usr/local/bin/ceph-osd]
    202,860  msg/Message.h:Message::set_payload(ceph::buffer::list&) [/usr/local/bin/ceph-osd]
    202,710  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::claim_append(ceph::buffer::list&)
    202,680  ./log/SubsystemMap.h:Throttle::get(long, long)
    202,619  /build/buildd/libcrypto++-5.6.1/misc.cpp:CryptoPP::UnalignedDeallocate(void*) [/usr/lib/libcrypto++.so.9.0.0]
    202,617  /build/buildd/eglibc-2.17/nptl/pthread_mutexattr_settype.c:pthread_mutexattr_settype [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    202,617  /usr/include/x86_64-linux-gnu/bits/string3.h:CryptoPP::Rijndael::Base::UncheckedSetKey(unsigned char const*, unsigned int, CryptoPP::NameValuePairs const&)
    202,617  /usr/include/cryptopp/seckey.h:CryptoPP::SimpleKeyingInterfaceImpl<CryptoPP::TwoBases<CryptoPP::BlockCipher, CryptoPP::Rijndael_Info>, CryptoPP::TwoBases<CryptoPP::BlockCipher, CryptoPP::Rijndael_Info> >::GetValidKeyLength(unsigned long) const [/usr/local/bin/ceph-osd]
    202,608  /build/buildd/eglibc-2.17/nptl/pthread_mutex_destroy.c:pthread_mutex_destroy [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    202,608  /build/buildd/libcrypto++-5.6.1/filters.h:CryptoPP::FilterWithBufferedInput::Put2(unsigned char const*, unsigned long, int, bool) [/usr/lib/libcrypto++.so.9.0.0]
    202,599  /usr/include/cryptopp/modes.h:CryptoPP::CipherModeBase::SetFeedbackSize(unsigned int) [/usr/local/bin/ceph-osd]
    202,596  /usr/include/c++/4.7/bits/basic_string.h:decode(std::string&, ceph::buffer::list::iterator&)
    202,590  /usr/include/c++/4.7/bits/basic_string.h:object_locator_t::decode(ceph::buffer::list::iterator&)
    202,590  /usr/include/c++/4.7/bits/stl_vector.h:MOSDOpReply::claim_op_out_data(std::vector<OSDOp, std::allocator<OSDOp> >&)
    202,590  /usr/include/c++/4.7/bits/stl_list.h:std::list<boost::intrusive_ptr<PG>, std::allocator<boost::intrusive_ptr<PG> > >::_M_create_node(boost::intrusive_ptr<PG> const&) [clone .isra.2231] [/usr/local/bin/ceph-osd]
    202,590  os/LFNIndex.h:LFNIndex::~LFNIndex() [/usr/local/bin/ceph-osd]
    202,590  ./include/encoding.h:int encode_encrypt<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&)
    202,590  osd/OSDMap.h:OSDMap::get_pg_size(pg_t) const [clone .isra.1176] [/usr/local/bin/ceph-osd]
    202,590  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() [clone .part.281] [/usr/local/bin/ceph-osd]
    202,590  os/IndexManager.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    202,590  /usr/include/c++/4.7/bits/stl_list.h:ObjectStore::Transaction::Transaction()
    202,590  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::_Sp_counted_base_impl<HashIndex*, IndexManager::RemoveOnDelete, (__gnu_cxx::_Lock_policy)2>::_M_dispose() [/usr/local/bin/ceph-osd]
    202,590  /usr/include/c++/4.7/bits/stl_tree.h:object_info_t::decode(ceph::buffer::list::iterator&)
    202,590  os/LFNIndex.cc:LFNIndex::maybe_inject_failure() [/usr/local/bin/ceph-osd]
    202,590  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::_Sp_counted_base_impl<CollectionIndex::Path*, std::tr1::_Sp_deleter<CollectionIndex::Path>, (__gnu_cxx::_Lock_policy)2>::_M_dispose() [/usr/local/bin/ceph-osd]
    202,590  osd/osd_types.h:object_info_t::decode(ceph::buffer::list::iterator&)
    202,590  /usr/include/c++/4.7/tr1/shared_ptr.h:OSD::enqueue_op(PG*, std::tr1::shared_ptr<OpRequest>)
    202,590  ./osd/osd_types.h:object_info_t::~object_info_t()
    200,832  osd/osd_types.h:pg_stat_t::pg_stat_t(pg_stat_t const&) [/usr/local/bin/ceph-osd]
    200,658  /usr/include/c++/4.7/bits/stl_map.h:DispatchQueue::enqueue(Message*, int, unsigned long)
    198,924  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<Message*, std::allocator<Message*> >::_M_clear() [/usr/local/bin/ceph-osd]
    198,444  /usr/include/c++/4.7/bits/stl_tree.h:PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::dequeue()
    192,915  common/Cond.h:ThreadPool::worker(ThreadPool::WorkThread*)
    191,593  ./include/buffer.h:Pipe::writer()
    191,590  msg/Message.h:Pipe::read_message(Message**)
    191,590  msg/Pipe.h:Pipe::_send(Message*) [/usr/local/bin/ceph-osd]
    191,335  /usr/include/boost/tuple/tuple_comparison.hpp:bool boost::tuples::detail::lt<boost::tuples::cons<unsigned long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > >, boost::tuples::cons<unsigned long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > > >(boost::tuples::cons<unsigned long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > > const&, boost::tuples::cons<unsigned long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > > const&) [/usr/local/bin/ceph-osd]
    191,335  /usr/include/c++/4.7/bits/stl_vector.h:OSDOp::merge_osd_op_vector_out_data(std::vector<OSDOp, std::allocator<OSDOp> >&, ceph::buffer::list&)
    191,335  osd/OSDCap.cc:OSDCapMatch::is_match(std::string const&, long, std::string const&) const [/usr/local/bin/ceph-osd]
    191,335  ./messages/MOSDOp.h:ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)
    191,335  osd/PG.cc:PG::do_pending_flush() [/usr/local/bin/ceph-osd]
    191,335  /usr/include/c++/4.7/bits/stl_construct.h:void std::_Destroy_aux<false>::__destroy<pg_log_entry_t*>(pg_log_entry_t*, pg_log_entry_t*) [/usr/local/bin/ceph-osd]
    191,335  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<OpRequest*>(OpRequest*) [/usr/local/bin/ceph-osd]
    191,335  osd/OpRequest.cc:OpTracker::register_inflight_op(xlist<OpRequest*>::item*) [/usr/local/bin/ceph-osd]
    191,335  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<OSDOp, std::allocator<OSDOp> >::_M_check_len(unsigned long, char const*) const [/usr/local/bin/ceph-osd]
    186,837  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<int const, std::list<Message*, std::allocator<Message*> > > const&) [/usr/local/bin/ceph-osd]
    184,212  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<entity_inst_t, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > >, std::_Select1st<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::_M_insert_unique(std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > const&) [/usr/local/bin/ceph-osd]
    184,212  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<entity_inst_t, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > >, std::_Select1st<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::_Rb_tree(std::_Rb_tree<entity_inst_t, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > >, std::_Select1st<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > > const&) [/usr/local/bin/ceph-osd]
    183,152  ./include/utime.h:ReplicatedPG::log_op_stats(ReplicatedPG::OpContext*)
    180,435  ./include/encoding.h:object_stat_sum_t::encode(ceph::buffer::list&) const
    180,320  ./include/utime.h:Message::Message(int, int, int)
    180,104  /usr/include/cryptopp/cryptlib.h:CryptoPP::SimpleKeyingInterface::IsValidKeyLength(unsigned long) const [/usr/local/bin/ceph-osd]
    180,104  /build/buildd/libcrypto++-5.6.1/cryptlib.h:CryptoPP::StreamTransformationFilter::LastPut(unsigned char const*, unsigned long)
    180,095  /build/buildd/libcrypto++-5.6.1/algparam.h:CryptoPP::FilterWithBufferedInput::PutMaybeModifiable(unsigned char*, unsigned long, int, bool, bool)
    180,092  /build/buildd/libcrypto++-5.6.1/misc.h:CryptoPP::FilterWithBufferedInput::BlockQueue::Put(unsigned char const*, unsigned long)
    180,092  /build/buildd/libcrypto++-5.6.1/secblock.h:CryptoPP::FilterWithBufferedInput::BlockQueue::Put(unsigned char const*, unsigned long)
    180,080  /usr/include/c++/4.7/bits/basic_string.h:ReplicatedPG::find_object_context(hobject_t const&, object_locator_t const&, ObjectContext**, bool, snapid_t*)
    180,080  /usr/include/c++/4.7/bits/basic_string.h:FileStore::lfn_open(coll_t, hobject_t const&, int)
    180,080  /usr/include/c++/4.7/bits/stl_list.h:OpRequest::mark_event(std::string const&)
    180,080  /usr/include/c++/4.7/bits/stl_vector.h:OpTracker::_mark_event(OpRequest*, std::string const&, utime_t)
    180,080  /build/buildd/eglibc-2.17/nptl/../sysdeps/unix/syscall-template.S:pread [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    180,080  /usr/include/c++/4.7/bits/stl_tree.h:void decode<std::pair<unsigned long, entity_name_t>, watch_info_t>(std::map<std::pair<unsigned long, entity_name_t>, watch_info_t, std::less<std::pair<unsigned long, entity_name_t> >, std::allocator<std::pair<std::pair<unsigned long, entity_name_t> const, watch_info_t> > >&, ceph::buffer::list::iterator&)
    180,080  ./common/Cond.h:IndexManager::put_index(coll_t)
    180,080  /usr/include/c++/4.7/bits/basic_string.h:FileStore::lfn_open(coll_t, hobject_t const&, int, unsigned int)
    180,080  /usr/include/c++/4.7/bits/basic_string.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_erase(std::_Rb_tree_node<std::pair<hobject_t const, ObjectContext*> >*)
    180,080  /usr/include/x86_64-linux-gnu/bits/stdio2.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    180,080  os/IndexManager.h:std::tr1::_Sp_counted_base_impl<HashIndex*, IndexManager::RemoveOnDelete, (__gnu_cxx::_Lock_policy)2>::_M_dispose()
    180,080  ./osd/osd_types.h:OSDOp* std::__uninitialized_copy<false>::__uninit_copy<__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*>(__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, __gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*)
    180,080  /usr/include/c++/4.7/bits/basic_string.h:std::tr1::_Sp_counted_base_impl<HashIndex*, IndexManager::RemoveOnDelete, (__gnu_cxx::_Lock_policy)2>::_M_dispose()
    180,080  /usr/include/c++/4.7/bits/basic_string.h:std::tr1::_Sp_counted_base_impl<HashIndex*, IndexManager::RemoveOnDelete, (__gnu_cxx::_Lock_policy)2>::~_Sp_counted_base_impl()
    180,080  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<unsigned long, entity_name_t>, std::pair<std::pair<unsigned long, entity_name_t> const, std::tr1::shared_ptr<Watch> >, std::_Select1st<std::pair<std::pair<unsigned long, entity_name_t> const, std::tr1::shared_ptr<Watch> > >, std::less<std::pair<unsigned long, entity_name_t> >, std::allocator<std::pair<std::pair<unsigned long, entity_name_t> const, std::tr1::shared_ptr<Watch> > > >::_M_erase(std::_Rb_tree_node<std::pair<std::pair<unsigned long, entity_name_t> const, std::tr1::shared_ptr<Watch> > >*) [/usr/local/bin/ceph-osd]
    180,080  /usr/include/c++/4.7/bits/basic_string.h:HashIndex::_lookup(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*, std::string*, int*)
    180,080  /usr/include/c++/4.7/bits/stl_list.h:void std::__uninitialized_fill_n<false>::__uninit_fill_n<OSDOp*, unsigned long, OSDOp>(OSDOp*, unsigned long, OSDOp const&)
    180,080  os/FileStore.cc:FileStore::lfn_close(int) [/usr/local/bin/ceph-osd]
    180,080  /usr/include/c++/4.7/bits/stl_tree.h:void decode<entity_name_t, watch_info_t>(std::map<entity_name_t, watch_info_t, std::less<entity_name_t>, std::allocator<std::pair<entity_name_t const, watch_info_t> > >&, ceph::buffer::list::iterator&)
    180,080  /usr/include/c++/4.7/bits/stl_list.h:ObjectStore::Transaction::~Transaction()
    179,360  common/buffer.cc:ceph::buffer::list::iterator::copy_in(unsigned int, char const*) [/usr/local/bin/ceph-osd]
    179,296  /usr/include/c++/4.7/bits/stl_list.h:void std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >::_M_initialize_dispatch<std::_List_const_iterator<std::tr1::shared_ptr<OpRequest> > >(std::_List_const_iterator<std::tr1::shared_ptr<OpRequest> >, std::_List_const_iterator<std::tr1::shared_ptr<OpRequest> >, std::__false_type) [clone .isra.1639] [/usr/local/bin/ceph-osd]
    179,296  /usr/include/c++/4.7/bits/stl_list.h:void std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >::_M_initialize_dispatch<std::_List_const_iterator<std::tr1::shared_ptr<OpRequest> > >(std::_List_const_iterator<std::tr1::shared_ptr<OpRequest> >, std::_List_const_iterator<std::tr1::shared_ptr<OpRequest> >, std::__false_type) [clone .isra.2044] [/usr/local/bin/ceph-osd]
    179,183  osd/OpRequest.cc:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_unique(std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&)
    178,794  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<entity_inst_t, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > >, std::_Select1st<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::_M_erase(std::_Rb_tree_node<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >*) [/usr/local/bin/ceph-osd]
    177,471  /build/buildd/eglibc-2.17/nptl/pthread_mutex_lock.c:pthread_mutex_lock [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    176,512  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<int, std::allocator<int> >::vector(std::vector<int, std::allocator<int> > const&) [/usr/local/bin/ceph-osd]
    176,460  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> > >::erase(unsigned int const&) [/usr/local/bin/ceph-osd]
    176,386  /usr/include/c++/4.7/bits/stl_list.h:Pipe::writer()
    169,714  ./osd/osd_types.h:OSD::handle_pg_stats_ack(MPGStatsAck*)
    169,050  ./log/SubsystemMap.h:DispatchQueue::entry()
    168,830  ./common/Mutex.h:Connection::get_priv()
    168,825  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_erase(std::_Rb_tree_node<std::pair<hobject_t const, ObjectContext*> >*)'2 [/usr/local/bin/ceph-osd]
    168,825  ./messages/MOSDOp.h:MOSDOp::~MOSDOp()'2 [/usr/local/bin/ceph-osd]
    168,825  /usr/include/c++/4.7/bits/stl_vector.h:OSDOp::split_osd_op_vector_in_data(std::vector<OSDOp, std::allocator<OSDOp> >&, ceph::buffer::list&)
    168,825  ./log/SubsystemMap.h:FileStore::read(coll_t, hobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, bool)
    168,825  /usr/include/c++/4.7/ext/atomicity.h:std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2>::operator=(std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2> const&)
    168,825  ./osd/osd_types.h:coll_t::~coll_t() [/usr/local/bin/ceph-osd]
    168,825  /usr/include/boost/tuple/detail/tuple_basic.hpp:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::equal_range(hobject_t const&)
    168,825  /usr/include/c++/4.7/bits/stl_vector.h:MOSDOp::decode_payload()
    168,825  osd/OSDMap.cc:OSDMap::is_blacklisted(entity_addr_t const&) const [/usr/local/bin/ceph-osd]
    168,825  ./log/SubsystemMap.h:FileStore::getattr(coll_t, hobject_t const&, char const*, ceph::buffer::ptr&)
    168,825  ./include/utime.h:OSD::enqueue_op(PG*, std::tr1::shared_ptr<OpRequest>)
    168,825  osd/osd_types.h:object_stat_collection_t::add(object_stat_sum_t const&, std::string const&) [/usr/local/bin/ceph-osd]
    168,825  ./include/utime.h:OSD::dequeue_op(boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest>)
    168,825  ./include/object.h:object_t::~object_t() [/usr/local/bin/ceph-osd]
    168,812  ./log/SubsystemMap.h:ceph::HeartbeatMap::reset_timeout(ceph::heartbeat_handle_d*, long, long)
    168,756  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::_M_insert_unique(std::pair<int const, std::list<Message*, std::allocator<Message*> > > const&) [/usr/local/bin/ceph-osd]
    167,958  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<entity_inst_t, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > >, std::_Select1st<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > const&) [/usr/local/bin/ceph-osd]
    167,927  osd/OpRequest.cc:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::equal_range(std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&)
    160,025  ./include/xlist.h:OpTracker::register_inflight_op(xlist<OpRequest*>::item*)
    158,452  ./include/xlist.h:OSD::ScrubFinalizeWQ::_dequeue()
    158,452  ./common/PrioritizedQueue.h:OSD::OpWQ::_empty()
    158,452  osd/OSD.h:OSD::ScrubFinalizeWQ::_dequeue() [/usr/local/bin/ceph-osd]
    158,313  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> const&) [/usr/local/bin/ceph-osd]
    157,630  /usr/include/atomic_ops/sysdeps/gcc/x86.h:Throttle::put(long)
    157,591  /build/buildd/libcrypto++-5.6.1/misc.h:CryptoPP::StreamTransformationFilter::FirstPut(unsigned char const*)
    157,591  ???:std::uncaught_exception() [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    157,591  /usr/include/x86_64-linux-gnu/bits/string3.h:CryptoPP::FilterWithBufferedInput::BlockQueue::GetAll(unsigned char*)
    157,591  /build/buildd/libcrypto++-5.6.1/smartptr.h:CryptoPP::AlgorithmParameters::AlgorithmParameters(CryptoPP::AlgorithmParameters const&)
    157,591  /build/buildd/libcrypto++-5.6.1/smartptr.h:CryptoPP::AlgorithmParameters CryptoPP::MakeParameters<CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme>(char const*, CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme const&, bool)
    157,591  /build/buildd/libcrypto++-5.6.1/cryptlib.h:CryptoPP::StreamTransformationFilter::InitializeDerivedAndReturnNewSizes(CryptoPP::NameValuePairs const&, unsigned long&, unsigned long&, unsigned long&)
    157,577  /usr/include/cryptopp/misc.h:CryptoPP::BlockOrientedCipherModeBase::ResizeBuffers()
    157,570  ./osd/osd_types.h:object_locator_t::~object_locator_t() [/usr/local/bin/ceph-osd]
    157,570  osd/OpRequest.h:OpRequest::~OpRequest()'2 [/usr/local/bin/ceph-osd]
    157,570  ???:std::string::assign(char const*) [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
    157,570  osd/ReplicatedPG.cc:ReplicatedPG::put_object_contexts(std::map<hobject_t, ObjectContext*, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >&) [/usr/local/bin/ceph-osd]
    157,570  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:OSD::enqueue_op(PG*, std::tr1::shared_ptr<OpRequest>)
    157,570  osd/PG.cc:PG::must_delay_request(std::tr1::shared_ptr<OpRequest>) [/usr/local/bin/ceph-osd]
    157,570  ./os/ObjectStore.h:ObjectStore::Transaction::~Transaction() [/usr/local/bin/ceph-osd]
    157,570  os/HashIndex.h:HashIndex::~HashIndex() [/usr/local/bin/ceph-osd]
    157,570  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<OSDOp, std::allocator<OSDOp> >::operator=(std::vector<OSDOp, std::allocator<OSDOp> > const&)
    157,570  /usr/include/boost/tuple/detail/tuple_basic.hpp:operator<(hobject_t const&, hobject_t const&)
    157,470  /usr/include/c++/4.7/ext/new_allocator.h:HashIndex::_lookup(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*, std::string*, int*)
    157,151  /usr/include/c++/4.7/bits/stl_map.h:std::map<unsigned int, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::operator[](unsigned int const&) [/usr/local/bin/ceph-osd]
    157,122  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> const&) [/usr/local/bin/ceph-osd]
    156,732  /usr/include/c++/4.7/bits/stl_iterator.h:std::reverse_iterator<std::_Rb_tree_iterator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::operator*() const [/usr/local/bin/ceph-osd]
    156,313  /usr/include/c++/4.7/bits/stl_tree.h:std::map<unsigned int, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::operator[](unsigned int const&)
    155,951  /usr/include/atomic_ops/sysdeps/gcc/../aligned_atomic_load_store.h:ceph::HeartbeatMap::reset_timeout(ceph::heartbeat_handle_d*, long, long)
    153,322  ???:tcmalloc::CentralFreeList::ReleaseListToSpans(void*) [/usr/lib/libtcmalloc.so.4.1.0]
    152,550  /usr/include/c++/4.7/bits/stl_map.h:std::map<int, std::list<Message*, std::allocator<Message*> >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::operator[](int const&) [/usr/local/bin/ceph-osd]
    151,704  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::_M_insert_unique(std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> const&) [/usr/local/bin/ceph-osd]
    150,675  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::pair<int const, std::list<Message*, std::allocator<Message*> > > const&) [/usr/local/bin/ceph-osd]
    149,259  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > >::_M_clear() [/usr/local/bin/ceph-osd]
    149,259  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned long, std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > >, std::_Select1st<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > > >::_M_erase(std::_Rb_tree_node<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > >*) [/usr/local/bin/ceph-osd]
    148,667  /usr/include/c++/4.7/bits/basic_string.h:MOSDOp::~MOSDOp()'2
    146,536  ./include/buffer.h:ceph::buffer::list::push_back(ceph::buffer::ptr const&) [clone .part.215] [/usr/local/bin/ceph-osd]
    146,510  msg/Message.h:Message::encode(unsigned long, bool)
    146,510  ./msg/Message.h:Message::set_data(ceph::buffer::list const&) [/usr/local/bin/ceph-osd]
    146,328  /build/buildd/eglibc-2.17/nptl/pthread_cond_init.c:pthread_cond_init@@GLIBC_2.3.2 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    146,315  osd/ReplicatedPG.h:ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)
    146,315  ./log/SubsystemMap.h:CephxSessionHandler::sign_message(Message*)
    146,315  /usr/include/c++/4.7/bits/stl_set.h:OpHistory::insert(utime_t, std::tr1::shared_ptr<OpRequest>)
    146,315  ./msg/Message.h:Message::clear_payload() [/usr/local/bin/ceph-osd]
    146,315  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >) [/usr/local/bin/ceph-osd]
    146,315  ./include/buffer.h:ceph::buffer::list::push_back(ceph::buffer::ptr const&) [clone .part.1501] [/usr/local/bin/ceph-osd]
    146,315  osd/osd_types.h:object_stat_collection_t::calc_copies(int) [/usr/local/bin/ceph-osd]
    146,315  ./messages/MOSDOpReply.h:MOSDOpReply::~MOSDOpReply()'2 [/usr/local/bin/ceph-osd]
    146,315  ./common/WorkQueue.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_process(boost::intrusive_ptr<PG>, ThreadPool::TPHandle&) [/usr/local/bin/ceph-osd]
    146,315  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<double, std::tr1::shared_ptr<OpRequest> > > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<double, std::tr1::shared_ptr<OpRequest> > >) [/usr/local/bin/ceph-osd]
    145,678  /usr/include/c++/4.7/bits/stl_list.h:std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >::list(std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > const&) [/usr/local/bin/ceph-osd]
    145,116  ./log/SubsystemMap.h:OSD::sched_scrub()
    144,648  /usr/include/c++/4.7/bits/stl_list.h:void std::list<Message*, std::allocator<Message*> >::_M_initialize_dispatch<std::_List_const_iterator<Message*> >(std::_List_const_iterator<Message*>, std::_List_const_iterator<Message*>, std::__false_type) [clone .isra.207] [/usr/local/bin/ceph-osd]
    141,408  ./common/PrioritizedQueue.h:DispatchQueue::entry()
    140,868  /usr/include/c++/4.7/bits/stl_pair.h:std::map<entity_inst_t, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::operator[](entity_inst_t const&)
    140,868  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<entity_inst_t, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > >, std::_Select1st<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > const&) [/usr/local/bin/ceph-osd]
    140,213  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned long, std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > >, std::_Select1st<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > const&) [/usr/local/bin/ceph-osd]
    138,739  /usr/include/c++/4.7/bits/stl_pair.h:std::_Rb_tree<std::pair<double, Message*>, std::pair<double, Message*>, std::_Identity<std::pair<double, Message*> >, std::less<std::pair<double, Message*> >, std::allocator<std::pair<double, Message*> > >::_M_insert_unique(std::pair<double, Message*> const&)
    137,720  ./osd/osd_types.h:std::_Rb_tree<pg_t, std::pair<pg_t const, eversion_t>, std::_Select1st<std::pair<pg_t const, eversion_t> >, std::less<pg_t>, std::allocator<std::pair<pg_t const, eversion_t> > >::_M_lower_bound(std::_Rb_tree_node<std::pair<pg_t const, eversion_t> >*, std::_Rb_tree_node<std::pair<pg_t const, eversion_t> >*, pg_t const&) [clone .isra.1817]
    136,032  ./log/SubsystemMap.h:Pipe::write_message(ceph_msg_header&, ceph_msg_footer&, ceph::buffer::list&)
    135,636  /usr/include/c++/4.7/bits/stl_iterator_base_funcs.h:Pipe::write_message(ceph_msg_header&, ceph_msg_footer&, ceph::buffer::list&)
    135,450  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::erase(unsigned int const&) [/usr/local/bin/ceph-osd]
    135,450  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> const&) [/usr/local/bin/ceph-osd]
    135,250  /usr/include/c++/4.7/bits/basic_ios.h:Pipe::tcp_read(char*, int)
    135,247  /usr/include/atomic_ops/sysdeps/gcc/x86.h:ceph::buffer::inc_total_alloc(unsigned int)
    135,247  /usr/include/atomic_ops/sysdeps/gcc/x86.h:ceph::buffer::dec_total_alloc(unsigned int)
    135,240  /usr/include/c++/4.7/bits/stl_list.h:Message::~Message()
    135,240  ./include/buffer.h:Pipe::read_message(Message**)
    135,240  ./include/buffer.h:Message::~Message()
    135,240  /usr/include/x86_64-linux-gnu/bits/string3.h:Pipe::write_message(ceph_msg_header&, ceph_msg_footer&, ceph::buffer::list&)
    135,210  /usr/include/c++/4.7/bits/stl_vector.h:Pipe::read_message(Message**)
    135,124  auth/KeyRing.cc:ceph::log::SubsystemMap::should_gather(unsigned int, int) [/usr/local/bin/ceph-osd]
    135,120  ./include/atomic.h:Throttle::put(long)
    135,078  /build/buildd/libcrypto++-5.6.1/cryptlib.h:CryptoPP::StringSinkTemplate<std::string>::~StringSinkTemplate() [/usr/lib/libcrypto++.so.9.0.0]
    135,066  /usr/include/x86_64-linux-gnu/bits/string3.h:CryptoPP::CBC_Encryption::ProcessData(unsigned char*, unsigned char const*, unsigned long)
    135,066  /usr/include/cryptopp/seckey.h:CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>::IsForwardTransformation() const [/usr/local/bin/ceph-osd]
    135,066  /usr/include/c++/4.7/bits/char_traits.h:char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag)
    135,066  /build/buildd/libcrypto++-5.6.1/cryptlib.h:CryptoPP::CipherModeFinalTemplate_ExternalCipher<CryptoPP::CBC_Encryption>::CipherModeFinalTemplate_ExternalCipher(CryptoPP::BlockCipher&, unsigned char const*, int)
    135,066  /usr/include/c++/4.7/bits/basic_string.h:char* std::string::_S_construct<char*>(char*, char*, std::allocator<char> const&, std::forward_iterator_tag)
    135,066  /usr/include/x86_64-linux-gnu/bits/string3.h:CryptoPP::StreamTransformationFilter::LastPut(unsigned char const*, unsigned long)
    135,066  /usr/include/cryptopp/seckey.h:non-virtual thunk to CryptoPP::BlockCipherFinal<(CryptoPP::CipherDir)0, CryptoPP::Rijndael::Enc>::IsForwardTransformation() const [/usr/local/bin/ceph-osd]
    135,060  /usr/include/c++/4.7/bits/stl_uninitialized.h:std::vector<OSDOp, std::allocator<OSDOp> >::_M_fill_insert(__gnu_cxx::__normal_iterator<OSDOp*, std::vector<OSDOp, std::allocator<OSDOp> > >, unsigned long, OSDOp const&)
    135,060  /usr/include/c++/4.7/bits/stl_list.h:ReplicatedPG::OpContext::OpContext(std::tr1::shared_ptr<OpRequest>, osd_reqid_t, std::vector<OSDOp, std::allocator<OSDOp> >&, ObjectState*, SnapSetContext*, ReplicatedPG*)
    135,060  osd/ReplicatedPG.cc:ReplicatedPG::is_missing_object(hobject_t const&) [/usr/local/bin/ceph-osd]
    135,060  /usr/include/c++/4.7/ext/new_allocator.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > const&)
    135,060  osd/PG.h:PG::have_same_or_newer_map(unsigned int) [/usr/local/bin/ceph-osd]
    135,060  ./osd/osd_types.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    135,060  /usr/include/c++/4.7/bits/stl_map.h:ReplicatedPG::is_missing_object(hobject_t const&)
    135,060  /usr/include/c++/4.7/ext/atomicity.h:LFNIndex::~LFNIndex()
    135,060  ./include/buffer.h:ReplicatedPG::do_osd_ops(ReplicatedPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)
    135,060  /usr/include/c++/4.7/bits/basic_string.h:bool boost::tuples::detail::lt<boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > >, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > >(boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > const&)
    135,060  /usr/include/c++/4.7/bits/basic_string.h:OpRequest::~OpRequest()'2
    135,060  /usr/include/c++/4.7/bits/stl_map.h:IndexManager::put_index(coll_t)
    135,060  /usr/include/x86_64-linux-gnu/bits/fcntl2.h:FileStore::lfn_open(coll_t, hobject_t const&, int, unsigned int, std::tr1::shared_ptr<CollectionIndex::Path>*, std::tr1::shared_ptr<CollectionIndex>*)
    135,060  /usr/include/c++/4.7/bits/stl_vector.h:LFNIndex::lookup(hobject_t const&, std::tr1::shared_ptr<CollectionIndex::Path>*, int*)
    135,060  /usr/include/c++/4.7/bits/basic_string.h:bool boost::tuples::detail::lt<boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > >, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > >(boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > const&, boost::tuples::cons<std::string const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > > > > const&)
    135,060  /usr/include/c++/4.7/bits/stl_list.h:OSDOp* std::__uninitialized_copy<false>::__uninit_copy<__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*>(__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, __gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*)
    135,060  /usr/include/c++/4.7/bits/stl_tree.h:ObjectContext::ObjectContext(object_info_t const&, bool, SnapSetContext*)
    135,060  /usr/include/c++/4.7/ext/new_allocator.h:std::vector<OSDOp, std::allocator<OSDOp> >::_M_fill_insert(__gnu_cxx::__normal_iterator<OSDOp*, std::vector<OSDOp, std::allocator<OSDOp> > >, unsigned long, OSDOp const&)
    135,060  /usr/include/c++/4.7/tr1/shared_ptr.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > const&)
    135,060  /usr/include/c++/4.7/ext/atomicity.h:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_weak_release() [/usr/local/bin/ceph-osd]
    135,060  osd/osd_types.h:object_info_t::~object_info_t() [/usr/local/bin/ceph-osd]
    135,060  /usr/include/c++/4.7/bits/stl_vector.h:MOSDOpReply::MOSDOpReply(MOSDOp*, int, unsigned int, int)
    135,060  osd/osd_types.h:void std::__uninitialized_fill_n<false>::__uninit_fill_n<OSDOp*, unsigned long, OSDOp>(OSDOp*, unsigned long, OSDOp const&)
    135,060  /usr/include/c++/4.7/ext/atomicity.h:CollectionIndex::Path::~Path()
    135,060  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:OSD::OpWQ::_dequeue()
    135,060  /usr/include/c++/4.7/bits/basic_string.h:bool boost::tuples::detail::lt<boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> >, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > >(boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > const&)
    135,060  /usr/include/c++/4.7/bits/stl_vector.h:ReplicatedPG::OpContext::OpContext(std::tr1::shared_ptr<OpRequest>, osd_reqid_t, std::vector<OSDOp, std::allocator<OSDOp> >&, ObjectState*, SnapSetContext*, ReplicatedPG*)
    135,060  ./os/hobject.h:HashIndex::get_path_components(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*)
    135,060  /usr/include/c++/4.7/tr1/shared_ptr.h:CollectionIndex::Path::~Path()
    135,060  /usr/include/c++/4.7/tr1/shared_ptr.h:LFNIndex::~LFNIndex()
    135,040  /usr/include/c++/4.7/bits/stl_iterator.h:HashIndex::_lookup(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*, std::string*, int*)
    134,584  osd/OSD.cc:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::count(pg_t const&) const [/usr/local/bin/ceph-osd]
    134,346  ./log/SubsystemMap.h:PG::lock(bool)
    132,708  /usr/include/c++/4.7/bits/basic_string.h:MOSDOpReply::~MOSDOpReply()'2
    132,552  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::_M_erase(std::_Rb_tree_node<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >*) [/usr/local/bin/ceph-osd]
    130,032  /usr/include/c++/4.7/bits/stl_list.h:void std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >::_M_initialize_dispatch<std::_List_const_iterator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >(std::_List_const_iterator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > >, std::_List_const_iterator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > >, std::__false_type) [clone .isra.2621] [/usr/local/bin/ceph-osd]
    128,414  ./msg/msg_types.h:std::map<entity_inst_t, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::operator[](entity_inst_t const&)
    128,168  ???:tcmalloc::DLL_Remove(tcmalloc::Span*) [/usr/lib/libtcmalloc.so.4.1.0]
    126,644  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned long, std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > >, std::_Select1st<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > > >::_M_insert_unique(std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > const&) [/usr/local/bin/ceph-osd]
    126,392  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> > >::_M_insert_unique(std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> const&) [/usr/local/bin/ceph-osd]
    126,156  ???:tcmalloc::ThreadCache::ListTooLong(tcmalloc::ThreadCache::FreeList*, unsigned long) [/usr/lib/libtcmalloc.so.4.1.0]
    125,986  /usr/include/c++/4.7/bits/stl_vector.h:Pipe::writer()
    125,756  /usr/include/c++/4.7/bits/stl_pair.h:std::_Rb_tree<std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<double, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_unique(std::pair<double, std::tr1::shared_ptr<OpRequest> > const&)
    124,614  /usr/include/c++/4.7/bits/stl_pair.h:std::_Rb_tree<entity_inst_t, std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > >, std::_Select1st<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::_M_create_node(std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > const&) [clone .isra.2728]
    123,805  /usr/include/c++/4.7/bits/basic_string.h:OpRequest::mark_reached_pg()
    123,805  ???:operator!=(object_locator_t const&, object_locator_t const&) [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/tr1/shared_ptr.h:ReplicatedPG::log_op_stats(ReplicatedPG::OpContext*)
    123,805  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<snapid_t, std::pair<snapid_t const, interval_set<unsigned long> >, std::_Select1st<std::pair<snapid_t const, interval_set<unsigned long> > >, std::less<snapid_t>, std::allocator<std::pair<snapid_t const, interval_set<unsigned long> > > >::_M_erase(std::_Rb_tree_node<std::pair<snapid_t const, interval_set<unsigned long> > >*) [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/stl_iterator_base_funcs.h:OpRequest::get_duration() const
    123,805  osd/osd_types.h:OSDOp::OSDOp() [/usr/local/bin/ceph-osd]
    123,805  ./messages/MOSDOp.h:MOSDOp::MOSDOp() [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<ReplicatedPG::OpContext::NotifyAck, std::allocator<ReplicatedPG::OpContext::NotifyAck> >::_M_clear() [/usr/local/bin/ceph-osd]
    123,805  ./include/buffer.h:ceph::buffer::list::push_back(ceph::buffer::ptr const&) [clone .part.382] [/usr/local/bin/ceph-osd]
    123,805  /usr/include/boost/tuple/tuple_comparison.hpp:operator<(hobject_t const&, hobject_t const&)
    123,805  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<ObjectContext*, ObjectContext*, std::_Identity<ObjectContext*>, std::less<ObjectContext*>, std::allocator<ObjectContext*> >::_M_erase(std::_Rb_tree_node<ObjectContext*>*) [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::clear() [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/list.tcc:std::_List_base<notify_info_t, std::allocator<notify_info_t> >::_M_clear() [/usr/local/bin/ceph-osd]
    123,805  osd/osd_types.h:ObjectContext::~ObjectContext() [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/basic_string.h:__gnu_cxx::__enable_if<std::__is_char<char>::__value, bool>::__type std::operator==<char>(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<pg_log_entry_t, std::allocator<pg_log_entry_t> >::~vector() [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned long, std::pair<unsigned long const, unsigned long>, std::_Select1st<std::pair<unsigned long const, unsigned long> >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, unsigned long> > >::_M_erase(std::_Rb_tree_node<std::pair<unsigned long const, unsigned long> >*) [/usr/local/bin/ceph-osd]
    123,805  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<snapid_t, std::pair<snapid_t const, unsigned long>, std::_Select1st<std::pair<snapid_t const, unsigned long> >, std::less<snapid_t>, std::allocator<std::pair<snapid_t const, unsigned long> > >::_M_erase(std::_Rb_tree_node<std::pair<snapid_t const, unsigned long> >*) [/usr/local/bin/ceph-osd]
    122,391  ./osd/osd_types.h:void decode<pg_t, eversion_t>(std::map<pg_t, eversion_t, std::less<pg_t>, std::allocator<std::pair<pg_t const, eversion_t> > >&, ceph::buffer::list::iterator&)
    120,932  osd/OSD.cc:std::map<PG*, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::operator[](PG* const&)
    117,549  /usr/include/c++/4.7/bits/stl_iterator.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::dequeue()
    116,740  ./msg/msg_types.h:operator<(entity_inst_t const&, entity_inst_t const&) [/usr/local/bin/ceph-osd]
    113,936  ./osd/osd_types.h:std::map<pg_t, pg_stat_t, std::less<pg_t>, std::allocator<std::pair<pg_t const, pg_stat_t> > >::operator[](pg_t const&)
    113,180  ./common/Mutex.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_dequeue()
    113,106  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> >, std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> const&) [/usr/local/bin/ceph-osd]
    113,075  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned long, std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > >, std::_Select1st<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > >, std::less<unsigned long>, std::allocator<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > > >::_M_insert_unique_(std::_Rb_tree_const_iterator<std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > >, std::pair<unsigned long const, std::list<std::pair<unsigned int, DispatchQueue::QueueItem>, std::allocator<std::pair<unsigned int, DispatchQueue::QueueItem> > > > const&) [/usr/local/bin/ceph-osd]
    112,706  /usr/include/c++/4.7/bits/stl_vector.h:Pipe::reader()
    112,700  ./include/byteorder.h:Message::encode(unsigned long, bool)
    112,700  ./include/byteorder.h:Message::Message(int, int, int)
    112,580  /usr/include/c++/4.7/bits/stl_vector.h:Throttle::put(long)
    112,570  auth/Crypto.cc:CryptoPP::BlockOrientedCipherModeBase::MandatoryBlockSize() const [/usr/local/bin/ceph-osd]
    112,565  /build/buildd/libcrypto++-5.6.1/smartptr.h:CryptoPP::StreamTransformationFilter::StreamTransformationFilter(CryptoPP::StreamTransformation&, CryptoPP::BufferedTransformation*, CryptoPP::BlockPaddingSchemeDef::BlockPaddingScheme, bool)
    112,565  /usr/include/c++/4.7/ext/atomicity.h:__gnu_cxx::__exchange_and_add_dispatch(int*, int) [clone .constprop.96] [/usr/local/bin/ceph-osd]
    112,565  /build/buildd/libcrypto++-5.6.1/algparam.cpp:CryptoPP::AlgorithmParameters::AlgorithmParameters() [/usr/lib/libcrypto++.so.9.0.0]
    112,555  /usr/include/cryptopp/smartptr.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    112,555  /usr/include/cryptopp/rijndael.h:CryptoAES::encrypt(ceph::buffer::ptr const&, ceph::buffer::list const&, ceph::buffer::list&, std::string&) const
    112,550  osd/OSD.cc:OSD::_have_pg(pg_t) [/usr/local/bin/ceph-osd]
    112,550  /usr/include/c++/4.7/bits/stl_vector.h:std::vector<OSDOp, std::allocator<OSDOp> >::~vector() [/usr/local/bin/ceph-osd]
    112,550  /usr/include/x86_64-linux-gnu/sys/stat.h:LFNIndex::lookup(hobject_t const&, std::tr1::shared_ptr<CollectionIndex::Path>*, int*)
    112,550  /usr/include/c++/4.7/bits/stl_list.h:CephxSessionHandler::check_message_signature(Message*)
    112,550  /usr/include/c++/4.7/bits/stl_list.h:void encode_encrypt_enc_bl<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&)
    112,550  ./osd/osd_types.h:MOSDOp::get_reqid() const
    112,550  /usr/include/c++/4.7/bits/basic_string.h:OpRequest::mark_started()
    112,550  /usr/include/c++/4.7/bits/stl_vector.h:PG::publish_stats_to_osd()
    112,550  /usr/include/c++/4.7/bits/stl_list.h:int encode_encrypt<ceph::buffer::list>(CephContext*, ceph::buffer::list const&, CryptoKey const&, ceph::buffer::list&, std::string&)
    112,550  /usr/include/c++/4.7/bits/stl_list.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_void_process(void*, ThreadPool::TPHandle&)
    112,550  ./osd/osd_types.h:eversion_t::decode(ceph::buffer::list::iterator&)
    112,550  /usr/include/c++/4.7/tr1/shared_ptr.h:LFNIndex::set_ref(std::tr1::shared_ptr<CollectionIndex>) [/usr/local/bin/ceph-osd]
    112,550  ./common/Mutex.h:OSD::op_is_discardable(MOSDOp*)
    112,550  common/snap_types.cc:SnapContext::is_valid() const
    112,550  os/LFNIndex.h:LFNIndex::coll() const [/usr/local/bin/ceph-osd]
    112,550  ./osd/osd_types.h:FileStore::lfn_open(coll_t, hobject_t const&, int, unsigned int, std::tr1::shared_ptr<CollectionIndex::Path>*, std::tr1::shared_ptr<CollectionIndex>*)
    112,550  /usr/include/c++/4.7/bits/stl_tree.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
    112,550  /usr/include/c++/4.7/bits/stl_list.h:CephxSessionHandler::sign_message(Message*)
    112,550  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:std::list<boost::intrusive_ptr<PG>, std::allocator<boost::intrusive_ptr<PG> > >::_M_create_node(boost::intrusive_ptr<PG> const&) [clone .isra.2231]
    112,550  ./osd/osd_types.h:FileStore::get_index(coll_t, std::tr1::shared_ptr<CollectionIndex>*)
    112,550  /usr/include/boost/tuple/tuple_comparison.hpp:bool boost::tuples::detail::eq<boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > >, boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > > >(boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > > const&, boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > > const&) [/usr/local/bin/ceph-osd]
    112,550  /usr/include/c++/4.7/bits/basic_string.h:FileStore::getattr(coll_t, hobject_t const&, char const*, ceph::buffer::ptr&)
    112,550  ./include/utime.h:PerfCounters::tinc(int, utime_t)
    112,550  osd/PG.h:PG::unlock() [/usr/local/bin/ceph-osd]
    112,550  os/FileStore.cc:FileStore::get_cdir(coll_t, char*, int) [/usr/local/bin/ceph-osd]
    112,550  ./include/byteorder.h:ReplicatedPG::do_osd_ops(ReplicatedPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)
    112,550  /usr/include/c++/4.7/ext/new_allocator.h:std::list<boost::intrusive_ptr<PG>, std::allocator<boost::intrusive_ptr<PG> > >::_M_create_node(boost::intrusive_ptr<PG> const&) [clone .isra.2231]
    112,550  osd/OSDMap.cc:std::_Rb_tree<long, std::pair<long const, pg_pool_t>, std::_Select1st<std::pair<long const, pg_pool_t> >, std::less<long>, std::allocator<std::pair<long const, pg_pool_t> > >::find(long const&) const
    112,511  ???:tcmalloc::DLL_Prepend(tcmalloc::Span*, tcmalloc::Span*) [/usr/lib/libtcmalloc.so.4.1.0]
    112,112  /usr/include/c++/4.7/bits/basic_string.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_erase(std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*)
    111,399  ./osd/osd_types.h:OSD::send_pg_stats(utime_t const&)
    108,519  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<int, std::pair<int const, std::list<Message*, std::allocator<Message*> > >, std::_Select1st<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::_M_erase(std::_Rb_tree_node<std::pair<int const, std::list<Message*, std::allocator<Message*> > > >*) [/usr/local/bin/ceph-osd]
    108,360  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::_M_erase(std::_Rb_tree_node<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >*) [/usr/local/bin/ceph-osd]
    103,884  common/buffer.cc:ceph::buffer::ptr::copy_in(unsigned int, unsigned int, char const*) [/usr/local/bin/ceph-osd]
    102,942  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >, std::_Rb_tree_const_iterator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> >) [/usr/local/bin/ceph-osd]
    101,562  msg/Message.cc:ceph::buffer::list::crc32c(unsigned int)
    101,463  ./include/atomic.h:ceph::buffer::create_page_aligned(unsigned int)
    101,430  msg/DispatchQueue.h:DispatchQueue::enqueue(Message*, int, unsigned long)
    101,430  /usr/include/c++/4.7/ext/new_allocator.h:std::_Rb_tree<Message*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > >, std::_Select1st<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > >, std::less<Message*>, std::allocator<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > const&)
    101,385  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::claim(ceph::buffer::list&)
    101,340  /usr/include/atomic_ops/sysdeps/gcc/../aligned_atomic_load_store.h:Throttle::get(long, long)
    101,340  /usr/include/atomic_ops/sysdeps/gcc/x86.h:Throttle::get(long, long)
    101,304  ./include/buffer.h:ceph::buffer::list::~list() [/usr/local/bin/ceph-osd]
    101,304  common/buffer.cc:ceph::buffer::ptr::ptr(unsigned int) [/usr/local/bin/ceph-osd]
    101,295  ./include/buffer.h:object_info_t::object_info_t(ceph::buffer::list&)
    101,295  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::_process(boost::intrusive_ptr<PG>, ThreadPool::TPHandle&)
    101,295  /build/buildd/eglibc-2.17/nptl/../csu/errno-loc.c:__errno_location [/lib/x86_64-linux-gnu/libpthread-2.17.so]
    101,295  /usr/include/c++/4.7/bits/stl_list.h:std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_create_node(ceph::buffer::ptr const&) [clone .isra.380] [/usr/local/bin/ceph-osd]
    101,295  /usr/include/c++/4.7/bits/stl_vector.h:object_info_t::~object_info_t()
    101,295  /usr/include/c++/4.7/bits/stl_map.h:OSDMap::raw_pg_to_pg(pg_t) const
    101,295  ./include/object.h:void std::__uninitialized_fill_n<false>::__uninit_fill_n<OSDOp*, unsigned long, OSDOp>(OSDOp*, unsigned long, OSDOp const&)
    101,295  osd/OpRequest.h:PG::op_has_sufficient_caps(std::tr1::shared_ptr<OpRequest>)
    101,295  ./log/SubsystemMap.h:OSD::_share_map_incoming(entity_name_t, Connection*, unsigned int, OSD::Session*)
    101,295  ./include/buffer.h:object_info_t::decode(ceph::buffer::list&)
    101,295  /usr/include/c++/4.7/bits/stl_list.h:ReplicatedPG::do_osd_op_effects(ReplicatedPG::OpContext*)
    101,295  ./include/buffer.h:OSDOp::split_osd_op_vector_in_data(std::vector<OSDOp, std::allocator<OSDOp> >&, ceph::buffer::list&)
    101,295  ./include/utime.h:OpRequest::get_duration() const
    101,295  /usr/include/c++/4.7/ext/new_allocator.h:std::vector<OSDOp, std::allocator<OSDOp> >::operator=(std::vector<OSDOp, std::allocator<OSDOp> > const&)
    101,295  /usr/include/c++/4.7/bits/stl_list.h:std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >::_M_create_node(std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > const&) [clone .isra.2619] [/usr/local/bin/ceph-osd]
    101,295  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<hobject_t, std::pair<hobject_t const, ObjectContext*>, std::_Select1st<std::pair<hobject_t const, ObjectContext*> >, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::_M_create_node(std::pair<hobject_t const, ObjectContext*> const&) [clone .isra.1930] [/usr/local/bin/ceph-osd]
    101,295  /usr/include/c++/4.7/bits/stl_list.h:ceph::buffer::list::clear()
    101,186  ./include/utime.h:bool std::operator< <utime_t, std::tr1::shared_ptr<OpRequest> >(std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&)
    101,099  /usr/include/c++/4.7/bits/stl_map.h:OSD::OpWQ::_process(boost::intrusive_ptr<PG>)
    100,938  /usr/include/c++/4.7/bits/stl_vector.h:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::count(pg_t const&) const
     98,700  /build/buildd/eglibc-2.17/nptl/../nptl/pthread_mutex_lock.c:_L_cond_lock_987 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
     98,162  os/IndexManager.cc:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::equal_range(coll_t const&)
     95,771  /build/buildd/eglibc-2.17/nptl/pthread_mutex_lock.c:_L_lock_982 [/lib/x86_64-linux-gnu/libpthread-2.17.so]
     94,620  /usr/include/c++/4.7/bits/stl_tree.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue::pop_front()
     94,159  osd/OpRequest.cc:std::_Rb_tree<std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<double, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_unique(std::pair<double, std::tr1::shared_ptr<OpRequest> > const&)
     92,639  ???:0x0000000000074c10 [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
     92,106  ./common/PrioritizedQueue.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue::SubQueue(PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue const&) [/usr/local/bin/ceph-osd]
     91,336  /usr/include/c++/4.7/ext/new_allocator.h:void std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_initialize_dispatch<std::_List_const_iterator<ceph::buffer::ptr> >(std::_List_const_iterator<ceph::buffer::ptr>, std::_List_const_iterator<ceph::buffer::ptr>, std::__false_type) [clone .isra.216]
     90,684  /build/buildd/eglibc-2.17/libio/iopadn.c:_IO_padn [/lib/x86_64-linux-gnu/libc-2.17.so]
     90,544  /usr/include/c++/4.7/bits/stl_set.h:OSD::PeeringWQ::_dequeue(std::list<PG*, std::allocator<PG*> >*)
     90,544  /usr/include/c++/4.7/bits/stl_list.h:ThreadPool::BatchWorkQueue<PG>::_void_dequeue()
     90,416  /usr/include/c++/4.7/bits/stl_vector.h:OSD::do_waiters()
     90,290  /usr/include/c++/4.7/bits/stl_list.h:DispatchQueue::enqueue(Message*, int, unsigned long)
     90,280  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<unsigned int, std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue>, std::_Select1st<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> > >::_M_erase(std::_Rb_tree_node<std::pair<unsigned int const, PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::SubQueue> >*) [/usr/local/bin/ceph-osd]
     90,176  /usr/include/boost/smart_ptr/intrusive_ptr.hpp:DispatchQueue::entry()
     90,176  /usr/include/c++/4.7/ext/new_allocator.h:ceph::buffer::list::push_back(ceph::buffer::ptr const&) [clone .part.215]
     90,160  /usr/include/c++/4.7/bits/stl_set.h:DispatchQueue::enqueue(Message*, int, unsigned long)
     90,160  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<Message*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > >, std::_Select1st<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > >, std::less<Message*>, std::allocator<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > >) [/usr/local/bin/ceph-osd]
     90,160  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<std::pair<double, Message*>, std::pair<double, Message*>, std::_Identity<std::pair<double, Message*> >, std::less<std::pair<double, Message*> >, std::allocator<std::pair<double, Message*> > >::_M_erase_aux(std::_Rb_tree_const_iterator<std::pair<double, Message*> >) [/usr/local/bin/ceph-osd]
     90,160  ./log/SubsystemMap.h:SimpleMessenger::_send_message(Message*, Connection*, bool)
     90,160  ./msg/msg_types.h:MonClient::ms_dispatch(Message*)
     90,160  /usr/include/c++/4.7/ext/new_allocator.h:std::_Rb_tree<std::pair<double, Message*>, std::pair<double, Message*>, std::_Identity<std::pair<double, Message*> >, std::less<std::pair<double, Message*> >, std::allocator<std::pair<double, Message*> > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<double, Message*> const&)
     90,160  ./log/SubsystemMap.h:Pipe::handle_ack(unsigned long)
     90,160  ./common/Cond.h:Pipe::_send(Message*)
     90,160  ./common/Cond.h:DispatchQueue::enqueue(Message*, int, unsigned long)
     90,157  ./common/Cond.h:Pipe::reader()
     90,055  /build/buildd/libcrypto++-5.6.1/smartptr.h:CryptoPP::Filter::AttachedTransformation()
     90,052  /build/buildd/libcrypto++-5.6.1/fips140.cpp:CryptoPP::FIPS_140_2_ComplianceEnabled() [/usr/lib/libcrypto++.so.9.0.0]
     90,052  ???:__cxa_get_globals [/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.17]
     90,052  /build/buildd/libcrypto++-5.6.1/secblock.h:CryptoPP::FilterWithBufferedInput::BlockQueue::GetContigousBlocks(unsigned long&) [/usr/lib/libcrypto++.so.9.0.0]
     90,052  /usr/include/cryptopp/modes.h:non-virtual thunk to CryptoPP::BlockOrientedCipherModeBase::IsForwardTransformation() const [/usr/local/bin/ceph-osd]
     90,052  /usr/include/cryptopp/cryptlib.h:CryptoPP::BufferedTransformation::PutModifiable2(unsigned char*, unsigned long, int, bool) [/usr/local/bin/ceph-osd]
     90,052  /usr/include/cryptopp/modes.h:CryptoPP::CBC_ModeBase::MinLastBlockSize() const [/usr/local/bin/ceph-osd]
     90,052  /usr/include/cryptopp/modes.h:non-virtual thunk to CryptoPP::CBC_ModeBase::MinLastBlockSize() const [/usr/local/bin/ceph-osd]
     90,046  /build/buildd/libcrypto++-5.6.1/algparam.h:CryptoPP::FilterWithBufferedInput::BlockQueue::Put(unsigned char const*, unsigned long)
     90,044  /build/buildd/libcrypto++-5.6.1/modes.cpp:non-virtual thunk to CryptoPP::CBC_Encryption::ProcessData(unsigned char*, unsigned char const*, unsigned long) [/usr/lib/libcrypto++.so.9.0.0]
     90,044  /build/buildd/libcrypto++-5.6.1/rijndael.cpp:non-virtual thunk to CryptoPP::Rijndael::Enc::AdvancedProcessBlocks(unsigned char const*, unsigned char const*, unsigned char*, unsigned long, unsigned int) const [/usr/lib/libcrypto++.so.9.0.0]
     90,044  /build/buildd/libcrypto++-5.6.1/secblock.h:CryptoPP::CipherModeFinalTemplate_ExternalCipher<CryptoPP::CBC_Encryption>::CipherModeFinalTemplate_ExternalCipher(CryptoPP::BlockCipher&, unsigned char const*, int)
     90,040  /usr/include/c++/4.7/bits/stl_list.h:void std::_Destroy_aux<false>::__destroy<OSDOp*>(OSDOp*, OSDOp*)
     90,040  /usr/include/c++/4.7/bits/stl_list.h:ReplicatedPG::OpContext::~OpContext()
     90,040  /usr/include/c++/4.7/bits/basic_string.h:OpTracker::RemoveOnDelete::operator()(OpRequest*)
     90,040  /usr/include/c++/4.7/ext/atomicity.h:LFNIndex::lookup(hobject_t const&, std::tr1::shared_ptr<CollectionIndex::Path>*, int*)
     90,040  /usr/include/c++/4.7/ext/new_allocator.h:ceph::buffer::list::push_back(ceph::buffer::ptr const&) [clone .part.1501]
     90,040  /usr/include/c++/4.7/ext/new_allocator.h:void std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_initialize_dispatch<std::_List_const_iterator<ceph::buffer::ptr> >(std::_List_const_iterator<ceph::buffer::ptr>, std::_List_const_iterator<ceph::buffer::ptr>, std::__false_type) [clone .isra.1502]
     90,040  osd/osd_types.h:ReplicatedPG::do_op(std::tr1::shared_ptr<OpRequest>)
     90,040  /usr/include/c++/4.7/bits/stl_tree.h:std::map<hobject_t, ObjectContext*, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::erase(hobject_t const&)
     90,040  /usr/include/c++/4.7/ext/atomicity.h:OSD::enqueue_op(PG*, std::tr1::shared_ptr<OpRequest>)
     90,040  /usr/include/c++/4.7/bits/basic_string.h:FileStore::read(coll_t, hobject_t const&, unsigned long, unsigned long, ceph::buffer::list&, bool)
     90,040  /usr/include/c++/4.7/bits/stl_map.h:std::map<hobject_t, ObjectContext*, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::count(hobject_t const&) const [/usr/local/bin/ceph-osd]
     90,040  /usr/include/c++/4.7/ext/atomicity.h:std::tr1::__weak_count<(__gnu_cxx::_Lock_policy)2>::operator=(std::tr1::__shared_count<(__gnu_cxx::_Lock_policy)2> const&)
     90,040  /usr/include/c++/4.7/tr1/shared_ptr.h:PG::publish_stats_to_osd()
     90,040  ./common/PrioritizedQueue.h:OSD::OpWQ::_dequeue()
     90,040  /usr/include/c++/4.7/tr1/shared_ptr.h:std::tr1::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_weak_release()
     90,040  /usr/include/c++/4.7/bits/basic_string.h:void std::_Destroy_aux<false>::__destroy<OSDOp*>(OSDOp*, OSDOp*)
     90,040  /usr/include/c++/4.7/bits/stl_vector.h:MOSDOpReply::encode_payload(unsigned long)
     90,040  /usr/include/c++/4.7/bits/basic_string.h:ObjectStore::getattr(coll_t, hobject_t const&, char const*, ceph::buffer::list&)
     90,040  osd/osd_types.h:eversion_t::decode(ceph::buffer::list::iterator&) [/usr/local/bin/ceph-osd]
     90,040  ./include/buffer.h:void std::_Destroy_aux<false>::__destroy<OSDOp*>(OSDOp*, OSDOp*)
     90,040  ./log/SubsystemMap.h:ReplicatedPG::put_object_context(ObjectContext*)
     90,040  /usr/include/c++/4.7/tr1/shared_ptr.h:OSD::handle_op(std::tr1::shared_ptr<OpRequest>)
     90,040  /usr/include/c++/4.7/ext/new_allocator.h:std::list<ceph::buffer::ptr, std::allocator<ceph::buffer::ptr> >::_M_create_node(ceph::buffer::ptr const&) [clone .isra.380]
     90,040  ./include/buffer.h:MOSDOp::decode_payload()
     90,040  /usr/include/c++/4.7/bits/stl_map.h:std::map<hobject_t, ObjectContext*, std::less<hobject_t>, std::allocator<std::pair<hobject_t const, ObjectContext*> > >::erase(hobject_t const&) [/usr/local/bin/ceph-osd]
     90,040  ./include/buffer.h:osd_reqid_t::decode(ceph::buffer::list::iterator&)
     90,040  ./include/object.h:bool boost::tuples::detail::lt<boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> >, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > >(boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > const&, boost::tuples::cons<object_t const&, boost::tuples::cons<snapid_t const&, boost::tuples::null_type> > const&)
     90,040  /usr/include/c++/4.7/tr1/shared_ptr.h:OpTracker::unregister_inflight_op(OpRequest*)
     90,040  /usr/include/c++/4.7/bits/char_traits.h:std::basic_string<char, std::char_traits<char>, std::allocator<char> > std::operator+<char, std::char_traits<char>, std::allocator<char> >(std::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, char const*)
     90,040  ./common/PrioritizedQueue.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue::front() const [/usr/local/bin/ceph-osd]
     90,040  ./common/Cond.h:IndexManager::get_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
     90,040  ./osd/osd_types.h:MOSDOp::MOSDOp()
     90,040  /usr/include/c++/4.7/bits/basic_string.h:ReplicatedPG::do_osd_ops(ReplicatedPG::OpContext*, std::vector<OSDOp, std::allocator<OSDOp> >&)
     90,040  /usr/include/x86_64-linux-gnu/sys/stat.h:LFNIndex::lfn_get_name(std::vector<std::string, std::allocator<std::string> > const&, hobject_t const&, std::string*, std::string*, int*)
     90,040  /usr/include/c++/4.7/ext/new_allocator.h:std::_List_base<std::pair<utime_t, std::string>, std::allocator<std::pair<utime_t, std::string> > >::_M_clear()
     90,040  os/HashIndex.h:IndexManager::build_index(coll_t, char const*, std::tr1::shared_ptr<CollectionIndex>*)
     90,040  /usr/include/c++/4.7/bits/stl_construct.h:HashIndex::_lookup(hobject_t const&, std::vector<std::string, std::allocator<std::string> >*, std::string*, int*)
     90,040  ./common/Cond.h:ThreadPool::WorkQueueVal<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, boost::intrusive_ptr<PG> >::queue(std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >)
     89,768  ./osd/osd_types.h:pg_t::encode(ceph::buffer::list&) const
     89,564  osd/OSD.cc:__gnu_cxx::hashtable<std::pair<pg_t const, PG*>, pg_t, __gnu_cxx::hash<pg_t>, std::_Select1st<std::pair<pg_t const, PG*> >, std::equal_to<pg_t>, std::allocator<PG*> >::find_or_insert(std::pair<pg_t const, PG*> const&)
     87,819  /usr/include/c++/4.7/bits/stl_tree.h:std::map<int, std::list<Message*, std::allocator<Message*> >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::operator[](int const&)
     87,690  /usr/include/c++/4.7/bits/basic_ios.h:Pipe::do_sendmsg(msghdr*, int, bool)
     87,606  os/IndexManager.cc:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::find(coll_t const&) const
     85,194  ./include/encoding.h:void encode<int>(std::vector<int, std::allocator<int> > const&, ceph::buffer::list&) [/usr/local/bin/ceph-osd]
     84,084  /usr/include/c++/4.7/ext/atomicity.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_erase(std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*)
     84,084  /usr/include/c++/4.7/tr1/shared_ptr.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_erase(std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*)
     81,004  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::_M_erase(std::_Rb_tree_node<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >*)'2 [/usr/local/bin/ceph-osd]
     81,004  /usr/include/c++/4.7/bits/stl_tree.h:std::_Rb_tree<PG*, std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > >, std::_Select1st<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::clear() [/usr/local/bin/ceph-osd]
     80,672  ./common/Cond.h:Pipe::writer()
     79,204  /usr/include/c++/4.7/bits/stl_tree.h:PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::length()
     79,030  /usr/include/c++/4.7/bits/stl_iterator_base_funcs.h:OSD::OpWQ::_process(boost::intrusive_ptr<PG>)
     79,020  ./common/PrioritizedQueue.h:DispatchQueue::enqueue(Message*, int, unsigned long)
     78,904  msg/DispatchQueue.h:PrioritizedQueue<DispatchQueue::QueueItem, unsigned long>::dequeue()
     78,890  /usr/include/c++/4.7/bits/stl_list.h:Pipe::read_message(Message**)
     78,890  ./include/utime.h:DispatchQueue::enqueue(Message*, int, unsigned long)
     78,890  msg/Message.h:SimpleMessenger::submit_message(Message*, Connection*, entity_addr_t const&, int, bool)
     78,876  ./log/SubsystemMap.h:OSD::_dispatch(Message*)
     78,785  /usr/include/c++/4.7/ext/hash_map:OSD::_lookup_pg(pg_t)
     78,785  ./common/Cond.h:ObjectContext::ObjectContext(object_info_t const&, bool, SnapSetContext*)
     78,785  /usr/include/c++/4.7/tr1/shared_ptr.h:std::_Rb_tree<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<utime_t, std::tr1::shared_ptr<OpRequest> > const&)
     78,785  /usr/include/c++/4.7/tr1/shared_ptr.h:std::_Rb_tree<std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::pair<double, std::tr1::shared_ptr<OpRequest> >, std::_Identity<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::less<std::pair<double, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<double, std::tr1::shared_ptr<OpRequest> > > >::_M_insert_(std::_Rb_tree_node_base const*, std::_Rb_tree_node_base const*, std::pair<double, std::tr1::shared_ptr<OpRequest> > const&)
     78,785  /usr/include/c++/4.7/tr1/shared_ptr.h:ReplicatedPG::OpContext::OpContext(std::tr1::shared_ptr<OpRequest>, osd_reqid_t, std::vector<OSDOp, std::allocator<OSDOp> >&, ObjectState*, SnapSetContext*, ReplicatedPG*)
     78,785  /usr/include/c++/4.7/bits/basic_string.h:bool boost::tuples::detail::eq<boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > >, boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > > >(boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > > const&, boost::tuples::cons<object_t const&, boost::tuples::cons<std::string const&, boost::tuples::cons<snapid_t const&, boost::tuples::cons<unsigned int const&, boost::tuples::cons<bool const&, boost::tuples::cons<long const&, boost::tuples::cons<std::string const&, boost::tuples::null_type> > > > > > > const&)
     78,785  ???:object_stat_collection_t::add(object_stat_collection_t const&) [/usr/local/bin/ceph-osd]
     78,785  ./msg/Message.h:OpTracker::unregister_inflight_op(OpRequest*)
     78,785  ./include/object.h:OSDOp* std::__uninitialized_copy<false>::__uninit_copy<__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*>(__gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, __gnu_cxx::__normal_iterator<OSDOp const*, std::vector<OSDOp, std::allocator<OSDOp> > >, OSDOp*)
     78,785  /usr/include/c++/4.7/tr1/shared_ptr.h:OSD::OpWQ::_dequeue()
     78,785  ./osd/OSDMap.h:OSDMap::have_pg_pool(long) const [/usr/local/bin/ceph-osd]
     78,785  osd/OpRequest.h:OpRequest::get_duration() const
     78,785  osd/osd_types.cc:pg_pool_t::raw_pg_to_pg(pg_t) const [/usr/local/bin/ceph-osd]
     78,785  ./log/SubsystemMap.h:OSD::require_same_or_newer_map(std::tr1::shared_ptr<OpRequest>, unsigned int)
     78,785  ./include/byteorder.h:MOSDOpReply::MOSDOpReply(MOSDOp*, int, unsigned int, int)
     78,442  /usr/include/c++/4.7/bits/stl_list.h:std::map<PG*, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > >, std::less<PG*>, std::allocator<std::pair<PG* const, std::list<std::tr1::shared_ptr<OpRequest>, std::allocator<std::tr1::shared_ptr<OpRequest> > > > > >::operator[](PG* const&)
     78,366  /usr/include/c++/4.7/bits/stl_tree.h:std::reverse_iterator<std::_Rb_tree_iterator<std::pair<unsigned int const, PrioritizedQueue<std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> >, entity_inst_t>::SubQueue> > >::operator*() const
     78,351  /usr/include/c++/4.7/bits/stl_list.h:std::map<int, std::list<Message*, std::allocator<Message*> >, std::less<int>, std::allocator<std::pair<int const, std::list<Message*, std::allocator<Message*> > > > >::operator[](int const&)
     77,952  /usr/include/c++/4.7/ext/hash_map:OSD::_lookup_lock_pg(pg_t)
     77,952  ./osd/PG.h:OSD::sched_scrub()
     77,022  osd/PG.cc:std::vector<int, std::allocator<int> >::operator=(std::vector<int, std::allocator<int> > const&)
     76,662  /usr/include/c++/4.7/bits/stl_pair.h:DispatchQueue::enqueue(Message*, int, unsigned long)
     73,786  /usr/include/c++/4.7/bits/stl_tree.h:std::map<entity_inst_t, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > >, std::less<entity_inst_t>, std::allocator<std::pair<entity_inst_t const, std::list<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > >, std::allocator<std::pair<unsigned int, std::pair<boost::intrusive_ptr<PG>, std::tr1::shared_ptr<OpRequest> > > > > > > >::operator[](entity_inst_t const&)
     72,503  msg/DispatchQueue.cc:std::_Rb_tree<Message*, std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > >, std::_Select1st<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > >, std::less<Message*>, std::allocator<std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > > >::_M_insert_unique(std::pair<Message* const, std::_Rb_tree_const_iterator<std::pair<double, Message*> > > const&)
     72,450  ???:0x0000000038055d2d [/usr/lib/valgrind/callgrind-amd64-linux]
     72,090  /usr/include/c++/4.7/bits/char_traits.h:std::_Rb_tree<coll_t, std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> >, std::_Select1st<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >, std::less<coll_t>, std::allocator<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > > >::_M_lower_bound(std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*, std::_Rb_tree_node<std::pair<coll_t const, std::tr1::weak_ptr<CollectionIndex> > >*, coll_t const&) [clone .isra.91]


[-- Attachment #3: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Scaling RBD module
  2013-09-24 21:16               ` Sage Weil
       [not found]                 ` <alpine.DEB.2.00.1309241413280.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
@ 2013-09-24 22:23                 ` Somnath Roy
  1 sibling, 0 replies; 12+ messages in thread
From: Somnath Roy @ 2013-09-24 22:23 UTC (permalink / raw)
  To: Sage Weil, Travis Rhoden
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Anirban Ray,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw


[-- Attachment #1.1: Type: text/plain, Size: 16342 bytes --]

Hi Sage,

We did quite a few experiments to see how Ceph read performance can scale up. Here is the summary.



1. First we tried to see how far a single-node cluster with one OSD can scale up. We started with the Cuttlefish release, with the entire OSD file system on an SSD. What we saw is that with 4K-sized objects and a single rados client on a dedicated 10G network, throughput can't go beyond a certain point.

We dug through the code and found that SimpleMessenger opens a single socket connection (per client) to talk to the OSD. Also, we saw there is only one dispatch queue (Dispatch thread) per SimpleMessenger to carry these requests to the OSD. We started adding more dispatcher threads to the dispatch queue and rearranged several locks in Pipe.cc to identify the bottleneck. What we ended up discovering is that there are bottlenecks both upstream and downstream at the OSD level, and that changing the locking scheme in the I/O path would affect a lot of other code (that we don't even know).

So, we stopped that activity and started working around the upstream bottleneck by introducing more clients against the single OSD. What we saw is that a single OSD does scale, but with a lot of CPU utilization: to produce ~40K IOPS (4K) it takes almost 12 cores of CPU.
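
A sketch of how such a multi-client load can be driven from a single box; the pool name, runtime and thread counts are placeholders rather than the exact values we used:

    # run several independent rados bench clients in parallel against the same pool
    for i in 1 2 3 4; do
        rados bench -p testpool 60 write -b 4096 -t 16 &
    done
    wait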

Another point: I didn't see this single OSD scale with the Dumpling release with multiple clients! Something changed.



2. After that, we set up a proper cluster with 3 high-performing nodes and 30 OSDs in total. Here also, we are seeing that a single rados bench client, as well as a single rbd client instance, does not scale beyond a certain limit. It is not able to generate much load, as node CPU utilization remains very low. But running multiple client instances, the performance scales until it hits the CPU limit.



So, it is pretty clear that we are not able to saturate anything with a single client, and that's why the 'noshare' option was very helpful for measuring the rbd performance benchmark. I have single-OSD/single-client callgrind data; the attachment is not going through to the list, I guess, which is why I can't send it to you.
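
For completeness, the per-image mapping that scales for us looks roughly like this (monitor addresses and the base64 secret are placeholders; the pool/image names are from our test setup):

    echo '<mon-1>:6789,<mon-2>:6789,<mon-3>:6789 name=admin,secret=<base64-key>,noshare test_rbd ceph_block_test' > /sys/bus/rbd/add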



Now, I am benchmarking radosgw and I think I am stuck with a similar bottleneck here. Could you please confirm whether radosgw also opens a single client instance to the cluster?

If so, is there any option similar to 'noshare' in this case? Here also, when creating multiple radosgw instances on separate nodes, the performance scales.

BTW, is there a way to run multiple radosgw instances on a single node, or does it have to be one per node?





Thanks & Regards

Somnath







-----Original Message-----
From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Sage Weil
Sent: Tuesday, September 24, 2013 2:16 PM
To: Travis Rhoden
Cc: Josh Durgin; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
Subject: Re: [ceph-users] Scaling RBD module



On Tue, 24 Sep 2013, Travis Rhoden wrote:

> This "noshare" option may have just helped me a ton -- I sure wish I

> would have asked similar questions sooner, because I have seen the

> same failure to scale.  =)

>

> One question -- when using the "noshare" option (or really, even

> without it) are there any practical limits on the number of RBDs that

> can be mounted?  I have servers with ~100 RBDs on them each, and am

> wondering if I switch them all over to using "noshare" if anything is

> going to blow up, use a ton more memory, etc.  Even without noshare,

> are there any known limits to how many RBDs can be mapped?



With noshare each mapped image will appear as a separate client instance, which means it will have it's own session with teh monitors and own TCP connections to the OSDs.  It may be a viable workaround for now but in general I would not recommend it.



I'm very curious what the scaling issue is with the shared client.  Do you have a working perf that can capture callgraph information on this machine?



sage



>

> Thanks!

>

>  - Travis

>

>

> On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org<mailto:Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>>

> wrote:

>       Thanks Josh !

>       I am able to successfully add this noshare option in the image

>       mapping now. Looking at dmesg output, I found that was indeed

>       the secret key problem. Block performance is scaling now.

>

>       Regards

>       Somnath

>

>       -----Original Message-----

>       From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org<mailto:ceph-devel-owner@vger.kernel.org>

>       [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Josh

>       Durgin

>       Sent: Thursday, September 19, 2013 12:24 PM

>       To: Somnath Roy

>       Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org<mailto:ceph-devel-fy+rA21nqHI@public.gmane.orgrnel.org>; Anirban Ray;

>       ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org<mailto:ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>

>       Subject: Re: [ceph-users] Scaling RBD module

>

>       On 09/19/2013 12:04 PM, Somnath Roy wrote:

>       > Hi Josh,

>       > Thanks for the information. I am trying to add the following

>       but hitting some permission issue.

>       >

>       > root@emsclient:/etc# echo

>       <mon-1>:6789,<mon-2>:6789,<mon-3>:6789

>       > name=admin,key=client.admin,noshare test_rbd ceph_block_test'

>       >

>       > /sys/bus/rbd/add

>       > -bash: echo: write error: Operation not permitted

>

>       If you check dmesg, it will probably show an error trying to

>       authenticate to the cluster.

>

>       Instead of key=client.admin, you can pass the base64 secret

>       value as shown in 'ceph auth list' with the

>       secret=XXXXXXXXXXXXXXXXXXXXX option.

>

>       BTW, there's a ticket for adding the noshare option to rbd map

>       so using the sysfs interface like this is never necessary:

>

>       http://tracker.ceph.com/issues/6264

>

>       Josh

>

>       > Here is the contents of rbd directory..

>       >

>       > root@emsclient:/sys/bus/rbd# ll

>       > total 0

>       > drwxr-xr-x  4 root root    0 Sep 19 11:59 ./

>       > drwxr-xr-x 30 root root    0 Sep 13 11:41 ../

>       > --w-------  1 root root 4096 Sep 19 11:59 add

>       > drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/

>       > drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/

>       > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe

>       > --w-------  1 root root 4096 Sep 19 12:03 drivers_probe

>       > --w-------  1 root root 4096 Sep 19 12:03 remove

>       > --w-------  1 root root 4096 Sep 19 11:59 uevent

>       >

>       >

>       > I checked even if I am logged in as root , I can't write

>       anything on /sys.

>       >

>       > Here is the Ubuntu version I am using..

>       >

>       > root@emsclient:/etc# lsb_release -a

>       > No LSB modules are available.

>       > Distributor ID: Ubuntu

>       > Description:    Ubuntu 13.04

>       > Release:        13.04

>       > Codename:       raring

>       >

>       > Here is the mount information....

>       >

>       > root@emsclient:/etc# mount

>       > /dev/mapper/emsclient--vg-root on / type ext4

>       (rw,errors=remount-ro)

>       > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys

>       type

>       > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type

>       tmpfs (rw)

>       > none on /sys/fs/fuse/connections type fusectl (rw) none on

>       > /sys/kernel/debug type debugfs (rw) none on

>       /sys/kernel/security type

>       > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755)

>       devpts on

>       > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)

>       > tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)

>       > none on /run/lock type tmpfs

>       (rw,noexec,nosuid,nodev,size=5242880)

>       > none on /run/shm type tmpfs (rw,nosuid,nodev) none on

>       /run/user type

>       > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)

>       > /dev/sda1 on /boot type ext2 (rw)

>       > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)

>       >

>       >

>       > Any idea what went wrong here ?

>       >

>       > Thanks & Regards

>       > Somnath

>       >

>       > -----Original Message-----

>       > From: Josh Durgin [mailto:josh.durgin-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]

>       > Sent: Wednesday, September 18, 2013 6:10 PM

>       > To: Somnath Roy

>       > Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org<mailto:ceph-devel@vger.kernel.org>; Anirban Ray;

>       > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org<mailto:ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>

>       > Subject: Re: [ceph-users] Scaling RBD module

>       >

>       > On 09/17/2013 03:30 PM, Somnath Roy wrote:

>       >> Hi,

>       >> I am running Ceph on a 3 node cluster and each of my server

>       node is running 10 OSDs, one for each disk. I have one admin

>       node and all the nodes are connected with 2 X 10G network. One

>       network is for cluster and other one configured as public

>       network.

>       >>

>       >> Here is the status of my cluster.

>       >>

>       >> ~/fio_test# ceph -s

>       >>

>       >>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023

>       >>      health HEALTH_WARN clock skew detected on mon.

>       <server-name-2>, mon. <server-name-3>

>       >>      monmap e1: 3 mons at

>       {<server-name-1>=xxx.xxx.xxx.xxx:6789/0,

>       <server-name-2>=xxx.xxx.xxx.xxx:6789/0,

>       <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64,

>       quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>

>       >>      osdmap e391: 30 osds: 30 up, 30 in

>       >>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB

>       data, 27912 MB used, 11145 GB / 11172 GB avail

>       >>      mdsmap e1: 0/0/1 up

>       >>

>       >>

>       >> I started with rados bench command to benchmark the read

>       performance of this Cluster on a large pool (~10K PGs) and found

>       that each rados client has a limitation. Each client can only

>       drive up to a certain mark. Each server  node cpu utilization

>       shows it is  around 85-90% idle and the admin node (from where

>       rados client is running) is around ~80-85% idle. I am trying

>       with 4K object size.

>       >

>       > Note that rados bench with 4k objects is different from rbd

>       with 4k-sized I/Os - rados bench sends each request to a new

>       object, while rbd objects are 4M by default.

>       >

>       >> Now, I started running more clients on the admin node and the

>       performance is scaling till it hits the client cpu limit. Server

>       still has the cpu of 30-35% idle. With small object size I must

>       say that the ceph per osd cpu utilization is not promising!

>       >>

>       >> After this, I started testing the rados block interface with

>       kernel rbd module from my admin node.

>       >> I have created 8 images mapped on the pool having around 10K

>       PGs and I am not able to scale up the performance by running fio

>       (either by creating a software raid or running on individual

>       /dev/rbd* instances). For example, running multiple fio

>       instances (one in /dev/rbd1 and the other in /dev/rbd2)  the

>       performance I am getting is half of what I am getting if running

>       one instance. Here is my fio job script.

>       >>

>       >> [random-reads]

>       >> ioengine=libaio

>       >> iodepth=32

>       >> filename=/dev/rbd1

>       >> rw=randread

>       >> bs=4k

>       >> direct=1

>       >> size=2G

>       >> numjobs=64

>       >>

>       >> Let me know if I am following the proper procedure or not.

>       >>

>       >> But, If my understanding is correct, kernel rbd module is

>       acting as a client to the cluster and in one admin node I can

>       run only one of such kernel instance.

>       >> If so, I am then limited to the client bottleneck that I

>       stated earlier. The cpu utilization of the server side is around

>       85-90% idle, so, it is clear that client is not driving.

>       >>

>       >> My question is, is there any way to hit the cluster  with

>       more client from a single box while testing the rbd module ?

>       >

>       > You can run multiple librbd instances easily (for example with

>       multiple runs of the rbd bench-write command).

>       >

>       > The kernel rbd driver uses the same rados client instance for

>       multiple block devices by default. There's an option (noshare)

>       to use a new rados client instance for a newly mapped device,

>       but it's not exposed by the rbd cli. You need to use the sysfs

>       interface that 'rbd map' uses instead.

>       >

>       > Once you've used rbd map once on a machine, the kernel will

>       already have the auth key stored, and you can use:

>       >

>       > echo '1.2.3.4:6789 name=admin,key=client.admin,noshare

>       poolname

>       > imagename' > /sys/bus/rbd/add

>       >

>       > Where 1.2.3.4:6789 is the address of a monitor, and you're

>       connecting as client.admin.

>       >

>       > You can use 'rbd unmap' as usual.

>       >

>       > Josh

>       >

>       >

>       > ________________________________

>       >

>       > PLEASE NOTE: The information contained in this electronic mail

>       message is intended only for the use of the designated

>       recipient(s) named above. If the reader of this message is not

>       the intended recipient, you are hereby notified that you have

>       received this message in error and that any review,

>       dissemination, distribution, or copying of this message is

>       strictly prohibited. If you have received this communication in

>       error, please notify the sender by telephone or e-mail (as shown

>       above) immediately and destroy any and all copies of this

>       message in your possession (whether hard copies or

>       electronically stored copies).

>       >

>       >

>

> --

> To unsubscribe from this list: send the line "unsubscribe ceph-devel"

> in the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org<mailto:majordomo@vger.kernel.org> More majordomo

> info at  http://vger.kernel.org/majordomo-info.html

>

>

> _______________________________________________

> ceph-users mailing list

> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org<mailto:ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>

> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

>

>

>

>

________________________________

PLEASE NOTE: The information contained in this electronic mail message is intended only for the use of the designated recipient(s) named above. If the reader of this message is not the intended recipient, you are hereby notified that you have received this message in error and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify the sender by telephone or e-mail (as shown above) immediately and destroy any and all copies of this message in your possession (whether hard copies or electronically stored copies).


[-- Attachment #1.2: Type: text/html, Size: 35564 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 12+ messages in thread

* RE: [ceph-users] Scaling RBD module
  2013-09-24 22:17                   ` Somnath Roy
@ 2013-09-24 22:47                     ` Sage Weil
       [not found]                       ` <alpine.DEB.2.00.1309241535430.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Sage Weil @ 2013-09-24 22:47 UTC (permalink / raw)
  To: Somnath Roy
  Cc: Travis Rhoden, Josh Durgin, ceph-devel, Anirban Ray, ceph-users

[-- Attachment #1: Type: TEXT/PLAIN, Size: 18447 bytes --]

Hi Somnath!

On Tue, 24 Sep 2013, Somnath Roy wrote:
> 
> Hi Sage,
> 
> We did quite a few experiment to see how ceph read performance can scale up.
> Here is the summary.
> 
>  
> 
> 1.
> 
> First we tried to see how far a single node cluster with one osd can scale
> up. We started with cuttlefish release and the entire osd file system is on
> the ssd. What we saw with 4K size object and with single rados client with
> dedicated 10G network, throughput can't go beyond a certain point.

Are you using 'rados bench' to generate this load or something else?  
We've noticed that individual rados bench commands do not scale beyond a 
point but have never looked into it; the problem may be in the bench code 
and not in librados or SimpleMessenger.

> We dig through the code and found out SimpleMessenger is opening single
> socket connection (per client)to talk to the osd. Also, we saw there is only
> one dispatcher Q (Dispatch thread)/ SimpleMessenger to carry these requests
> to OSD. We started adding more dispatcher threads in Dispatch Q, rearrange
> several locks in the Pipe.cc to identify the bottleneck. What we end up
> discovering is that there is bottleneck both in upstream as well as in the
> downstream at osd level and changing the locking scheme in io path  will
> affect lot of other codes (that we don't even know).
> 
> So, we stopped that activity and started workaround the upstream bottleneck
> by introducing more clients to the single OSD. What we saw single OSD is
> scaling with lot of cpu utilization. To produce ~40K iops (4K) it is taking
> almost 12 core of cpu.

Just to make sure I understand: the single OSD dispatch queue does not 
become a problem with multiple clients?

Possibilities that come to mind:

- DispatchQueue is doing some funny stuff to keep individual clients' 
messages ordered but to fairly process requests from multiple clients.  
There could easily be a problem with the per-client queue portion of this.

- Pipe's use of MSG_MORE is making the TCP stream efficient... you might 
try setting 'ms tcp nodelay = false' (see the sketch after this list).

- The message encode is happening in the thread that sends messages over 
the wire.  Maybe doing it in send_message() instead of writer() will keep 
that on a separate core from the thread that's shoveling data into the 
socket.
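
A minimal sketch of that second suggestion -- untested, and assuming the 
option is simply read from ceph.conf by the daemons and librados clients 
at startup:

    [global]
            ms tcp nodelay = false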

> Another point, I didn't see this single osd scale with the Dumpling release
> with the multiple clients !! Something changed..

What is it with dumpling?

> 2.   After that, we setup a proper cluster with 3 high performing nodes and
> total 30 osds. Here also, we are seeing single rados bech client as well as
> rbd client instance is not scaling beyond a certain limit. It is not able to
> generate much load as node cpu utilization remains very low. But running
> multiple client instance the performance is scaling till hit the cpu limit.
> 
> So, it is pretty clear we are not able to saturate anything with single
> client and that's why the 'noshare' option was very helpful to measure the
> rbd performance benchmark. I have a single osd/single client level call
> grind  data attached here.

Something from perf that shows a call graph would be more helpful to 
identify where things are waiting.  We haven't done much optimizing at 
this level at all, so these results aren't entirely surprising.
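
Something along these lines would do it; the OSD pid, pool name and 
output paths are placeholders:

    # on one of the OSD nodes, sample a single ceph-osd for a minute
    perf record -g -p <ceph-osd-pid> -- sleep 60
    perf report --stdio > osd-callgraph.txt

    # on the client box, wrap the benchmark itself
    # (assumes the pool was populated by an earlier 'write' run)
    perf record -g -- rados bench -p testpool 60 seq -t 32
    perf report --stdio > client-callgraph.txt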

> Now, I am doing the benchmark for radosgw and I think I am stuck with
> similar bottleneck here. Could you please confirm that if radosgw also
> opening single client instance to the cluster?                                                                         

It is: each radosgw has a single librados client instance.

> If so, is there any similar option like 'noshare' in this case ? Here also,
> creating multiple radosgw instance on separate nodes the performance is
> scaling.

No, but

> BTW, is there a way to run multiple radosgw to a single node or it has to be
> one/node ?

yes.  You just need to make sure they have different fastcgi sockets they 
listen on and probably set up a separate web server in front of each one.
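
Roughly like this -- the section names, socket paths and log files below 
are made up for illustration:

    [client.radosgw.gw1]
            host = gateway-node
            rgw socket path = /var/run/ceph/radosgw.gw1.sock
            log file = /var/log/ceph/radosgw.gw1.log

    [client.radosgw.gw2]
            host = gateway-node
            rgw socket path = /var/run/ceph/radosgw.gw2.sock
            log file = /var/log/ceph/radosgw.gw2.log

Then start one radosgw process per section (radosgw -n client.radosgw.gw1, 
radosgw -n client.radosgw.gw2) and point a separate web server / fastcgi 
vhost at each socket.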

I think the next step to understanding what is going on is getting the 
right profiling tools in place so we can see where the client threads are 
spending their (non-idle and idle) time...

sage


> 
>  
> 
> Thanks & Regards
> 
> Somnath
> 
>  
> 
>    
> 
>  
> 
> -----Original Message-----
> From: ceph-devel-owner@vger.kernel.org
> [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Sage Weil
> Sent: Tuesday, September 24, 2013 2:16 PM
> To: Travis Rhoden
> Cc: Josh Durgin; ceph-devel@vger.kernel.org; Anirban Ray;
> ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Scaling RBD module
> 
>  
> 
> On Tue, 24 Sep 2013, Travis Rhoden wrote:
> 
> > This "noshare" option may have just helped me a ton -- I sure wish I
> 
> > would have asked similar questions sooner, because I have seen the
> 
> > same failure to scale.  =)
> 
> >
> 
> > One question -- when using the "noshare" option (or really, even
> 
> > without it) are there any practical limits on the number of RBDs that
> 
> > can be mounted?  I have servers with ~100 RBDs on them each, and am
> 
> > wondering if I switch them all over to using "noshare" if anything is
> 
> > going to blow up, use a ton more memory, etc.  Even without noshare,
> 
> > are there any known limits to how many RBDs can be mapped?
> 
>  
> 
> With noshare each mapped image will appear as a separate client instance,
> which means it will have it's own session with teh monitors and own TCP
> connections to the OSDs.  It may be a viable workaround for now but in
> general I would not recommend it.
> 
>  
> 
> I'm very curious what the scaling issue is with the shared client.  Do you
> have a working perf that can capture callgraph information on this machine?
> 
>  
> 
> sage
> 
>  
> 
> >
> 
> > Thanks!
> 
> >
> 
> >  - Travis
> 
> >
> 
> >
> 
> > On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy <Somnath.Roy@sandisk.com>
> 
> > wrote:
> 
> >       Thanks Josh !
> 
> >       I am able to successfully add this noshare option in the image
> 
> >       mapping now. Looking at dmesg output, I found that was indeed
> 
> >       the secret key problem. Block performance is scaling now.
> 
> >
> 
> >       Regards
> 
> >       Somnath
> 
> >
> 
> >       -----Original Message-----
> 
> >       From: ceph-devel-owner@vger.kernel.org
> 
> >       [mailto:ceph-devel-owner@vger.kernel.org] On Behalf Of Josh
> 
> >       Durgin
> 
> >       Sent: Thursday, September 19, 2013 12:24 PM
> 
> >       To: Somnath Roy
> 
> >       Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray;
> 
> >       ceph-users@lists.ceph.com
> 
> >       Subject: Re: [ceph-users] Scaling RBD module
> 
> >
> 
> >       On 09/19/2013 12:04 PM, Somnath Roy wrote:
> 
> >       > Hi Josh,
> 
> >       > Thanks for the information. I am trying to add the following
> 
> >       but hitting some permission issue.
> 
> >       >
> 
> >       > root@emsclient:/etc# echo
> 
> >       <mon-1>:6789,<mon-2>:6789,<mon-3>:6789
> 
> >       > name=admin,key=client.admin,noshare test_rbd ceph_block_test'
> 
> >       >
> 
> >       > /sys/bus/rbd/add
> 
> >       > -bash: echo: write error: Operation not permitted
> 
> >
> 
> >       If you check dmesg, it will probably show an error trying to
> 
> >       authenticate to the cluster.
> 
> >
> 
> >       Instead of key=client.admin, you can pass the base64 secret
> 
> >       value as shown in 'ceph auth list' with the
> 
> >       secret=XXXXXXXXXXXXXXXXXXXXX option.
> 
> >
> 
> >       BTW, there's a ticket for adding the noshare option to rbd map
> 
> >       so using the sysfs interface like this is never necessary:
> 
> >
> 
> >       http://tracker.ceph.com/issues/6264
> 
> >
> 
> >       Josh
> 
> >
> 
> >       > Here is the contents of rbd directory..
> 
> >       >
> 
> >       > root@emsclient:/sys/bus/rbd# ll
> 
> >       > total 0
> 
> >       > drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
> 
> >       > drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
> 
> >       > --w-------  1 root root 4096 Sep 19 11:59 add
> 
> >       > drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
> 
> >       > drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
> 
> >       > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
> 
> >       > --w-------  1 root root 4096 Sep 19 12:03 drivers_probe
> 
> >       > --w-------  1 root root 4096 Sep 19 12:03 remove
> 
> >       > --w-------  1 root root 4096 Sep 19 11:59 uevent
> 
> >       >
> 
> >       >
> 
> >       > I checked even if I am logged in as root , I can't write
> 
> >       anything on /sys.
> 
> >       >
> 
> >       > Here is the Ubuntu version I am using..
> 
> >       >
> 
> >       > root@emsclient:/etc# lsb_release -a
> 
> >       > No LSB modules are available.
> 
> >       > Distributor ID: Ubuntu
> 
> >       > Description:    Ubuntu 13.04
> 
> >       > Release:        13.04
> 
> >       > Codename:       raring
> 
> >       >
> 
> >       > Here is the mount information....
> 
> >       >
> 
> >       > root@emsclient:/etc# mount
> 
> >       > /dev/mapper/emsclient--vg-root on / type ext4
> 
> >       (rw,errors=remount-ro)
> 
> >       > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys
> 
> >       type
> 
> >       > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type
> 
> >       tmpfs (rw)
> 
> >       > none on /sys/fs/fuse/connections type fusectl (rw) none on
> 
> >       > /sys/kernel/debug type debugfs (rw) none on
> 
> >       /sys/kernel/security type
> 
> >       > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755)
> 
> >       devpts on
> 
> >       > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
> 
> >       > tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
> 
> >       > none on /run/lock type tmpfs
> 
> >       (rw,noexec,nosuid,nodev,size=5242880)
> 
> >       > none on /run/shm type tmpfs (rw,nosuid,nodev) none on
> 
> >       /run/user type
> 
> >       > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
> 
> >       > /dev/sda1 on /boot type ext2 (rw)
> 
> >       > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
> 
> >       >
> 
> >       >
> 
> >       > Any idea what went wrong here ?
> 
> >       >
> 
> >       > Thanks & Regards
> 
> >       > Somnath
> 
> >       >
> 
> >       > -----Original Message-----
> 
> >       > From: Josh Durgin [mailto:josh.durgin@inktank.com]
> 
> >       > Sent: Wednesday, September 18, 2013 6:10 PM
> 
> >       > To: Somnath Roy
> 
> >       > Cc: Sage Weil; ceph-devel@vger.kernel.org; Anirban Ray;
> 
> >       > ceph-users@lists.ceph.com
> 
> >       > Subject: Re: [ceph-users] Scaling RBD module
> 
> >       >
> 
> >       > On 09/17/2013 03:30 PM, Somnath Roy wrote:
> 
> >       >> Hi,
> 
> >       >> I am running Ceph on a 3 node cluster and each of my server
> 
> >       node is running 10 OSDs, one for each disk. I have one admin
> 
> >       node and all the nodes are connected with 2 X 10G network. One
> 
> >       network is for cluster and other one configured as public
> 
> >       network.
> 
> >       >>
> 
> >       >> Here is the status of my cluster.
> 
> >       >>
> 
> >       >> ~/fio_test# ceph -s
> 
> >       >>
> 
> >       >>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023
> 
> >       >>      health HEALTH_WARN clock skew detected on mon.
> 
> >       <server-name-2>, mon. <server-name-3>
> 
> >       >>      monmap e1: 3 mons at
> 
> >       {<server-name-1>=xxx.xxx.xxx.xxx:6789/0,
> 
> >       <server-name-2>=xxx.xxx.xxx.xxx:6789/0,
> 
> >       <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64,
> 
> >       quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
> 
> >       >>      osdmap e391: 30 osds: 30 up, 30 in
> 
> >       >>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB
> 
> >       data, 27912 MB used, 11145 GB / 11172 GB avail
> 
> >       >>      mdsmap e1: 0/0/1 up
> 
> >       >>
> 
> >       >>
> 
> >       >> I started with rados bench command to benchmark the read
> 
> >       performance of this Cluster on a large pool (~10K PGs) and found
> 
> >       that each rados client has a limitation. Each client can only
> 
> >       drive up to a certain mark. Each server  node cpu utilization
> 
> >       shows it is  around 85-90% idle and the admin node (from where
> 
> >       rados client is running) is around ~80-85% idle. I am trying
> 
> >       with 4K object size.
> 
> >       >
> 
> >       > Note that rados bench with 4k objects is different from rbd
> 
> >       with 4k-sized I/Os - rados bench sends each request to a new
> 
> >       object, while rbd objects are 4M by default.
> 
> >       >
> 
> >       >> Now, I started running more clients on the admin node and the
> 
> >       performance is scaling till it hits the client cpu limit. Server
> 
> >       still has the cpu of 30-35% idle. With small object size I must
> 
> >       say that the ceph per osd cpu utilization is not promising!
> 
> >       >>
> 
> >       >> After this, I started testing the rados block interface with
> 
> >       kernel rbd module from my admin node.
> 
> >       >> I have created 8 images mapped on the pool having around 10K
> 
> >       PGs and I am not able to scale up the performance by running fio
> 
> >       (either by creating a software raid or running on individual
> 
> >       /dev/rbd* instances). For example, running multiple fio
> 
> >       instances (one in /dev/rbd1 and the other in /dev/rbd2)  the
> 
> >       performance I am getting is half of what I am getting if running
> 
> >       one instance. Here is my fio job script.
> 
> >       >>
> 
> >       >> [random-reads]
> 
> >       >> ioengine=libaio
> 
> >       >> iodepth=32
> 
> >       >> filename=/dev/rbd1
> 
> >       >> rw=randread
> 
> >       >> bs=4k
> 
> >       >> direct=1
> 
> >       >> size=2G
> 
> >       >> numjobs=64
> 
> >       >>
> 
> >       >> Let me know if I am following the proper procedure or not.
> 
> >       >>
> 
> >       >> But, If my understanding is correct, kernel rbd module is
> 
> >       acting as a client to the cluster and in one admin node I can
> 
> >       run only one of such kernel instance.
> 
> >       >> If so, I am then limited to the client bottleneck that I
> 
> >       stated earlier. The cpu utilization of the server side is around
> 
> >       85-90% idle, so, it is clear that client is not driving.
> 
> >       >>
> 
> >       >> My question is, is there any way to hit the cluster  with
> 
> >       more client from a single box while testing the rbd module ?
> 
> >       >
> 
> >       > You can run multiple librbd instances easily (for example with
> 
> >       multiple runs of the rbd bench-write command).
> 
> >       >
> 
> >       > The kernel rbd driver uses the same rados client instance for
> 
> >       multiple block devices by default. There's an option (noshare)
> 
> >       to use a new rados client instance for a newly mapped device,
> 
> >       but it's not exposed by the rbd cli. You need to use the sysfs
> 
> >       interface that 'rbd map' uses instead.
> 
> >       >
> 
> >       > Once you've used rbd map once on a machine, the kernel will
> 
> >       already have the auth key stored, and you can use:
> 
> >       >
> 
> >       > echo '1.2.3.4:6789 name=admin,key=client.admin,noshare
> 
> >       poolname
> 
> >       > imagename' > /sys/bus/rbd/add
> 
> >       >
> 
> >       > Where 1.2.3.4:6789 is the address of a monitor, and you're
> 
> >       connecting as client.admin.
> 
> >       >
> 
> >       > You can use 'rbd unmap' as usual.
> 
> >       >
> 
> >       > Josh
> 
> >       >
> 
> >       >
> 
> >       > ________________________________
> 
> >       >
> 
> >       > PLEASE NOTE: The information contained in this electronic mail
> 
> >       message is intended only for the use of the designated
> 
> >       recipient(s) named above. If the reader of this message is not
> 
> >       the intended recipient, you are hereby notified that you have
> 
> >       received this message in error and that any review,
> 
> >       dissemination, distribution, or copying of this message is
> 
> >       strictly prohibited. If you have received this communication in
> 
> >       error, please notify the sender by telephone or e-mail (as shown
> 
> >       above) immediately and destroy any and all copies of this
> 
> >       message in your possession (whether hard copies or
> 
> >       electronically stored copies).
> 
> >       >
> 
> >       >
> 
> >
> 
> > --
> 
> > To unsubscribe from this list: send the line "unsubscribe ceph-devel"
> 
> > in the body of a message to majordomo@vger.kernel.org More majordomo
> 
> > info at  http://vger.kernel.org/majordomo-info.html
> 
> >
> 
> >
> 
> > _______________________________________________
> 
> > ceph-users mailing list
> 
> > ceph-users@lists.ceph.com
> 
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> >
> 
> >
> 
> >
> 
> >
> 
> 
> ____________________________________________________________________________
> 
> PLEASE NOTE: The information contained in this electronic mail message is
> intended only for the use of the designated recipient(s) named above. If the
> reader of this message is not the intended recipient, you are hereby
> notified that you have received this message in error and that any review,
> dissemination, distribution, or copying of this message is strictly
> prohibited. If you have received this communication in error, please notify
> the sender by telephone or e-mail (as shown above) immediately and destroy
> any and all copies of this message in your possession (whether hard copies
> or electronically stored copies).
> 
> 
> 
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Scaling RBD module
       [not found]                       ` <alpine.DEB.2.00.1309241535430.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
@ 2013-09-24 23:59                         ` Somnath Roy
  0 siblings, 0 replies; 12+ messages in thread
From: Somnath Roy @ 2013-09-24 23:59 UTC (permalink / raw)
  To: Sage Weil
  Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Anirban Ray,
	ceph-users-idqoXFIVOFJgJs9I8MT0rw

Hi Sage,
Thanks for your input. I will try those. Please see my response inline.

Thanks & Regards
Somnath

-----Original Message-----
From: Sage Weil [mailto:sage-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
Sent: Tuesday, September 24, 2013 3:47 PM
To: Somnath Roy
Cc: Travis Rhoden; Josh Durgin; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray; ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
Subject: RE: [ceph-users] Scaling RBD module

Hi Somnath!

On Tue, 24 Sep 2013, Somnath Roy wrote:
>
> Hi Sage,
>
> We did quite a few experiment to see how ceph read performance can scale up.
> Here is the summary.
>
>
>
> 1.
>
> First we tried to see how far a single node cluster with one osd can
> scale up. We started with cuttlefish release and the entire osd file
> system is on the ssd. What we saw with 4K size object and with single
> rados client with dedicated 10G network, throughput can't go beyond a certain point.

Are you using 'rados bench' to generate this load or something else?
We've noticed that individual rados bench commands do not scale beyond a point but have never looked into it; the problem may be in the bench code and not in librados or SimpleMessenger.

[Somnath] Yes, for generating load and measuring the performance at the rados level.

> We dig through the code and found out SimpleMessenger is opening
> single socket connection (per client)to talk to the osd. Also, we saw
> there is only one dispatcher Q (Dispatch thread)/ SimpleMessenger to
> carry these requests to OSD. We started adding more dispatcher threads
> in Dispatch Q, rearrange several locks in the Pipe.cc to identify the
> bottleneck. What we end up discovering is that there is bottleneck
> both in upstream as well as in the downstream at osd level and
> changing the locking scheme in io path  will affect lot of other codes (that we don't even know).
>
> So, we stopped that activity and started workaround the upstream
> bottleneck by introducing more clients to the single OSD. What we saw
> single OSD is scaling with lot of cpu utilization. To produce ~40K
> iops (4K) it is taking almost 12 core of cpu.

Just to make sure I understand: the single OSD dispatch queue does not become a problem with multiple clients?

[Somnath] We saw that with a single client/single OSD, increasing the dispatch threads up to 3 gave some improvement, but not beyond 3.
This is what we were also wondering! Looking at the architecture, it seems that if the upstream bottleneck is removed, this might be the next bottleneck: the next I/O request will not reach the OSD worker queue until OSD::ms_dispatch() completes, and there is a lot of work happening in that function.
At the top of the function it takes the OSD-level lock, so increasing the threads does not help; I think rearranging the locks will help here.

Possibilities that come to mind:

- DispatchQueue is doing some funny stuff to keep individual clients'
messages ordered but to fairly process requests from multiple clients.
There could easily be a problem with the per-client queue portion of this.

- Pipe's use of MSG_MORE is making the TCP stream efficient... you might try setting 'ms tcp nodelay = false'.

[Somnath] I will try this.

- The message encode is happening in the thread that sends messages over the wire.  Maybe doing it in send_message() instead of writer() will keep that on a separate core than the thread that's shoveling data into the socket.

[Somnath] Are you suggesting moving the following code snippet from Pipe::writer() to SimpleMessenger::_send_message()?

        // encode and copy out of *m
        m->encode(connection_state->get_features(), !msgr->cct->_conf->ms_nocrc);

> Another point, I didn't see this single osd scale with the Dumpling
> release with the multiple clients !! Something changed..

What is it with dumpling?

[Somnath] We tried to compare, but there were a lot of changes, so we gave up :-( ... But I think eventually, if we want to increase the overall throughput, we need to make the individual OSD efficient (in both CPU usage and performance). So, we will definitely come back to this.

> 2.   After that, we setup a proper cluster with 3 high performing
> nodes and total 30 osds. Here also, we are seeing single rados bech
> client as well as rbd client instance is not scaling beyond a certain
> limit. It is not able to generate much load as node cpu utilization
> remains very low. But running multiple client instance the performance is scaling till hit the cpu limit.
>
> So, it is pretty clear we are not able to saturate anything with
> single client and that's why the 'noshare' option was very helpful to
> measure the rbd performance benchmark. I have a single osd/single
> client level call grind  data attached here.

Something from perf that shows a call graph would be more helpful to identify where things are waiting.  We haven't done much optimizing at this level at all, so these results aren't entirely surprising.

[Somnath] We will get this.

> Now, I am doing the benchmark for radosgw and I think I am stuck with
> similar bottleneck here. Could you please confirm that if radosgw also
> opening single client instance to the cluster?

It is: each radosgw has a single librados client instance.

> If so, is there any similar option like 'noshare' in this case ? Here
> also, creating multiple radosgw instance on separate nodes the
> performance is scaling.

No, but

> BTW, is there a way to run multiple radosgw to a single node or it has
> to be one/node ?

yes.  You just need to make sure they have different fastcgi sockets they listen on and probably set up a separate web server in front of each one.

I think the next step to understanding what is going on is getting the right profiling tools in place so we can see where the client threads are spending their (non-idle and idle) time...

[Somnath] How do you want me to do this? Run the profiler on rados bench?

sage


>
>
>
> Thanks & Regards
>
> Somnath
>
>
>
>
>
>
>
> -----Original Message-----
> From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Sage Weil
> Sent: Tuesday, September 24, 2013 2:16 PM
> To: Travis Rhoden
> Cc: Josh Durgin; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> Subject: Re: [ceph-users] Scaling RBD module
>
>
>
> On Tue, 24 Sep 2013, Travis Rhoden wrote:
>
> > This "noshare" option may have just helped me a ton -- I sure wish I
>
> > would have asked similar questions sooner, because I have seen the
>
> > same failure to scale.  =)
>
> >
>
> > One question -- when using the "noshare" option (or really, even
>
> > without it) are there any practical limits on the number of RBDs
> > that
>
> > can be mounted?  I have servers with ~100 RBDs on them each, and am
>
> > wondering if I switch them all over to using "noshare" if anything
> > is
>
> > going to blow up, use a ton more memory, etc.  Even without noshare,
>
> > are there any known limits to how many RBDs can be mapped?
>
>
>
> With noshare each mapped image will appear as a separate client
> instance, which means it will have it's own session with teh monitors
> and own TCP connections to the OSDs.  It may be a viable workaround
> for now but in general I would not recommend it.
>
>
>
> I'm very curious what the scaling issue is with the shared client.  Do
> you have a working perf that can capture callgraph information on this machine?
>
>
>
> sage
>
>
>
> >
>
> > Thanks!
>
> >
>
> >  - Travis
>
> >
>
> >
>
> > On Thu, Sep 19, 2013 at 8:03 PM, Somnath Roy
> > <Somnath.Roy-XdAiOPVOjttBDgjK7y7TUQ@public.gmane.org>
>
> > wrote:
>
> >       Thanks Josh !
>
> >       I am able to successfully add this noshare option in the image
>
> >       mapping now. Looking at dmesg output, I found that was indeed
>
> >       the secret key problem. Block performance is scaling now.
>
> >
>
> >       Regards
>
> >       Somnath
>
> >
>
> >       -----Original Message-----
>
> >       From: ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>
> >       [mailto:ceph-devel-owner-u79uwXL29TY76Z2rM5mHXA@public.gmane.org] On Behalf Of Josh
>
> >       Durgin
>
> >       Sent: Thursday, September 19, 2013 12:24 PM
>
> >       To: Somnath Roy
>
> >       Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
>
> >       ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>
> >       Subject: Re: [ceph-users] Scaling RBD module
>
> >
>
> >       On 09/19/2013 12:04 PM, Somnath Roy wrote:
>
> >       > Hi Josh,
>
> >       > Thanks for the information. I am trying to add the following
>
> >       but hitting some permission issue.
>
> >       >
>
> >       > root@emsclient:/etc# echo
>
> >       <mon-1>:6789,<mon-2>:6789,<mon-3>:6789
>
> >       > name=admin,key=client.admin,noshare test_rbd ceph_block_test'
>
> >       >
>
> >       > /sys/bus/rbd/add
>
> >       > -bash: echo: write error: Operation not permitted
>
> >
>
> >       If you check dmesg, it will probably show an error trying to
>
> >       authenticate to the cluster.
>
> >
>
> >       Instead of key=client.admin, you can pass the base64 secret
>
> >       value as shown in 'ceph auth list' with the
>
> >       secret=XXXXXXXXXXXXXXXXXXXXX option.
>
> >
>
> >       BTW, there's a ticket for adding the noshare option to rbd map
>
> >       so using the sysfs interface like this is never necessary:
>
> >
>
> >       http://tracker.ceph.com/issues/6264
>
> >
>
> >       Josh
>
> >
>
> >       > Here is the contents of rbd directory..
>
> >       >
>
> >       > root@emsclient:/sys/bus/rbd# ll
>
> >       > total 0
>
> >       > drwxr-xr-x  4 root root    0 Sep 19 11:59 ./
>
> >       > drwxr-xr-x 30 root root    0 Sep 13 11:41 ../
>
> >       > --w-------  1 root root 4096 Sep 19 11:59 add
>
> >       > drwxr-xr-x  2 root root    0 Sep 19 12:03 devices/
>
> >       > drwxr-xr-x  2 root root    0 Sep 19 12:03 drivers/
>
> >       > -rw-r--r--  1 root root 4096 Sep 19 12:03 drivers_autoprobe
>
> >       > --w-------  1 root root 4096 Sep 19 12:03 drivers_probe
>
> >       > --w-------  1 root root 4096 Sep 19 12:03 remove
>
> >       > --w-------  1 root root 4096 Sep 19 11:59 uevent
>
> >       >
>
> >       >
>
> >       > I checked even if I am logged in as root , I can't write
>
> >       anything on /sys.
>
> >       >
>
> >       > Here is the Ubuntu version I am using..
>
> >       >
>
> >       > root@emsclient:/etc# lsb_release -a
>
> >       > No LSB modules are available.
>
> >       > Distributor ID: Ubuntu
>
> >       > Description:    Ubuntu 13.04
>
> >       > Release:        13.04
>
> >       > Codename:       raring
>
> >       >
>
> >       > Here is the mount information....
>
> >       >
>
> >       > root@emsclient:/etc# mount
>
> >       > /dev/mapper/emsclient--vg-root on / type ext4
>
> >       (rw,errors=remount-ro)
>
> >       > proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on
> >/sys
>
> >       type
>
> >       > sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/cgroup type
>
> >       tmpfs (rw)
>
> >       > none on /sys/fs/fuse/connections type fusectl (rw) none on
>
> >       > /sys/kernel/debug type debugfs (rw) none on
>
> >       /sys/kernel/security type
>
> >       > securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755)
>
> >       devpts on
>
> >       > /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
>
> >       > tmpfs on /run type tmpfs
> >(rw,noexec,nosuid,size=10%,mode=0755)
>
> >       > none on /run/lock type tmpfs
>
> >       (rw,noexec,nosuid,nodev,size=5242880)
>
> >       > none on /run/shm type tmpfs (rw,nosuid,nodev) none on
>
> >       /run/user type
>
> >       > tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
>
> >       > /dev/sda1 on /boot type ext2 (rw)
>
> >       > /dev/mapper/emsclient--vg-home on /home type ext4 (rw)
>
> >       >
>
> >       >
>
> >       > Any idea what went wrong here ?
>
> >       >
>
> >       > Thanks & Regards
>
> >       > Somnath
>
> >       >
>
> >       > -----Original Message-----
>
> >       > From: Josh Durgin [mailto:josh.durgin-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org]
>
> >       > Sent: Wednesday, September 18, 2013 6:10 PM
>
> >       > To: Somnath Roy
>
> >       > Cc: Sage Weil; ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org; Anirban Ray;
>
> >       > ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>
> >       > Subject: Re: [ceph-users] Scaling RBD module
>
> >       >
>
> >       > On 09/17/2013 03:30 PM, Somnath Roy wrote:
>
> >       >> Hi,
>
> >       >> I am running Ceph on a 3 node cluster and each of my server
>
> >       node is running 10 OSDs, one for each disk. I have one admin
>
> >       node and all the nodes are connected with 2 X 10G network. One
>
> >       network is for cluster and other one configured as public
>
> >       network.
>
> >       >>
>
> >       >> Here is the status of my cluster.
>
> >       >>
>
> >       >> ~/fio_test# ceph -s
>
> >       >>
>
> >       >>     cluster b2e0b4db-6342-490e-9c28-0aadf0188023
>
> >       >>      health HEALTH_WARN clock skew detected on mon.
>
> >       <server-name-2>, mon. <server-name-3>
>
> >       >>      monmap e1: 3 mons at
>
> >       {<server-name-1>=xxx.xxx.xxx.xxx:6789/0,
>
> >       <server-name-2>=xxx.xxx.xxx.xxx:6789/0,
>
> >       <server-name-3>=xxx.xxx.xxx.xxx:6789/0}, election epoch 64,
>
> >       quorum 0,1,2 <server-name-1>,<server-name-2>,<server-name-3>
>
> >       >>      osdmap e391: 30 osds: 30 up, 30 in
>
> >       >>       pgmap v5202: 30912 pgs: 30912 active+clean; 8494 MB
>
> >       data, 27912 MB used, 11145 GB / 11172 GB avail
>
> >       >>      mdsmap e1: 0/0/1 up
>
> >       >>
>
> >       >>
>
> >       >> I started with rados bench command to benchmark the read
>
> >       performance of this Cluster on a large pool (~10K PGs) and
> >found
>
> >       that each rados client has a limitation. Each client can only
>
> >       drive up to a certain mark. Each server  node cpu utilization
>
> >       shows it is  around 85-90% idle and the admin node (from where
>
> >       rados client is running) is around ~80-85% idle. I am trying
>
> >       with 4K object size.
>
> >       >
>
> >       > Note that rados bench with 4k objects is different from rbd
>
> >       with 4k-sized I/Os - rados bench sends each request to a new
>
> >       object, while rbd objects are 4M by default.
>
> >       >
>
> >       >> Now, I started running more clients on the admin node and
> >the
>
> >       performance is scaling till it hits the client cpu limit.
> >Server
>
> >       still has the cpu of 30-35% idle. With small object size I
> >must
>
> >       say that the ceph per osd cpu utilization is not promising!
>
> >       >>
>
> >       >> After this, I started testing the rados block interface
> >with
>
> >       kernel rbd module from my admin node.
>
> >       >> I have created 8 images mapped on the pool having around
> >10K
>
> >       PGs and I am not able to scale up the performance by running
> >fio
>
> >       (either by creating a software raid or running on individual
>
> >       /dev/rbd* instances). For example, running multiple fio
>
> >       instances (one in /dev/rbd1 and the other in /dev/rbd2)  the
>
> >       performance I am getting is half of what I am getting if
> >running
>
> >       one instance. Here is my fio job script.
>
> >       >>
>
> >       >> [random-reads]
>
> >       >> ioengine=libaio
>
> >       >> iodepth=32
>
> >       >> filename=/dev/rbd1
>
> >       >> rw=randread
>
> >       >> bs=4k
>
> >       >> direct=1
>
> >       >> size=2G
>
> >       >> numjobs=64
>
> >       >>
>
> >       >> Let me know if I am following the proper procedure or not.
>
> >       >>
>
> >       >> But, If my understanding is correct, kernel rbd module is
>
> >       acting as a client to the cluster and in one admin node I can
>
> >       run only one of such kernel instance.
>
> >       >> If so, I am then limited to the client bottleneck that I
>
> >       stated earlier. The cpu utilization of the server side is
> >around
>
> >       85-90% idle, so, it is clear that client is not driving.
>
> >       >>
>
> >       >> My question is, is there any way to hit the cluster  with
>
> >       more client from a single box while testing the rbd module ?
>
> >       >
>
> >       > You can run multiple librbd instances easily (for example
> >with
>
> >       multiple runs of the rbd bench-write command).
>
> >       >
> > The kernel rbd driver uses the same rados client instance for multiple block devices by default. There's an option (noshare) to use a new rados client instance for a newly mapped device, but it's not exposed by the rbd cli. You need to use the sysfs interface that 'rbd map' uses instead.
> >
> > Once you've used rbd map once on a machine, the kernel will already have the auth key stored, and you can use:
> >
> > echo '1.2.3.4:6789 name=admin,key=client.admin,noshare poolname imagename' > /sys/bus/rbd/add
> >
> > Where 1.2.3.4:6789 is the address of a monitor, and you're connecting as client.admin.
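A sketch of what that could look like for several images; the monitor address, pool name and image names are placeholders, and the admin key is assumed to already be known to the kernel from an earlier 'rbd map':

    # map each image with its own rados client instance (noshare)
    for img in img1 img2 img3; do
        echo "1.2.3.4:6789 name=admin,key=client.admin,noshare rbdpool $img" \
            > /sys/bus/rbd/add
    done
    # the mapped devices then appear as /dev/rbd0, /dev/rbd1, ...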
> >
> > You can use 'rbd unmap' as usual.
> >
> > Josh


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2013-09-24 23:59 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-09-17 22:30 Scaling RBD module Somnath Roy
2013-09-19  1:10 ` [ceph-users] " Josh Durgin
2013-09-19 19:04   ` Somnath Roy
2013-09-19 19:23     ` Josh Durgin
2013-09-20  0:03       ` Somnath Roy
     [not found]         ` <755F6B91B3BE364F9BCA11EA3F9E0C6F0FC4A738-cXZ6iGhjG0il5HHZYNR2WTJ2aSJ780jGSxCzGc5ayCJWk0Htik3J/w@public.gmane.org>
2013-09-24 21:09           ` Travis Rhoden
     [not found]             ` <CACkq2mrfO+eFCYaEdoTQpJ2tOoDyVCkedSMAAztnQVYPBsv7gw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-09-24 21:16               ` Sage Weil
     [not found]                 ` <alpine.DEB.2.00.1309241413280.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
2013-09-24 21:24                   ` Travis Rhoden
2013-09-24 22:17                   ` Somnath Roy
2013-09-24 22:47                     ` [ceph-users] " Sage Weil
     [not found]                       ` <alpine.DEB.2.00.1309241535430.25142-vIokxiIdD2AQNTJnQDzGJqxOck334EZe@public.gmane.org>
2013-09-24 23:59                         ` Somnath Roy
2013-09-24 22:23                 ` Somnath Roy

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.