* Re: rados cppool and Openstack Glance and Cinder
       [not found] <16850452.137.1363384405239.JavaMail.dspano@it1>
@ 2013-03-15 21:55 ` Dave Spano
  2013-03-15 22:11   ` Josh Durgin
  0 siblings, 1 reply; 3+ messages in thread
From: Dave Spano @ 2013-03-15 21:55 UTC (permalink / raw)
  To: Greg Farnum, josh durgin
  Cc: Sébastien Han, ceph-devel, Sage Weil, Wido den Hollander,
	Sylvain Munaut, Samuel Just, Vladislav Gorbunov


During my journey of using rados cppool, which is an awesome feature by the way, I found an interesting behavior related to cephx. I wanted to share it for anyone else who may be using Openstack and decides to rename or copy a pool.
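
For context, the copy-and-swap itself went roughly like this (the pg count here is just an example, and client I/O to the pool should be stopped for the duration of the copy):

ceph osd pool create images-new 512
rados cppool images images-new
ceph osd pool rename images images-old
ceph osd pool rename images-new images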

My client.glance entry is currently set to this (with the exception of the key, of course): 

client.glance 
key: punkrawk 
caps: [mon] allow r 
caps: [osd] allow class-read object_prefix rbd_children, allow rwx 

It was limited to the images pool based on the following example listed at http://ceph.com/docs/master/rbd/rbd-openstack/ : 

ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

client.glance 
key: punkrawk 
caps: [mon] allow r 
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images 
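
To double-check what caps a client ended up with, something like this will show them:

ceph auth get client.glance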

 

What I found was that when I would create my pool as images-new, or anything but images, then rename it to images, I would have a problem. I could not even upload an image to an empty pool. 

I could, however, upload to the pool if I renamed the original to images-old, then created a brand new pool called images. 

My first guess is that there's a reference to the old name which would interfere whenever my client would try to use it with the client.glance keyring. I have not looked in the code yet, so I don't have any other concrete idea. 

As soon as I lifted the pool restriction, as if by the power of Greyskull, I could upload, delete and take snapshots in the renamed pool. 
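
For reference, lifting the restriction is just a caps update along these lines (the same osd caps as above, minus the pool=images clause):

ceph auth caps client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx'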

I believe this would be rather easy for anyone to reproduce with a test install of Openstack. 

Just create a pool named images-new, rename it to images, then try to upload an image. It should fail. Remove the pool restriction, and it will work. 
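
Roughly, with a throwaway glance setup (the pg count, image name and exact glance syntax are just placeholders):

ceph osd pool create images-new 128
ceph osd pool rename images-new images
glance image-create --name test --disk-format raw --container-format bare --file test.img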


Dave Spano 
Optogenics 
Systems Administrator 



----- Original Message ----- 

From: "Dave Spano" <dspano@optogenics.com> 
To: "Greg Farnum" <greg@inktank.com> 
Cc: "Sébastien Han" <han.sebastien@gmail.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com> 
Sent: Wednesday, March 13, 2013 8:05:47 PM 
Subject: Re: OSD memory leaks? 

I renamed the old one from images to images-old, and the new one from images-new to images. 

Dave Spano 
Optogenics 
Systems Administrator 



----- Original Message ----- 
From: Greg Farnum <greg@inktank.com> 
To: Dave Spano <dspano@optogenics.com> 
Cc: Sébastien Han <han.sebastien@gmail.com>, ceph-devel <ceph-devel@vger.kernel.org>, Sage Weil <sage@inktank.com>, Wido den Hollander <wido@42on.com>, Sylvain Munaut <s.munaut@whatever-company.com>, Samuel Just <sam.just@inktank.com>, Vladislav Gorbunov <vadikgo@gmail.com> 
Sent: Wed, 13 Mar 2013 18:52:29 -0400 (EDT) 
Subject: Re: OSD memory leaks? 

It sounds like maybe you didn't rename the new pool to use the old pool's name? Glance is looking for a specific pool to store its data in; I believe it's configurable but you'll need to do one or the other. 
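
(For reference, the pool glance writes to is set in glance-api.conf; under Folsom the rbd store settings look roughly like the following, though exact option names can vary by release.)

default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf
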
-Greg 

On Wednesday, March 13, 2013 at 3:38 PM, Dave Spano wrote: 

> Sebastien, 
> 
> I'm not totally sure yet, but everything is still working. 
> 
> 
> Sage and Greg, 
> I copied my glance image pool per the posting I mentioned previously, and everything works when I use the ceph tools. I can export rbds from the new pool and delete them as well. 
> 
> I noticed that the copied images pool does not work with glance. 
> 
> I get this error when I try to create images in the new pool. If I put the old pool back, I can create images no problem. 
> 
> Is there something I'm missing in glance that I need to work with a pool created in bobtail? I'm using Openstack Folsom. 
> 
> File "/usr/lib/python2.7/dist-packages/glance/api/v1/images.py", line 437, in _upload 
> image_meta['size']) 
> File "/usr/lib/python2.7/dist-packages/glance/store/rbd.py", line 244, in add 
> image_size, order) 
> File "/usr/lib/python2.7/dist-packages/glance/store/rbd.py", line 207, in _create_image 
> features=rbd.RBD_FEATURE_LAYERING) 
> File "/usr/lib/python2.7/dist-packages/rbd.py", line 194, in create 
> raise make_ex(ret, 'error creating image') 
> PermissionError: error creating image 
> 
> 
> Dave Spano 
> 
> ----- Original Message ----- 
> 
> From: "Sébastien Han" <han.sebastien@gmail.com> 
> To: "Dave Spano" <dspano@optogenics.com> 
> Cc: "Greg Farnum" <greg@inktank.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com> 
> Sent: Wednesday, March 13, 2013 3:59:03 PM 
> Subject: Re: OSD memory leaks? 
&gt; 
> Dave, 
> 
> Just to be sure, did the log max recent=10000 _completely_ stop the 
> memory leak or did it slow it down? 
> 
> Thanks! 
> -- 
> Regards, 
> Sébastien Han. 
> 
> 
> On Wed, Mar 13, 2013 at 2:12 PM, Dave Spano <dspano@optogenics.com> wrote: 
> > Lol. I'm totally fine with that. My glance images pool isn't used too often. I'm going to give that a try today and see what happens. 
> > 
> > I'm still crossing my fingers, but since I added log max recent=10000 to ceph.conf, I've been okay despite the improper pg_num, and a lot of scrubbing/deep scrubbing yesterday. 
> > 
> > Dave Spano 
> > 
> > ----- Original Message ----- 
> > 
> > From: "Greg Farnum" <greg@inktank.com> 
> > To: "Dave Spano" <dspano@optogenics.com> 
> > Cc: "ceph-devel" <ceph-devel@vger.kernel.org>, "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com>, "Sébastien Han" <han.sebastien@gmail.com> 
> > Sent: Tuesday, March 12, 2013 5:37:37 PM 
> > Subject: Re: OSD memory leaks? 
> > 
> > Yeah. There's not anything intelligent about that cppool mechanism. :) 
> > -Greg 
> > 
> > On Tuesday, March 12, 2013 at 2:15 PM, Dave Spano wrote: 
&gt; &gt; 
> > > I'd rather shut the cloud down and copy the pool to a new one than take any chances of corruption by using an experimental feature. My guess is that there cannot be any i/o to the pool while copying, otherwise you'll lose the changes that are happening during the copy, correct? 
> > > 
> > > Dave Spano 
> > > Optogenics 
> > > Systems Administrator 
> > > 
> > > ----- Original Message ----- 
> > > 
> > > From: "Greg Farnum" <greg@inktank.com> 
> > > To: "Sébastien Han" <han.sebastien@gmail.com> 
> > > Cc: "Dave Spano" <dspano@optogenics.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com> 
> > > Sent: Tuesday, March 12, 2013 4:20:13 PM 
> > > Subject: Re: OSD memory leaks? 
> > > 
> > > On Tuesday, March 12, 2013 at 1:10 PM, Sébastien Han wrote: 
> > > > Well, to avoid unnecessary data movement, there is also an 
> > > > _experimental_ feature to change the number of PGs in a pool on the fly. 
> > > > 
> > > > ceph osd pool set <poolname> pg_num <numpgs> --allow-experimental-feature 
> > > Don't do that. We've got a set of 3 patches which fix bugs we know about that aren't in bobtail yet, and I'm sure there's more we aren't aware of… 
> > > -Greg 
> > > 
> > > Software Engineer #42 @ http://inktank.com | http://ceph.com 
&gt; &gt; &gt; 
> > > > 
> > > > Cheers! 
> > > > -- 
> > > > Regards, 
> > > > Sébastien Han. 
> > > > 
> > > > 
> > > > On Tue, Mar 12, 2013 at 7:09 PM, Dave Spano <dspano@optogenics.com> wrote: 
> > > > > Disregard my previous question. I found my answer in the post below. Absolutely brilliant! I thought I was screwed! 
> > > > > 
> > > > > http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/8924 
> > > > > 
> > > > > Dave Spano 
> > > > > Optogenics 
> > > > > Systems Administrator 
> > > > > 
> > > > > ----- Original Message ----- 
> > > > > 
> > > > > From: "Dave Spano" <dspano@optogenics.com> 
> > > > > To: "Sébastien Han" <han.sebastien@gmail.com> 
> > > > > Cc: "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Gregory Farnum" <greg@inktank.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com> 
> > > > > Sent: Tuesday, March 12, 2013 1:41:21 PM 
> > > > > Subject: Re: OSD memory leaks? 
> > > > > 
> > > > > If one were stupid enough to have their pg_num and pgp_num set to 8 on two of their pools, how could you fix that? 
> > > > > 
> > > > > Dave Spano 
> > > > > 
> > > > > ----- Original Message ----- 
> > > > > 
> > > > > From: "Sébastien Han" <han.sebastien@gmail.com> 
> > > > > To: "Vladislav Gorbunov" <vadikgo@gmail.com> 
> > > > > Cc: "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Gregory Farnum" <greg@inktank.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Dave Spano" <dspano@optogenics.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Samuel Just" <sam.just@inktank.com> 
> > > > > Sent: Tuesday, March 12, 2013 9:43:44 AM 
> > > > > Subject: Re: OSD memory leaks? 
> > > > > 
> > > > > > Sorry, i mean pg_num and pgp_num on all pools. Shown by the "ceph osd 
> > > > > > dump | grep 'rep size'" 
> > > > > 
> > > > > Well it's still 450 each... 
> > > > > 
> > > > > > The default pg_num value 8 is NOT suitable for big cluster. 
> > > > > 
> > > > > Thanks I know, I'm not new with Ceph. What's your point here? I 
> > > > > already said that pg_num was 450... 
> > > > > -- 
> > > > > Regards, 
> > > > > Sébastien Han. 
> > > > > 
> > > > > 
> > > > > On Tue, Mar 12, 2013 at 2:00 PM, Vladislav Gorbunov <vadikgo@gmail.com> wrote: 
> > > > > > Sorry, i mean pg_num and pgp_num on all pools. Shown by the "ceph osd 
> > > > > > dump | grep 'rep size'" 
> > > > > > The default pg_num value 8 is NOT suitable for big cluster. 
> > > > > > 
> > > > > > 2013/3/13 Sébastien Han <han.sebastien@gmail.com>: 
> > > > > > > Replica count has been set to 2. 
> > > > > > > 
> > > > > > > Why? 
> > > > > > > -- 
> > > > > > > Regards, 
> > > > > > > Sébastien Han. 
> > > > > > > 
> > > > > > > 
> > > > > > > On Tue, Mar 12, 2013 at 12:45 PM, Vladislav Gorbunov <vadikgo@gmail.com> wrote: 
> > > > > > > > > FYI I'm using 450 pgs for my pools. 
> > > > > > > > 
> > > > > > > > Please, can you show the number of object replicas? 
> > > > > > > > 
> > > > > > > > ceph osd dump | grep 'rep size' 
> > > > > > > > 
> > > > > > > > Vlad Gorbunov 
> > > > > > > > 
> > > > > > > > 2013/3/5 Sébastien Han <han.sebastien@gmail.com>: 
> > > > > > > > > FYI I'm using 450 pgs for my pools. 
> > > > > > > > > 
> > > > > > > > > -- 
> > > > > > > > > Regards, 
> > > > > > > > > Sébastien Han. 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On Fri, Mar 1, 2013 at 8:10 PM, Sage Weil <sage@inktank.com> wrote: 
> > > > > > > > > > 
> > > > > > > > > > On Fri, 1 Mar 2013, Wido den Hollander wrote: 
> > > > > > > > > > > On 02/23/2013 01:44 AM, Sage Weil wrote: 
> > > > > > > > > > > > On Fri, 22 Feb 2013, Sébastien Han wrote: 
> > > > > > > > > > > > > Hi all, 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > I finally got a core dump. 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > I did it with a kill -SEGV on the OSD process. 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > https://www.dropbox.com/s/ahv6hm0ipnak5rf/core-ceph-osd-11-0-0-20100-1361539008 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Hope we will get something out of it :-). 
> > > > > > > > > > > > 
> > > > > > > > > > > > AHA! We have a theory. The pg log isn't trimmed during scrub (because the 
> > > > > > > > > > > > old scrub code required that), but the new (deep) scrub can take a very 
> > > > > > > > > > > > long time, which means the pg log will eat ram in the meantime.. 
> > > > > > > > > > > > especially under high iops. 
> > > > > > > > > > > 
> > > > > > > > > > > Does the number of PGs influence the memory leak? So my theory is that when 
> > > > > > > > > > > you have a high number of PGs with a low number of objects per PG you don't 
> > > > > > > > > > > see the memory leak. 
> > > > > > > > > > > 
> > > > > > > > > > > I saw the memory leak on a RBD system where a pool had just 8 PGs, but after 
> > > > > > > > > > > going to 1024 PGs in a new pool it seemed to be resolved. 
> > > > > > > > > > > 
> > > > > > > > > > > I've asked somebody else to try your patch since he's still seeing it on his 
> > > > > > > > > > > systems. Hopefully that gives us some results. 
> > > > > > > > > > 
> > > > > > > > > > The PGs were active+clean when you saw the leak? There is a problem (that 
> > > > > > > > > > we just fixed in master) where pg logs aren't trimmed for degraded PGs. 
> > > > > > > > > > 
> > > > > > > > > > sage 
> > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > Wido 
> > > > > > > > > > > 
> > > > > > > > > > > > Can you try wip-osd-log-trim (which is bobtail + a simple patch) and see 
> > > > > > > > > > > > if that seems to work? Note that that patch shouldn't be run in a mixed 
> > > > > > > > > > > > argonaut+bobtail cluster, since it isn't properly checking if the scrub is 
> > > > > > > > > > > > classic or chunky/deep. 
> > > > > > > > > > > > 
> > > > > > > > > > > > Thanks! 
> > > > > > > > > > > > sage 
> > > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > > -- 
> > > > > > > > > > > > > Regards, 
> > > > > > > > > > > > > Sébastien Han. 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > On Fri, Jan 11, 2013 at 7:13 PM, Gregory Farnum <greg@inktank.com> wrote: 
> > > > > > > > > > > > > > On Fri, Jan 11, 2013 at 6:57 AM, Sébastien Han <han.sebastien@gmail.com> 
> > > > > > > > > > > > > > wrote: 
> > > > > > > > > > > > > > > > Is osd.1 using the heap profiler as well? Keep in mind that active 
> > > > > > > > > > > > > > > > use 
> > > > > > > > > > > > > > > > of the memory profiler will itself cause memory usage to increase - 
> > > > > > > > > > > > > > > > this sounds a bit like that to me since it's staying stable at a 
> > > > > > > > > > > > > > > > large 
> > > > > > > > > > > > > > > > but finite portion of total memory. 
> > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > Well, the memory consumption was already high before the profiler was 
> > > > > > > > > > > > > > > started. So yes, with the memory profiler enabled an OSD might consume 
> > > > > > > > > > > > > > > more memory but this doesn't cause the memory leaks. 
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > My concern is that maybe you saw a leak but when you restarted with 
> > > > > > > > > > > > > > the memory profiling you lost whatever conditions caused it. 
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > Any ideas? Nothing to say about my scrubbing theory? 
> > > > > > > > > > > > > > I like it, but Sam indicates that without some heap dumps which 
> > > > > > > > > > > > > > capture the actual leak then scrub is too large to effectively code 
> > > > > > > > > > > > > > review for leaks. :( 
> > > > > > > > > > > > > > -Greg 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > -- 
> > > > > > > > > > > > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
> > > > > > > > > > > > > the body of a message to majordomo@vger.kernel.org 
> > > > > > > > > > > > > More majordomo info at http://vger.kernel.org/majordomo-info.html 
> > > > > > > > > > > > 
> > > > > > > > > > > > -- 
> > > > > > > > > > > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
> > > > > > > > > > > > the body of a message to majordomo@vger.kernel.org 
> > > > > > > > > > > > More majordomo info at http://vger.kernel.org/majordomo-info.html 
> > > > > > > > > > > 
> > > > > > > > > > > -- 
> > > > > > > > > > > Wido den Hollander 
> > > > > > > > > > > 42on B.V. 
> > > > > > > > > > > 
> > > > > > > > > > > Phone: +31 (0)20 700 9902 
> > > > > > > > > > > Skype: contact42on 
> > > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > -- 
> > > > > > > > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
> > > > > > > > > the body of a message to majordomo@vger.kernel.org 
> > > > > > > > > More majordomo info at http://vger.kernel.org/majordomo-info.html 
> > > > > > > > 
> > > > > > > 
> > > > > > 
> > > > > 
> > > > > -- 
> > > > > To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
> > > > > the body of a message to majordomo@vger.kernel.org 
> > > > > More majordomo info at http://vger.kernel.org/majordomo-info.html 
> > > > 
> > > 
> > 
> 




-- 
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in 
the body of a message to majordomo@vger.kernel.org 
More majordomo info at http://vger.kernel.org/majordomo-info.html


* Re: rados cppool and Openstack Glance and Cinder
  2013-03-15 21:55 ` rados cppool and Openstack Glance and Cinder Dave Spano
@ 2013-03-15 22:11   ` Josh Durgin
  2013-03-15 22:29     ` Dave Spano
  0 siblings, 1 reply; 3+ messages in thread
From: Josh Durgin @ 2013-03-15 22:11 UTC (permalink / raw)
  To: Dave Spano
  Cc: Greg Farnum, Sébastien Han, ceph-devel, Sage Weil,
	Wido den Hollander, Sylvain Munaut, Samuel Just,
	Vladislav Gorbunov

On 03/15/2013 02:55 PM, Dave Spano wrote:
>
> During my journey of using rados cppool, which is an awesome feature by the way, I found an interesting behavior related to cephx. I wanted to share it for anyone else who may be using Openstack and decides to rename or copy a pool.
>
> My client.glance entry is currently set to this (with the exception of the key, of course):
>
> client.glance
> key: punkrawk
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>
> It was limited to the images pool based on the following example listed at http://ceph.com/docs/master/rbd/rbd-openstack/ :
>
> ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
>
> client.glance
> key: punkrawk
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
>
>
>
> What I found was that when I would create my pool as images-new, or anything but images, then rename it to images, I would have a problem. I could not even upload an image to an empty pool.
>
> I could, however, upload to the pool if I renamed the original to images-old, then created a brand new pool called images.
>
> My first guess is that there's a reference to the old name which would interfere whenever my client would try to use it with the client.glance keyring. I have not looked in the code yet, so I don't have any other concrete idea.

Yeah, someone ran into this before, but apparently I hadn't finished
creating the bug, so now there's http://tracker.ceph.com/issues/4471.

Each pg includes its pool name in memory, and that isn't updated when
the pool is renamed. Restarting the osd would refresh it, and creating
a new pool creates entirely new pgs.
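
(So if you want to keep the pool-name restriction, a workaround after a rename is to bounce the osds - with the standard sysvinit script that's roughly "service ceph restart osd" on each osd host - since a freshly started osd picks up the current pool name.)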

> As soon as I lifted the pool restriction, as if by the power of Greyskull, I could upload, delete and take snapshots in the renamed pool.
>
> I believe this would be rather easy for anyone to reproduce with a test install of Openstack.

No openstack needed, just any ceph client with a restriction based on
pool name.
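
For example, something along these lines should hit it (pool, client and object names are placeholders):

ceph osd pool create foo-new 8
ceph auth get-or-create client.test mon 'allow r' osd 'allow rwx pool=foo'
ceph osd pool rename foo-new foo
rados --id test -p foo put testobj /etc/hosts   # denied until the osds are restarted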

> Just create a pool named images-new, rename it to images, then try to upload an image. It should fail. Remove the pool restriction, and it will work.

Thanks for the detailed report!
Josh


* Re: rados cppool and Openstack Glance and Cinder
  2013-03-15 22:11   ` Josh Durgin
@ 2013-03-15 22:29     ` Dave Spano
  0 siblings, 0 replies; 3+ messages in thread
From: Dave Spano @ 2013-03-15 22:29 UTC (permalink / raw)
  To: Josh Durgin
  Cc: Greg Farnum, Sébastien Han, ceph-devel, Sage Weil,
	Wido den Hollander, Sylvain Munaut, Samuel Just,
	Vladislav Gorbunov

Thank you Josh. Have a great weekend. 

Dave Spano 



----- Original Message ----- 

From: "Josh Durgin" <josh.durgin@inktank.com> 
To: "Dave Spano" <dspano@optogenics.com> 
Cc: "Greg Farnum" <greg@inktank.com>, "Sébastien Han" <han.sebastien@gmail.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com> 
Sent: Friday, March 15, 2013 6:11:17 PM 
Subject: Re: rados cppool and Openstack Glance and Cinder 

On 03/15/2013 02:55 PM, Dave Spano wrote: 
> 
> During my journey of using rados cppool, which is an awesome feature by the way, I found an interesting behavior related to cephx. I wanted to share it for anyone else who may be using Openstack and decides to rename or copy a pool. 
> 
> My client.glance entry is currently set to this (with the exception of the key, of course): 
> 
> client.glance 
> key: punkrawk 
> caps: [mon] allow r 
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx 
> 
> It was limited to the images pool based on the following example listed at http://ceph.com/docs/master/rbd/rbd-openstack/ : 
> 
> ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images' 
> 
> client.glance 
> key: punkrawk 
> caps: [mon] allow r 
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images 
> 
> 
> 
> What I found was that when I would create my pool as images-new, or anything but images, then rename it to images, I would have a problem. I could not even upload an image to an empty pool. 
> 
> I could, however, upload to the pool if I renamed the original to images-old, then created a brand new pool called images. 
> 
> My first guess is that there's a reference to the old name which would interfere whenever my client would try to use it with the client.glance keyring. I have not looked in the code yet, so I don't have any other concrete idea. 

Yeah, someone ran into this before, but apparently I hadn't finished 
creating the bug, so now there's http://tracker.ceph.com/issues/4471. 

Each pg includes its pool name in memory, and that isn't updated when 
the pool is renamed. Restarting the osd would refresh it, and creating 
a new pool creates entirely new pgs. 

> As soon as I lifted the pool restriction, as if by the power of Greyskull, I could upload, delete and take snapshots in the renamed pool. 
> 
> I believe this would be rather easy for anyone to reproduce with a test install of Openstack. 

No openstack needed, just any ceph client with a restriction based on 
pool name. 

> Just create a pool named images-new, rename it to images, then try to upload an image. It should fail. Remove the pool restriction, and it will work. 

Thanks for the detailed report! 
Josh
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


