* Re: rados cppool and Openstack Glance and Cinder
[not found] <16850452.137.1363384405239.JavaMail.dspano@it1>
@ 2013-03-15 21:55 ` Dave Spano
2013-03-15 22:11 ` Josh Durgin
0 siblings, 1 reply; 3+ messages in thread
From: Dave Spano @ 2013-03-15 21:55 UTC (permalink / raw)
To: Greg Farnum, Josh Durgin
Cc: Sébastien Han, ceph-devel, Sage Weil, Wido den Hollander,
Sylvain Munaut, Samuel Just, Vladislav Gorbunov
During my journey of using rados cppool, which is an awesome feature by the way, I found an interesting behavior related to cephx. I wanted to share it for anyone else using Openstack who decides to rename or copy a pool.
My client.glance entry is currently set to this (with the exception of the key, of course):
client.glance
key: punkrawk
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx
It was limited to the images pool based on the following example listed at http://ceph.com/docs/master/rbd/rbd-openstack/ :
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
client.glance
key: punkrawk
caps: [mon] allow r
caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
What I found was that if I created the pool as images-new (or anything but images) and then renamed it to images, I would have a problem: I could not even upload an image to the empty pool.
I could, however, upload to the pool if I renamed the original to images-old, then created a brand new pool called images.
My first guess is that there's a reference to the old name which would interfere whenever my client would try to use it with the client.glance keyring. I have not looked in the code yet, so I don't have any other concrete idea.
As soon as I lifted the pool restriction, as if by the power of Grayskull, I could upload, delete and take snapshots in the renamed pool.
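For reference, lifting and later restoring the restriction can be done with something along these lines (a sketch; note that ceph auth caps overwrites the existing caps, so list every cap you want to keep):

# lift the pool restriction
ceph auth caps client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx'
# restore it later
ceph auth caps client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'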
I believe this would be rather easy for anyone to reproduce with a test install of Openstack.
Just create pool named images-new. Rename it to images, then try to upload an image. It should fail. Remove the pool restriction, and it will work.
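Roughly, the reproduction looks like this (a sketch; the pg count and image name are placeholders, and the exact rbd syntax may differ by version):

ceph osd pool create images-new 128
ceph osd pool rename images-new images
# fails with a permission error while the client.glance caps say pool=images
rbd --id glance import test.img images/test-image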
Dave Spano
Optogenics
Systems Administrator
----- Original Message -----
From: "Dave Spano" <dspano@optogenics.com>
To: "Greg Farnum" <greg@inktank.com>
Cc: "Sébastien Han" <han.sebastien@gmail.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com>
Sent: Wednesday, March 13, 2013 8:05:47 PM
Subject: Re: OSD memory leaks?
I renamed the old one from images to images-old, and the new one from images-new to images.
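The whole swap was roughly this (a sketch of the procedure; the pg count is a placeholder):

ceph osd pool create images-new 128
rados cppool images images-new
ceph osd pool rename images images-old
ceph osd pool rename images-new images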
Dave Spano
Optogenics
Systems Administrator
----- Original Message -----
From: Greg Farnum <greg@inktank.com>
To: Dave Spano <dspano@optogenics.com>
Cc: Sébastien Han <han.sebastien@gmail.com>, ceph-devel <ceph-devel@vger.kernel.org>, Sage Weil <sage@inktank.com>, Wido den Hollander <wido@42on.com>, Sylvain Munaut <s.munaut@whatever-company.com>, Samuel Just <sam.just@inktank.com>, Vladislav Gorbunov <vadikgo@gmail.com>
Sent: Wed, 13 Mar 2013 18:52:29 -0400 (EDT)
Subject: Re: OSD memory leaks?
It sounds like maybe you didn't rename the new pool to use the old pool's name? Glance is looking for a specific pool to store its data in; I believe it's configurable but you'll need to do one or the other.
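For reference, the RBD store settings in glance-api.conf on Folsom look roughly like this (a sketch; option names from memory, so double-check your release):

default_store = rbd
rbd_store_user = glance
rbd_store_pool = images
rbd_store_ceph_conf = /etc/ceph/ceph.conf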
-Greg
On Wednesday, March 13, 2013 at 3:38 PM, Dave Spano wrote:
> Sebastien,
>
> I'm not totally sure yet, but everything is still working.
>
>
> Sage and Greg,
> I copied my glance image pool per the posting I mentioned previously, and everything works when I use the ceph tools. I can export rbds from the new pool and delete them as well.
>
> I noticed that the copied images pool does not work with glance.
>
> I get this error when I try to create images in the new pool. If I put the old pool back, I can create images no problem.
>
> Is there something I'm missing in glance that I need in order to work with a pool created in bobtail? I'm using Openstack Folsom.
>
> File "/usr/lib/python2.7/dist-packages/glance/api/v1/images.py", line 437, in _upload
> image_meta['size'])
> File "/usr/lib/python2.7/dist-packages/glance/store/rbd.py", line 244, in add
> image_size, order)
> File "/usr/lib/python2.7/dist-packages/glance/store/rbd.py", line 207, in _create_image
> features=rbd.RBD_FEATURE_LAYERING)
> File "/usr/lib/python2.7/dist-packages/rbd.py", line 194, in create
> raise make_ex(ret, 'error creating image')
> PermissionError: error creating image
>
>
> Dave Spano
>
>
>
>
> ----- Original Message -----
>
> From: "Sébastien Han" <han.sebastien@gmail.com (mailto:han.sebastien@gmail.com)>
> To: "Dave Spano" <dspano@optogenics.com (mailto:dspano@optogenics.com)>
> Cc: "Greg Farnum" <greg@inktank.com (mailto:greg@inktank.com)>, "ceph-devel" <ceph-devel@vger.kernel.org (mailto:ceph-devel@vger.kernel.org)>, "Sage Weil" <sage@inktank.com (mailto:sage@inktank.com)>, "Wido den Hollander" <wido@42on.com (mailto:wido@42on.com)>, "Sylvain Munaut" <s.munaut@whatever-company.com (mailto:s.munaut@whatever-company.com)>, "Samuel Just" <sam.just@inktank.com (mailto:sam.just@inktank.com)>, "Vladislav Gorbunov" <vadikgo@gmail.com (mailto:vadikgo@gmail.com)>
> Sent: Wednesday, March 13, 2013 3:59:03 PM
> Subject: Re: OSD memory leaks?
>
> Dave,
>
> Just to be sure, did the log max recent=10000 _completely_ stop the
> memory leak or did it just slow it down?
>
> Thanks!
> --
> Regards,
> Sébastien Han.
>
>
> On Wed, Mar 13, 2013 at 2:12 PM, Dave Spano <dspano@optogenics.com> wrote:
> > Lol. I'm totally fine with that. My glance images pool isn't used too often. I'm going to give that a try today and see what happens.
> >
> > I'm still crossing my fingers, but since I added log max recent=10000 to ceph.conf, I've been okay despite the improper pg_num, and a lot of scrubbing/deep scrubbing yesterday.
> >
> > Dave Spano
> >
> >
> >
> >
> > ----- Original Message -----
> >
> > From: "Greg Farnum" <greg@inktank.com (mailto:greg@inktank.com)>
> > To: "Dave Spano" <dspano@optogenics.com (mailto:dspano@optogenics.com)>
> > Cc: "ceph-devel" <ceph-devel@vger.kernel.org (mailto:ceph-devel@vger.kernel.org)>, "Sage Weil" <sage@inktank.com (mailto:sage@inktank.com)>, "Wido den Hollander" <wido@42on.com (mailto:wido@42on.com)>, "Sylvain Munaut" <s.munaut@whatever-company.com (mailto:s.munaut@whatever-company.com)>, "Samuel Just" <sam.just@inktank.com (mailto:sam.just@inktank.com)>, "Vladislav Gorbunov" <vadikgo@gmail.com (mailto:vadikgo@gmail.com)>, "Sébastien Han" <han.sebastien@gmail.com (mailto:han.sebastien@gmail.com)>
> > Sent: Tuesday, March 12, 2013 5:37:37 PM
> > Subject: Re: OSD memory leaks?
> >
> > Yeah. There's not anything intelligent about that cppool mechanism. :)
> > -Greg
> >
> > On Tuesday, March 12, 2013 at 2:15 PM, Dave Spano wrote:
> >
> > > I'd rather shut the cloud down and copy the pool to a new one than take any chances of corruption by using an experimental feature. My guess is that there cannot be any i/o to the pool while copying, otherwise you'll lose the changes that are happening during the copy, correct?
> > >
> > > Dave Spano
> > > Optogenics
> > > Systems Administrator
> > >
> > >
> > >
> > > ----- Original Message -----
> > >
> > > From: "Greg Farnum" <greg@inktank.com (mailto:greg@inktank.com)>
> > > To: "Sébastien Han" <han.sebastien@gmail.com (mailto:han.sebastien@gmail.com)>
> > > Cc: "Dave Spano" <dspano@optogenics.com (mailto:dspano@optogenics.com)>, "ceph-devel" <ceph-devel@vger.kernel.org (mailto:ceph-devel@vger.kernel.org)>, "Sage Weil" <sage@inktank.com (mailto:sage@inktank.com)>, "Wido den Hollander" <wido@42on.com (mailto:wido@42on.com)>, "Sylvain Munaut" <s.munaut@whatever-company.com (mailto:s.munaut@whatever-company.com)>, "Samuel Just" <sam.just@inktank.com (mailto:sam.just@inktank.com)>, "Vladislav Gorbunov" <vadikgo@gmail.com (mailto:vadikgo@gmail.com)>
> > > Sent: Tuesday, March 12, 2013 4:20:13 PM
> > > Subject: Re: OSD memory leaks?
> > >
> > > On Tuesday, March 12, 2013 at 1:10 PM, Sébastien Han wrote:
> > > > Well, to avoid unnecessary data movement, there is also an
> > > > _experimental_ feature to change the number of PGs in a pool on the fly.
> > > >
> > > > ceph osd pool set <poolname> pg_num <numpgs> --allow-experimental-feature
> > > Don't do that. We've got a set of 3 patches which fix bugs we know about that aren't in bobtail yet, and I'm sure there's more we aren't aware of…
> > > -Greg
> > >
> > > Software Engineer #42 @ http://inktank.com | http://ceph.com
> > >
> > > >
> > > > Cheers!
> > > > --
> > > > Regards,
> > > > Sébastien Han.
> > > >
> > > >
> > > > > On Tue, Mar 12, 2013 at 7:09 PM, Dave Spano <dspano@optogenics.com> wrote:
> > > > > Disregard my previous question. I found my answer in the post below. Absolutely brilliant! I thought I was screwed!
> > > > >
> > > > > http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/8924
> > > > >
> > > > > Dave Spano
> > > > > Optogenics
> > > > > Systems Administrator
> > > > >
> > > > >
> > > > >
> > > > > ----- Original Message -----
> > > > >
> > > > > From: "Dave Spano" <dspano@optogenics.com (mailto:dspano@optogenics.com)>
> > > > > To: "Sébastien Han" <han.sebastien@gmail.com (mailto:han.sebastien@gmail.com)>
> > > > > Cc: "Sage Weil" <sage@inktank.com (mailto:sage@inktank.com)>, "Wido den Hollander" <wido@42on.com (mailto:wido@42on.com)>, "Gregory Farnum" <greg@inktank.com (mailto:greg@inktank.com)>, "Sylvain Munaut" <s.munaut@whatever-company.com (mailto:s.munaut@whatever-company.com)>, "ceph-devel" <ceph-devel@vger.kernel.org (mailto:ceph-devel@vger.kernel.org)>, "Samuel Just" <sam.just@inktank.com (mailto:sam.just@inktank.com)>, "Vladislav Gorbunov" <vadikgo@gmail.com (mailto:vadikgo@gmail.com)>
> > > > > Sent: Tuesday, March 12, 2013 1:41:21 PM
> > > > > Subject: Re: OSD memory leaks?
> > > > >
> > > > >
> > > > > If one were stupid enough to have their pg_num and pgp_num set to 8 on two of their pools, how could you fix that?
> > > > >
> > > > >
> > > > > Dave Spano
> > > > >
> > > > >
> > > > >
> > > > > ----- Original Message -----
> > > > >
> > > > > From: "Sébastien Han" <han.sebastien@gmail.com (mailto:han.sebastien@gmail.com)>
> > > > > To: "Vladislav Gorbunov" <vadikgo@gmail.com (mailto:vadikgo@gmail.com)>
> > > > > Cc: "Sage Weil" <sage@inktank.com (mailto:sage@inktank.com)>, "Wido den Hollander" <wido@42on.com (mailto:wido@42on.com)>, "Gregory Farnum" <greg@inktank.com (mailto:greg@inktank.com)>, "Sylvain Munaut" <s.munaut@whatever-company.com (mailto:s.munaut@whatever-company.com)>, "Dave Spano" <dspano@optogenics.com (mailto:dspano@optogenics.com)>, "ceph-devel" <ceph-devel@vger.kernel.org (mailto:ceph-devel@vger.kernel.org)>, "Samuel Just" <sam.just@inktank.com (mailto:sam.just@inktank.com)>
> > > > > Sent: Tuesday, March 12, 2013 9:43:44 AM
> > > > > Subject: Re: OSD memory leaks?
> > > > >
> > > > > > Sorry, I mean pg_num and pgp_num on all pools. Shown by the "ceph osd
> > > > > > dump | grep 'rep size'"
> > > > >
> > > > > Well it's still 450 each...
> > > > >
> > > > > > The default pg_num value 8 is NOT suitable for a big cluster.
> > > > >
> > > > > Thanks, I know, I'm not new to Ceph. What's your point here? I
> > > > > already said that pg_num was 450...
> > > > > --
> > > > > Regards,
> > > > > Sébastien Han.
> > > > >
> > > > >
> > > > > On Tue, Mar 12, 2013 at 2:00 PM, Vladislav Gorbunov <vadikgo@gmail.com> wrote:
> > > > > > Sorry, I mean pg_num and pgp_num on all pools. Shown by the "ceph osd
> > > > > > dump | grep 'rep size'"
> > > > > > The default pg_num value 8 is NOT suitable for a big cluster.
> > > > > >
> > > > > > 2013/3/13 Sébastien Han <han.sebastien@gmail.com>:
> > > > > > > Replica count has been set to 2.
> > > > > > >
> > > > > > > Why?
> > > > > > > --
> > > > > > > Regards,
> > > > > > > Sébastien Han.
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Mar 12, 2013 at 12:45 PM, Vladislav Gorbunov <vadikgo@gmail.com> wrote:
> > > > > > > > > FYI I'm using 450 pgs for my pools.
> > > > > > > >
> > > > > > > > Please, can you show the number of object replicas?
> > > > > > > >
> > > > > > > > ceph osd dump | grep 'rep size'
> > > > > > > >
> > > > > > > > Vlad Gorbunov
> > > > > > > >
> > > > > > > > 2013/3/5 Sébastien Han <han.sebastien@gmail.com>:
> > > > > > > > > FYI I'm using 450 pgs for my pools.
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > Regards,
> > > > > > > > > Sébastien Han.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Fri, Mar 1, 2013 at 8:10 PM, Sage Weil <sage@inktank.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Fri, 1 Mar 2013, Wido den Hollander wrote:
> > > > > > > > > > > On 02/23/2013 01:44 AM, Sage Weil wrote:
> > > > > > > > > > > > On Fri, 22 Feb 2013, Sébastien Han wrote:
> > > > > > > > > > > > > Hi all,
> > > > > > > > > > > > >
> > > > > > > > > > > > > I finally got a core dump.
> > > > > > > > > > > > >
> > > > > > > > > > > > > I did it with a kill -SEGV on the OSD process.
> > > > > > > > > > > > >
> > > > > > > > > > > > > https://www.dropbox.com/s/ahv6hm0ipnak5rf/core-ceph-osd-11-0-0-20100-1361539008
> > > > > > > > > > > > >
> > > > > > > > > > > > > Hope we will get something out of it :-).
> > > > > > > > > > > >
> > > > > > > > > > > > AHA! We have a theory. The pg log isn't trimmed during scrub (because the
> > > > > > > > > > > > old scrub code required that), but the new (deep) scrub can take a very
> > > > > > > > > > > > long time, which means the pg log will eat RAM in the meantime...
> > > > > > > > > > > > especially under high iops.
> > > > > > > > > > >
> > > > > > > > > > > Does the number of PGs influence the memory leak? So my theory is that when
> > > > > > > > > > > you have a high number of PGs with a low number of objects per PG you don't
> > > > > > > > > > > see the memory leak.
> > > > > > > > > > >
> > > > > > > > > > > I saw the memory leak on a RBD system where a pool had just 8 PGs, but after
> > > > > > > > > > > going to 1024 PGs in a new pool it seemed to be resolved.
> > > > > > > > > > >
> > > > > > > > > > > I've asked somebody else to try your patch since he's still seeing it on his
> > > > > > > > > > > systems. Hopefully that gives us some results.
> > > > > > > > > >
> > > > > > > > > > The PGs were active+clean when you saw the leak? There is a problem (that
> > > > > > > > > > we just fixed in master) where pg logs aren't trimmed for degraded PGs.
> > > > > > > > > >
> > > > > > > > > > sage
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Wido
> > > > > > > > > > >
> > > > > > > > > > > > Can you try wip-osd-log-trim (which is bobtail + a simple patch) and see
> > > > > > > > > > > > if that seems to work? Note that that patch shouldn't be run in a mixed
> > > > > > > > > > > > argonaut+bobtail cluster, since it isn't properly checking if the scrub is
> > > > > > > > > > > > classic or chunky/deep.
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks!
> > > > > > > > > > > > sage
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > > --
> > > > > > > > > > > > > Regards,
> > > > > > > > > > > > > Sébastien Han.
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > > On Fri, Jan 11, 2013 at 7:13 PM, Gregory Farnum <greg@inktank.com> wrote:
> > > > > > > > > > > > > > On Fri, Jan 11, 2013 at 6:57 AM, Sébastien Han <han.sebastien@gmail.com>
> > > > > > > > > > > > > > wrote:
> > > > > > > > > > > > > > > > Is osd.1 using the heap profiler as well? Keep in mind that active use
> > > > > > > > > > > > > > > > of the memory profiler will itself cause memory usage to increase --
> > > > > > > > > > > > > > > > this sounds a bit like that to me since it's staying stable at a large
> > > > > > > > > > > > > > > > but finite portion of total memory.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Well, the memory consumption was already high before the profiler was
> > > > > > > > > > > > > > > started. So yes, with the memory profiler enabled an OSD might consume
> > > > > > > > > > > > > > > more memory, but this doesn't cause the memory leaks.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > My concern is that maybe you saw a leak but when you restarted with
> > > > > > > > > > > > > > the memory profiling you lost whatever conditions caused it.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Any ideas? Nothing to say about my scrubbing theory?
> > > > > > > > > > > > > > I like it, but Sam indicates that without some heap dumps which
> > > > > > > > > > > > > > capture the actual leak, scrub is too large to effectively code
> > > > > > > > > > > > > > review for leaks. :(
> > > > > > > > > > > > > > -Greg
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > Wido den Hollander
> > > > > > > > > > > 42on B.V.
> > > > > > > > > > >
> > > > > > > > > > > Phone: +31 (0)20 700 9902
> > > > > > > > > > > Skype: contact42on
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: rados cppool and Openstack Glance and Cinder
2013-03-15 21:55 ` rados cppool and Openstack Glance and Cinder Dave Spano
@ 2013-03-15 22:11 ` Josh Durgin
2013-03-15 22:29 ` Dave Spano
0 siblings, 1 reply; 3+ messages in thread
From: Josh Durgin @ 2013-03-15 22:11 UTC (permalink / raw)
To: Dave Spano
Cc: Greg Farnum, Sébastien Han, ceph-devel, Sage Weil,
Wido den Hollander, Sylvain Munaut, Samuel Just,
Vladislav Gorbunov
On 03/15/2013 02:55 PM, Dave Spano wrote:
>
> During my journey of using rados cppool, which is an awesome feature by the way, I found an interesting behavior related to cephx. I wanted to share it for anyone else using Openstack who decides to rename or copy a pool.
>
> My client.glance entry is currently set to this (with the exception of the key, of course):
>
> client.glance
> key: punkrawk
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>
> It was limited to the images pool based on the following example listed at http://ceph.com/docs/master/rbd/rbd-openstack/ :
>
> ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
>
> client.glance
> key: punkrawk
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
>
>
>
> What I found was that if I created the pool as images-new (or anything but images) and then renamed it to images, I would have a problem: I could not even upload an image to the empty pool.
>
> I could, however, upload to the pool if I renamed the original to images-old, then created a brand new pool called images.
>
> My first guess is that there's a reference to the old name which would interfere whenever my client would try to use it with the client.glance keyring. I have not looked in the code yet, so I don't have any other concrete idea.
Yeah, someone ran into this before, but apparently I hadn't finished
creating the bug, so now there's http://tracker.ceph.com/issues/4471.
Each pg includes its pool name in memory, and that isn't updated when
the pool is renamed. Restarting the osd would refresh it, and creating
a new pool creates entirely new pgs.
> As soon as I lifted the pool restriction, as if by the power of Grayskull, I could upload, delete and take snapshots in the renamed pool.
>
> I believe this would be rather easy for anyone to reproduce with a test install of Openstack.
No openstack needed, just any ceph client with a restriction based on
pool name.
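For example, something along these lines should reproduce it (a sketch, untested; the pg count and file name are placeholders):

ceph auth get-or-create client.test mon 'allow r' osd 'allow rwx pool=images'
ceph osd pool create images-new 64
ceph osd pool rename images-new images
# expect a permission error until the osds are restarted or the pool is recreated
rados --id test -p images put testobj /etc/hosts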
> Just create pool named images-new. Rename it to images, then try to upload an image. It should fail. Remove the pool restriction, and it will work.
Thanks for the detailed report!
Josh
^ permalink raw reply [flat|nested] 3+ messages in thread
* Re: rados cppool and Openstack Glance and Cinder
2013-03-15 22:11 ` Josh Durgin
@ 2013-03-15 22:29 ` Dave Spano
0 siblings, 0 replies; 3+ messages in thread
From: Dave Spano @ 2013-03-15 22:29 UTC (permalink / raw)
To: Josh Durgin
Cc: Greg Farnum, Sébastien Han, ceph-devel, Sage Weil,
Wido den Hollander, Sylvain Munaut, Samuel Just,
Vladislav Gorbunov
Thank you Josh. Have a great weekend.
Dave Spano
----- Original Message -----
From: "Josh Durgin" <josh.durgin@inktank.com>
To: "Dave Spano" <dspano@optogenics.com>
Cc: "Greg Farnum" <greg@inktank.com>, "Sébastien Han" <han.sebastien@gmail.com>, "ceph-devel" <ceph-devel@vger.kernel.org>, "Sage Weil" <sage@inktank.com>, "Wido den Hollander" <wido@42on.com>, "Sylvain Munaut" <s.munaut@whatever-company.com>, "Samuel Just" <sam.just@inktank.com>, "Vladislav Gorbunov" <vadikgo@gmail.com>
Sent: Friday, March 15, 2013 6:11:17 PM
Subject: Re: rados cppool and Openstack Glance and Cinder
On 03/15/2013 02:55 PM, Dave Spano wrote:
>
> During my journey of using rados cppool, which is an awesome feature by the way, I found an interesting behavior related to cephx. I wanted to share it for anyone else using Openstack who decides to rename or copy a pool.
>
> My client.glance entry is currently set to this (with the exception of the key, of course):
>
> client.glance
> key: punkrawk
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx
>
> It was limited to the images pool based on the following example listed at http://ceph.com/docs/master/rbd/rbd-openstack/ :
>
> ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
>
> client.glance
> key: punkrawk
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
>
>
>
> What I found was that if I created the pool as images-new (or anything but images) and then renamed it to images, I would have a problem: I could not even upload an image to the empty pool.
>
> I could, however, upload to the pool if I renamed the original to images-old, then created a brand new pool called images.
>
> My first guess is that there's a reference to the old name which would interfere whenever my client would try to use it with the client.glance keyring. I have not looked in the code yet, so I don't have any other concrete idea.
Yeah, someone ran into this before, but apparently I hadn't finished
creating the bug, so now there's http://tracker.ceph.com/issues/4471.
Each pg includes its pool name in memory, and that isn't updated when
the pool is renamed. Restarting the osd would refresh it, and creating
a new pool creates entirely new pgs.
> As soon as I lifted the pool restriction, as if by the power of Grayskull, I could upload, delete and take snapshots in the renamed pool.
>
> I believe this would be rather easy for anyone to reproduce with a test install of Openstack.
No openstack needed, just any ceph client with a restriction based on
pool name.
> Just create pool named images-new. Rename it to images, then try to upload an image. It should fail. Remove the pool restriction, and it will work.
Thanks for the detailed report!
Josh
^ permalink raw reply [flat|nested] 3+ messages in thread
end of thread, other threads:[~2013-03-15 22:29 UTC | newest]
Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <16850452.137.1363384405239.JavaMail.dspano@it1>
2013-03-15 21:55 ` rados cppool and Openstack Glance and Cinder Dave Spano
2013-03-15 22:11 ` Josh Durgin
2013-03-15 22:29 ` Dave Spano