From: Erik McCormick
Subject: Re: removing cluster name support
Date: Tue, 7 Nov 2017 16:08:36 -0500
To: Vasu Kulkarni
Cc: ceph-devel, Ceph-User
List-Id: ceph-devel.vger.kernel.org

On Nov 8, 2017 7:33 AM, "Vasu Kulkarni" wrote:

On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil wrote:
>> >> At CDM yesterday we talked about removing the ability to name your ceph
>> >> clusters.  There are a number of hurdles that make it difficult to fully
>> >> get rid of this functionality, not the least of which is that some
>> >> (many?) deployed clusters make use of it.  We decided that the most we can
>> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> >> so that no new clusters or deployed nodes use it.
>> >>
>> >> The first PR in this effort:
>> >>
>> >>         https://github.com/ceph/ceph-deploy/pull/441
>> >
>> > okay, i am closing https://github.com/ceph/ceph/pull/18638 and
>> > http://tracker.ceph.com/issues/3253
>>
>> This brings us to a limbo where we aren't supporting it in some places
>> but we do in others.
>>
>> It was disabled for ceph-deploy, but ceph-ansible wants to support it
>> still (see: https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
>
> I still haven't seen a case where custom cluster names for *daemons* are
> needed.  Only for client-side $cluster.conf info for connecting.
>
>> Sebastien argues that these reasons are strong enough to keep that support in:
>>
>> - Ceph cluster on demand with containers
>
> With kubernetes, the cluster will exist in a cluster namespace, and
> daemons live in containers, so inside the container the cluster will be
> 'ceph'.
>
>> - Distributed compute nodes
>
> ?
>
>> - rbd-mirror integration as part of OSPd
>
> This is the client-side $cluster.conf for connecting to the remote
> cluster.
>
>> - Disaster scenario with OpenStack Cinder in OSPd
>
> Ditto.
>
>> The problem is that, as you can see with the ceph-disk PR just closed,
>> there are still other tools that have to implement the juggling of
>> custom cluster names
>> all over the place and they will hit some corner case where the
>> cluster name was not added and things will fail.
>>
>> Just recently ceph-volume hit one of these places:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>>
>> Are we going to support custom cluster names? In what
>> context/scenarios are we going to allow it?
>
> It seems like we could drop this support in ceph-volume, unless someone
> can present a compelling reason to keep it?
>
> ...
>
> I'd almost want to go a step further and change
>
> /var/lib/ceph/$type/$cluster-$id/
>
> to
>
>  /var/lib/ceph/$type/$id

+1 for custom name support to be disabled from master/stable ansible releases,
and I think rbd-mirror and openstack are mostly configuration issues
that could use different conf files to talk to different clusters.

Agreed on the Openstack part. I actually changed nothing on that side of
things. The clients still run with a custom config name with no issues.

-Erik

> In kubernetes, we're planning on bind mounting the host's
> /var/lib/ceph/$namespace/$type/$id to the container's
> /var/lib/ceph/$type/ceph-$id.  It might be a good time to drop some of the
> awkward path names, though.  Or is it useless churn?
>
> sage
>
>> >>
>> >> Background:
>> >>
>> >> The cluster name concept was added to allow multiple clusters to have
>> >> daemons coexist on the same host.  At the time it was a hypothetical
>> >> requirement for a user that never actually made use of it, and the
>> >> support is kludgey:
>> >>
>> >>  - default cluster name is 'ceph'
>> >>  - default config is /etc/ceph/$cluster.conf, so that the normal
>> >>    'ceph.conf' still works
>> >>  - daemon data paths include the cluster name,
>> >>      /var/lib/ceph/osd/$cluster-$id
>> >>    which is weird (but mostly people are used to it?)
>> >>  - any cli command that touches a non-ceph cluster needs -C $name
>> >>    or --cluster $name passed to it.
>> >>
>> >> Also, as of jewel,
>> >>
>> >>  - systemd only supports a single cluster per host, as defined by $CLUSTER
>> >>    in /etc/{sysconfig,default}/ceph
>> >>
>> >> which you'll notice removes support for the original "requirement".
>> >>
>> >> Also note that you can get the same effect by specifying the config path
>> >> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> >> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>> >>
>> >> Crap preventing us from removing this entirely:
>> >>
>> >>  - existing daemon directories for existing clusters
>> >>  - various scripts parse the cluster name out of paths
>> >>
>> >> Converting an existing cluster "foo" back to "ceph":
>> >>
>> >>  - rename /etc/ceph/foo.conf -> ceph.conf
>> >>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>> >>  - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>> >>  - reboot
>> >>
>> >> Questions:
>> >>
>> >>  - Does anybody on the list use a non-default cluster name?
>> >>  - If so, do you have a reason not to switch back to 'ceph'?
>> >>
>> >> Thanks!
>> >> sage
>> >
>> > --
>> > Regards
>> > Kefu Chai
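
[Editorial note: the snippet below is not from the thread. It is only a rough
model of how the $cluster metavariable expansion Sage mentions above behaves
(e.g. in osd_data), to make concrete why an explicit "-c /etc/ceph/foo.conf"
plus explicitly substituted paths can stand in for a custom --cluster name.
The option names and templates are assumptions based on the defaults quoted
in the thread, not Ceph's actual config code.]

# Illustrative only -- not Ceph's implementation. Models how $cluster/$id
# metavariables expand in option values such as osd_data.

DEFAULT_TEMPLATES = {
    "osd_data": "/var/lib/ceph/osd/$cluster-$id",  # default quoted above
    "mon_data": "/var/lib/ceph/mon/$cluster-$id",  # assumed analogous default
}

def expand(template: str, cluster: str, daemon_id: str) -> str:
    """Expand $cluster and $id the way ceph.conf option values do."""
    return template.replace("$cluster", cluster).replace("$id", daemon_id)

# Default cluster name: osd.3's data directory is /var/lib/ceph/osd/ceph-3.
print(expand(DEFAULT_TEMPLATES["osd_data"], "ceph", "3"))

# A cluster named "foo" only changes the expansion result. Setting
# osd_data = /var/lib/ceph/osd/foo-3 explicitly in /etc/ceph/foo.conf and
# starting the daemon with -c /etc/ceph/foo.conf gives the same layout
# without teaching every tool about a custom cluster name.
print(expand(DEFAULT_TEMPLATES["osd_data"], "foo", "3"))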
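
[Editorial note: the Python sketch below simply walks through the
"foo" -> "ceph" conversion steps Sage lists above. It is not an official
Ceph tool; it assumes all daemons on the host are stopped first and that the
environment file lives at /etc/default/ceph or /etc/sysconfig/ceph.]

#!/usr/bin/env python3
"""Rough sketch of the 'foo' -> 'ceph' rename steps quoted above."""
import glob
import os

OLD, NEW = "foo", "ceph"

# 1. Rename the config file: /etc/ceph/foo.conf -> /etc/ceph/ceph.conf
old_conf = f"/etc/ceph/{OLD}.conf"
if os.path.exists(old_conf):
    os.rename(old_conf, f"/etc/ceph/{NEW}.conf")

# 2. Rename daemon data dirs: /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
for path in glob.glob(f"/var/lib/ceph/*/{OLD}-*"):
    parent, name = os.path.split(path)
    os.rename(path, os.path.join(parent, name.replace(f"{OLD}-", f"{NEW}-", 1)))

# 3. Drop the CLUSTER=foo line from /etc/default/ceph or /etc/sysconfig/ceph
for env_file in ("/etc/default/ceph", "/etc/sysconfig/ceph"):
    if os.path.exists(env_file):
        with open(env_file) as f:
            kept = [line for line in f if not line.startswith(f"CLUSTER={OLD}")]
        with open(env_file, "w") as f:
            f.writelines(kept)

# 4. Reboot so the systemd units pick the daemons back up under 'ceph'.
print("Rename done; reboot the host to restart daemons under the 'ceph' name.")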