* removing cluster name support
From: Sage Weil @ 2017-06-08 19:37 UTC
  To: ceph-devel, ceph-users

At CDM yesterday we talked about removing the ability to name your ceph 
clusters.  There are a number of hurdles that make it difficult to fully 
get rid of this functionality, not the least of which is that some 
(many?) deployed clusters make use of it.  We decided that the most we can 
do at this point is remove support for it in ceph-deploy and ceph-ansible 
so that no new clusters or deployed nodes use it.

The first PR in this effort:

	https://github.com/ceph/ceph-deploy/pull/441

Background:

The cluster name concept was added to allow multiple clusters to have 
daemons coexist on the same host.  At the time it was a hypothetical 
requirement for a user that never actually made use of it, and the 
support is kludgey:

 - default cluster name is 'ceph'
 - default config is /etc/ceph/$cluster.conf, so that the normal 
'ceph.conf' still works
 - daemon data paths include the cluster name,
     /var/lib/ceph/osd/$cluster-$id
   which is weird (but mostly people are used to it?)
 - any CLI command that touches a cluster with a non-default name 
needs -C $name or --cluster $name passed to it.

Also, as of jewel,

 - systemd only supports a single cluster per host, as defined by $CLUSTER 
in /etc/{sysconfig,default}/ceph

which you'll notice removes support for the original "requirement".

Also note that you can get the same effect by specifying the config path 
explicitly (-c /etc/ceph/foo.conf) along with the various options that 
substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
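
For example (a sketch for a hypothetical cluster named "foo", assuming 
the default path conventions described above):

    # shorthand: sets $cluster to "foo" and substitutes it everywhere
    ceph --cluster foo status

    # explicit: name the config file; options that embed $cluster, such
    # as the keyring path, may also need to be spelled out
    ceph -c /etc/ceph/foo.conf --keyring /etc/ceph/foo.client.admin.keyring status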


Crap preventing us from removing this entirely:

 - existing daemon directories for existing clusters
 - various scripts parse the cluster name out of paths


Converting an existing cluster "foo" back to "ceph":

 - rename /etc/ceph/foo.conf -> ceph.conf
 - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
 - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph 
 - reboot
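
A minimal shell sketch of those steps (assumes a cluster named "foo" and 
a Debian-style /etc/default/ceph; use /etc/sysconfig/ceph on RPM systems):

    systemctl stop ceph.target                  # stop all ceph daemons
    mv /etc/ceph/foo.conf /etc/ceph/ceph.conf
    for d in /var/lib/ceph/*/foo-*; do          # osd, mon, mds, ... dirs
        mv "$d" "${d/foo-/ceph-}"
    done
    sed -i '/^CLUSTER=foo/d' /etc/default/ceph  # drop the name override
    reboot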


Questions:

 - Does anybody on the list use a non-default cluster name?
 - If so, do you have a reason not to switch back to 'ceph'?

Thanks!
sage


* Re: [ceph-users] removing cluster name support
From: Bassam Tabbara @ 2017-06-08 19:48 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users

Thanks Sage.

> At CDM yesterday we talked about removing the ability to name your ceph 
> clusters. 


Just to be clear, it would still be possible to run multiple ceph clusters on the same nodes, right?




* Re: [ceph-users] removing cluster name support
From: Sage Weil @ 2017-06-08 19:54 UTC
  To: Bassam Tabbara; +Cc: ceph-devel, ceph-users

On Thu, 8 Jun 2017, Bassam Tabbara wrote:
> Thanks Sage.
> 
> > At CDM yesterday we talked about removing the ability to name your ceph 
> > clusters. 
> 
> Just to be clear, it would still be possible to run multiple ceph 
> clusters on the same nodes, right?

Yes, but you'd need to either (1) use containers (so that different 
daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd 
unit files to do... something.  
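
For (1), a hypothetical sketch (image name and host paths illustrative 
only, not a supported recipe): give each container its own config 
directory so each daemon sees its own /etc/ceph/ceph.conf:

    docker run -d -v /etc/ceph-clusterA:/etc/ceph ceph/daemon osd
    docker run -d -v /etc/ceph-clusterB:/etc/ceph ceph/daemon osd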

This is actually no different from Jewel. It's just that currently you can 
run a single cluster on a host (without containers) but call it 'foo' and 
knock yourself out by passing '--cluster foo' every time you invoke the 
CLI.

I'm guessing you're in the (1) case anyway and this doesn't affect you at 
all :)

sage


* Re: removing cluster name support
From: Dan van der Ster @ 2017-06-08 19:55 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users



Hi Sage,

We need named clusters on the client side. RBD or CephFS clients, or
monitoring/admin machines all need to be able to access several clusters.

Internally, each cluster is indeed called "ceph", but the clients use
distinct names to differentiate their configs/keyrings.
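
For example (cluster names hypothetical), the clusters are each 
internally named "ceph", but a client host carries one config per 
cluster and selects between them by name:

    ceph --cluster clusterA health   # reads /etc/ceph/clusterA.conf
    ceph --cluster clusterB health   # reads /etc/ceph/clusterB.conf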

Cheers, Dan


On Jun 8, 2017 9:37 PM, "Sage Weil" <sweil@redhat.com> wrote:

[...]


* Re: [ceph-users] removing cluster name support
From: Benjeman Meekhof @ 2017-06-08 20:41 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users

Hi Sage,

We did at one time run multiple clusters on our OSD nodes and RGW
nodes (with Jewel).  We accomplished this by putting code in our
puppet-ceph module that would create additional systemd units with
appropriate CLUSTER=name environment settings for clusters not named
ceph.  I.e., if the module were asked to configure OSDs for a cluster
named 'test' it would copy/edit the ceph-osd service to create a
'test-osd@.service' unit that would start instances with CLUSTER=test
so they would point to the right config file, etc.  Eventually on the
RGW side I started doing instance-specific overrides like
'/etc/systemd/system/ceph-radosgw@client.name.d/override.conf' so as
to avoid replicating the stock systemd unit.

We gave up on multiple clusters on the OSD nodes because it wasn't
really that useful to maintain a separate 'test' cluster on the same
hardware.  We continue to need the ability to reference multiple
clusters from RGW nodes and other clients.  As another example, users
of our project might have their own Ceph clusters in addition to
wanting to use ours.

If the daemon solution in the no-clustername future is to 'modify
systemd unit files to do something' we're already doing that so it's
not a big issue.  However, the current approach of overriding CLUSTER
in the environment section of systemd files does seem cleaner than
overriding an exec command to specify a different config file and
keyring path.  Maybe systemd units could ship with those arguments as
variables for easy overriding; a sketch of such an override follows.
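
A hedged sketch of that kind of override (unit, drop-in file, and 
cluster name hypothetical; assumes $CLUSTER is not also forced 
host-wide via /etc/{sysconfig,default}/ceph, which would take 
precedence):

    # override CLUSTER for every instance of the ceph-osd@ template
    mkdir -p /etc/systemd/system/ceph-osd@.service.d
    printf '[Service]\nEnvironment=CLUSTER=test\n' \
        > /etc/systemd/system/ceph-osd@.service.d/cluster.conf
    systemctl daemon-reload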

thanks,
Ben

On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil <sweil@redhat.com> wrote:
> [...]


* Re: [ceph-users] removing cluster name support
From: Vaibhav Bhembre @ 2017-06-08 21:33 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users

We have an internal management service that works at a higher layer
upstream on top of multiple Ceph clusters.  It needs a way to
differentiate and connect separately to each of those clusters.
Presently making that distinction is relatively easy since we create
those connections based on /etc/ceph/$cluster.conf, where each cluster
name is unique.  I am not sure how this will work for us if we lose
the ability to uniquely identify multiple clusters from a single client.

On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil <sweil@redhat.com> wrote:
> [...]


* Re: [ceph-users] removing cluster name support
From: Tim Serong @ 2017-06-09 11:33 UTC
  To: Benjeman Meekhof, Sage Weil; +Cc: ceph-devel, ceph-users

On 06/09/2017 06:41 AM, Benjeman Meekhof wrote:
> [...]
> 
> If the daemon solution in the no-clustername future is to 'modify
> systemd unit files to do something' we're already doing that so it's
> not a big issue.  However, the current approach of overriding CLUSTER
> in the environment section of systemd files does seem cleaner than
> overriding an exec command to specify a different config file and
> keyring path.  Maybe systemd units could ship with those arguments as
> variables for easy overriding.

systemd units can be templated/parameterized, but with only one
parameter, the instance ID, which we're already using
(ceph-mon@$(hostname), ceph-osd@$ID, etc.).



-- 
Tim Serong
Senior Clustering Engineer
SUSE
tserong@suse.com


* Re: [ceph-users] removing cluster name support
From: Alfredo Deza @ 2017-06-09 12:19 UTC
  To: Sage Weil; +Cc: Bassam Tabbara, ceph-devel, ceph-users

On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil <sweil@redhat.com> wrote:
> On Thu, 8 Jun 2017, Bassam Tabbara wrote:
>> Thanks Sage.
>>
>> > At CDM yesterday we talked about removing the ability to name your ceph
>> > clusters.
>>
>> Just to be clear, it would still be possible to run multiple ceph
>> clusters on the same nodes, right?
>
> Yes, but you'd need to either (1) use containers (so that different
> daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd
> unit files to do... something.

In the container case, I need to clarify that ceph-docker deployed
with ceph-ansible is not capable of doing this, since
the ad-hoc systemd units use the hostname as part of the identifier
for the daemon, e.g.:

    systemctl enable ceph-mon@{{ ansible_hostname }}.service


>
> This is actually no different from Jewel. It's just that currently you can
> run a single cluster on a host (without containers) but call it 'foo' and
> knock yourself out by passing '--cluster foo' every time you invoke the
> CLI.
>
> I'm guessing you're in the (1) case anyway and this doesn't affect you at
> all :)
>
> sage
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: removing cluster name support
From: Wes Dillingham @ 2017-06-09 13:11 UTC
  Cc: ceph-devel, ceph-users



Similar to Dan's situation we utilize the --cluster name concept for our
operations. Primarily for "datamover" nodes which do incremental rbd
import/export between distinct clusters. This is entirely coordinated by
utilizing the --cluster option throughout.

The way we set it up is that all clusters are actually named "ceph" on the
mons and osds etc, but the clients themselves get /etc/ceph/clusterA.conf
and /etc/ceph/clusterB.conf so that we can differentiate. I would like to
see the functionality of clients being able to specify which conf file to
read preserved.

As a note though we went the route of naming all clusters "ceph" to
workaround difficulties in non-standard naming so this issue does need some
attention.
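
A hedged sketch of the datamover pattern described above (pool, image, 
and snapshot names hypothetical; both clusters internally named "ceph", 
selected client-side via /etc/ceph/clusterA.conf and clusterB.conf):

    rbd --cluster clusterA export-diff --from-snap snap1 \
        rbd/myimage@snap2 - \
    | rbd --cluster clusterB import-diff - rbd/myimage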

On Fri, Jun 9, 2017 at 8:19 AM, Alfredo Deza <adeza@redhat.com> wrote:

> [...]



-- 
Respectfully,

Wes Dillingham
wes_dillingham@harvard.edu
Research Computing | Senior CyberInfrastructure Storage Engineer
Harvard University | 38 Oxford Street, Cambridge, Ma 02138 | Room 102


* Re: [ceph-users] removing cluster name support
From: Vasu Kulkarni @ 2017-06-09 15:58 UTC
  To: wes_dillingham; +Cc: ceph-devel, ceph-users

On Fri, Jun 9, 2017 at 6:11 AM, Wes Dillingham
<wes_dillingham@harvard.edu> wrote:
> Similar to Dan's situation we utilize the --cluster name concept for our
> operations. Primarily for "datamover" nodes which do incremental rbd
> import/export between distinct clusters. This is entirely coordinated by
> utilizing the --cluster option throughout.
>
> The way we set it up is that all clusters are actually named "ceph" on the
> mons and osds etc, but the clients themselves get /etc/ceph/clusterA.conf
> and /etc/ceph/clusterB.conf so that we can differentiate. I would like to
> see the functionality of clients being able to specify which conf file to
> read preserved.

The ceph.conf and keyring files can live in any location; the default
is /etc/ceph, but one could use another location for clusterB.conf
(http://docs.ceph.com/docs/jewel/rados/configuration/ceph-conf/).  At
least for a client which doesn't run any daemon, this should be
sufficient to make it talk to different clusters.

>
> As a note though we went the route of naming all clusters "ceph" to
> workaround difficulties in non-standard naming so this issue does need some
> attention.
It would be nice if you could file the steps in the tracker; they can
then be moved to the docs to help others rename their clusters back to
'ceph'.


> [...]


* Re: [ceph-users] removing cluster name support
From: Sage Weil @ 2017-06-09 16:07 UTC
  To: ceph-devel, ceph-users

On Thu, 8 Jun 2017, Sage Weil wrote:
> Questions:
> 
>  - Does anybody on the list use a non-default cluster name?
>  - If so, do you have a reason not to switch back to 'ceph'?

It sounds like the answer is "yes," but not for daemons. Several users use 
it on the client side to connect to multiple clusters from the same host.

Nobody is colocating multiple daemons from different clusters on the same 
host.  Some have in the past but stopped.  If they choose to in the 
future, they can customize the systemd units themselves.

The rbd-mirror daemon has a similar requirement to talk to multiple 
clusters as a client.

This makes me conclude our current path is fine:

 - leave existing --cluster infrastructure in place in the ceph code, but
 - remove support for deploying daemons with custom cluster names from the 
deployment tools.

This neatly avoids the systemd limitations for all but the most 
adventuresome admins and avoids the more common case of an admin falling 
into the "oh, I can name my cluster? cool! [...] oh, i have to add 
--cluster rover to every command? ick!" trap.

sage


* Re: removing cluster name support
From: Mykola Golub @ 2017-06-09 16:10 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users, Jason Dillaman

RBD mirror uses the cluster name when configuring its peer [1,2].

[1] http://docs.ceph.com/docs/master/rbd/rbd-mirroring/#add-cluster-peer
[2] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/block_device_guide/block_device_mirroring#configuring_one_way_mirroring
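
For reference, a sketch of the peer specification following the docs 
above (pool and client names hypothetical); the part after '@' is a 
cluster name:

    rbd --cluster local mirror pool peer add image-pool client.remote@remote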

On Thu, Jun 08, 2017 at 07:37:23PM +0000, Sage Weil wrote:
> [...]

-- 
Mykola Golub


* Re: [ceph-users] removing cluster name support
From: Mykola Golub @ 2017-06-09 16:16 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users

On Fri, Jun 09, 2017 at 04:07:15PM +0000, Sage Weil wrote:

> This neatly avoids the systemd limitations for all but the most 
> adventuresome admins and avoids the more common case of an admin falling 
> into the "oh, I can name my cluster? cool! [...] oh, i have to add 
> --cluster rover to every command? ick!" trap.

"... oh, I can add --cluster to CEPH_ARGS. cool!" :-)

-- 
Mykola Golub


* Re: [ceph-users] removing cluster name support
From: Erik McCormick @ 2017-06-09 16:19 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users

On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil <sage@newdream.net> wrote:
> On Thu, 8 Jun 2017, Sage Weil wrote:
>> Questions:
>>
>>  - Does anybody on the list use a non-default cluster name?
>>  - If so, do you have a reason not to switch back to 'ceph'?
>
> It sounds like the answer is "yes," but not for daemons. Several users use
> it on the client side to connect to multiple clusters from the same host.
>

I thought some folks said they were running with non-default naming
for daemons, but if not, then count me as one who does. This was
mainly a relic of the past, where I thought I would be running
multiple clusters on one host. Before long I decided it would be a bad
idea, but by then the cluster was already in heavy use and I couldn't
undo it.

I will say that I am not opposed to renaming back to ceph, but it
would be great to have a documented process for accomplishing this
prior to deprecation. Even going so far as to remove --cluster from
deployment tools will leave me unable to add OSDs if I want to upgrade
when Luminous is released.

> [...]
>
> This neatly avoids the systemd limitations for all but the most
> adventuresome admins and avoids the more common case of an admin falling
> into the "oh, I can name my cluster? cool! [...] oh, i have to add
> --cluster rover to every command? ick!" trap.
>

Yeah, that was me in 2012. Oops.

-Erik

> sage


* Re: removing cluster name support
From: Sage Weil @ 2017-06-09 16:30 UTC
  To: Erik McCormick; +Cc: ceph-devel, ceph-users

On Fri, 9 Jun 2017, Erik McCormick wrote:
> > [...]
> 
> I thought some folks said they were running with non-default naming
> for daemons, but if not, then count me as one who does. This was
> mainly a relic of the past, where I thought I would be running
> multiple clusters on one host. Before long I decided it would be a bad
> idea, but by then the cluster was already in heavy use and I couldn't
> undo it.
> 
> I will say that I am not opposed to renaming back to ceph, but it
> would be great to have a documented process for accomplishing this
> prior to deprecation. Even going so far as to remove --cluster from
> deployment tools will leave me unable to add OSDs if I want to upgrade
> when Luminous is released.

Note that even if the tool doesn't support it, the cluster name is a 
host-local thing, so you can always deploy ceph-named daemons on other 
hosts.

For an existing host, the removal process should be as simple as

 - stop the daemons on the host
 - rename /etc/ceph/foo.conf -> /etc/ceph/ceph.conf
 - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-* (this mainly 
matters for non-osds, since the osd dirs will get dynamically created by 
ceph-disk, but renaming will avoid leaving clutter behind)
 - comment out the CLUSTER= line in /etc/{sysconfig,default}/ceph (if 
you're on jewel)
 - reboot

If you wouldn't mind being a guinea pig and verifying that this is 
sufficient that would be really helpful!  We'll definitely want to 
document this process.

Thanks!
sage




* Re: removing cluster name support
From: Dan van der Ster @ 2017-06-09 16:36 UTC
  To: Vasu Kulkarni; +Cc: wes_dillingham, ceph-devel, ceph-users

On Fri, Jun 9, 2017 at 5:58 PM, Vasu Kulkarni <vakulkar@redhat.com> wrote:
>> [...]
>
> The ceph.conf and keyring files can live in any location; the default
> is /etc/ceph, but one could use another location for clusterB.conf
> (http://docs.ceph.com/docs/jewel/rados/configuration/ceph-conf/).  At
> least for a client which doesn't run any daemon, this should be
> sufficient to make it talk to different clusters.

So we start with this:

    $ ceph --cluster=flax health
    HEALTH_OK

Then for example do:

    $ cd /etc/ceph/
    $ mkdir flax
    $ cp flax.conf flax/ceph.conf
    $ cp flax.client.admin.keyring flax/ceph.client.admin.keyring

Now this works:

    $ ceph --conf=/etc/ceph/flax/ceph.conf --keyring=/etc/ceph/flax/ceph.client.admin.keyring health
    HEALTH_OK

So --cluster is just convenient shorthand for the CLI.

I guess it won't be the end of the world if you drop it, but would it
be so costly to keep that working? (CLI only -- no use-case for
server-side named clusters over here).

--
Dan


* Re: [ceph-users] removing cluster name support
From: Sage Weil @ 2017-06-09 16:42 UTC
  To: Dan van der Ster; +Cc: Vasu Kulkarni, wes_dillingham, ceph-devel, ceph-users

On Fri, 9 Jun 2017, Dan van der Ster wrote:
> [...]
> 
> So we start with this:
> 
>     $ ceph --cluster=flax health
>     HEALTH_OK
> 
> Then for example do:
> 
>     $ cd /etc/ceph/
>     $ mkdir flax
>     $ cp flax.conf flax/ceph.conf
>     $ cp flax.client.admin.keyring flax/ceph.client.admin.keyring
> 
> Now this works:
> 
>     $ ceph --conf=/etc/ceph/flax/ceph.conf --keyring=/etc/ceph/flax/ceph.client.admin.keyring health
>     HEALTH_OK
> 
> So --cluster is just convenient shorthand for the CLI.

Yeah, although it's used elsewhere too:

$ grep \$cluster ../src/common/config_opts.h 
OPTION(admin_socket, OPT_STR, "$run_dir/$cluster-$name.asok") // default changed by common_preinit()
OPTION(log_file, OPT_STR, "/var/log/ceph/$cluster-$name.log") // default changed by common_preinit()
    "default=/var/log/ceph/$cluster.$channel.log cluster=/var/log/ceph/$cluster.log")
    "/etc/ceph/$cluster.$name.keyring,/etc/ceph/$cluster.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin," 
    "/usr/local/etc/ceph/$cluster.$name.keyring,/usr/local/etc/ceph/$cluster.keyring,"
OPTION(mon_data, OPT_STR, "/var/lib/ceph/mon/$cluster-$id")
OPTION(mon_debug_dump_location, OPT_STR, "/var/log/ceph/$cluster-$name.tdump")
OPTION(mds_data, OPT_STR, "/var/lib/ceph/mds/$cluster-$id")
OPTION(osd_data, OPT_STR, "/var/lib/ceph/osd/$cluster-$id")
OPTION(osd_journal, OPT_STR, "/var/lib/ceph/osd/$cluster-$id/journal")
OPTION(rgw_data, OPT_STR, "/var/lib/ceph/radosgw/$cluster-$id")
OPTION(mgr_data, OPT_STR, "/var/lib/ceph/mgr/$cluster-$id") // where to find keyring etc

The only non-daemon ones are admin_socket and log_file, so keep that in 
mind.

> I guess it won't be the end of the world if you drop it, but would it
> be so costly to keep that working? (CLI only -- no use-case for
> server-side named clusters over here).

But yeah... I don't think we'll change any of this except to make the 
deployment tools' lives easier by not supporting it there.

sage


* Re: removing cluster name support
From: Peter Maloney @ 2017-06-11 13:41 UTC
  To: Sage Weil, ceph-devel, ceph-users

On 06/08/17 21:37, Sage Weil wrote:
> Questions:
>
>  - Does anybody on the list use a non-default cluster name?
>  - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage
Will it still be possible for clients to use multiple clusters?

Also how does this affect rbd mirroring?


* Re: [ceph-users] removing cluster name support
From: Erik McCormick @ 2017-11-07  6:39 UTC
  To: Sage Weil; +Cc: ceph-devel, ceph-users

On Fri, Jun 9, 2017 at 12:30 PM, Sage Weil <sage@newdream.net> wrote:
> [...]
>
> Note that even if the tool doesn't support it, the cluster name is a
> host-local thing, so you can always deploy ceph-named daemons on other
> hosts.
>
> For an existing host, the removal process should be as simple as
>
>  - stop the daemons on the host
>  - rename /etc/ceph/foo.conf -> /etc/ceph/ceph.conf
>  - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-* (this mainly
> matters for non-osds, since the osd dirs will get dynamically created by
> ceph-disk, but renaming will avoid leaving clutter behind)
>  - comment out the CLUSTER= line in /etc/{sysconfig,default}/ceph (if
> you're on jewel)
>  - reboot
>
> If you wouldn't mind being a guinea pig and verifying that this is
> sufficient that would be really helpful!  We'll definitely want to
> document this process.
>
> Thanks!
> sage
>
Sitting here in a room with you reminded me that I dropped the ball on
reporting back on the procedure. I did this a couple weeks ago and it
worked fine. I had a few problems with OSDs not wanting to unmount, so
I had to reboot each node along the way. I just used that as an excuse
to run updates.

-Erik


* Re: [ceph-users] removing cluster name support
From: kefu chai @ 2017-11-07 12:09 UTC
  To: Sage Weil; +Cc: ceph-devel, Ceph-User

On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
> [...]
>
> The first PR in this effort:
>
>         https://github.com/ceph/ceph-deploy/pull/441

okay, i am closing https://github.com/ceph/ceph/pull/18638 and
http://tracker.ceph.com/issues/3253

> [...]



-- 
Regards
Kefu Chai


* Re: [ceph-users] removing cluster name support
From: Alfredo Deza @ 2017-11-07 12:45 UTC
  To: kefu chai; +Cc: Sage Weil, ceph-devel, Ceph-User

On Tue, Nov 7, 2017 at 7:09 AM, kefu chai <tchaikov@gmail.com> wrote:
> On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
>> [...]
>>
>> The first PR in this effort:
>>
>>         https://github.com/ceph/ceph-deploy/pull/441
>
> okay, i am closing https://github.com/ceph/ceph/pull/18638 and
> http://tracker.ceph.com/issues/3253

This brings us to a limbo where we aren't supporting it in some places
but still do in others.

It was disabled for ceph-deploy, but ceph-ansible wants to support it
still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )

Sebastien argues that these reasons are strong enough to keep that support in:

- Ceph cluster on demand with containers
- Distributed compute nodes
- rbd-mirror integration as part of OSPd
- Disaster scenario with OpenStack Cinder in OSPd

The problem is that, as you can see with the ceph-disk PR just closed,
there are still other tools that have to implement the juggling of
custom cluster names all over the place, and they will hit some corner
case where the cluster name was not added and things will fail.

Just recently ceph-volume hit one of these places:
https://bugzilla.redhat.com/show_bug.cgi?id=1507943
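
To make the juggling concrete: the name has to be threaded through every
layer, and missing it at any one of them is exactly the kind of corner
case above (sketch only, reusing the knobs from the original mail):

    # every CLI invocation needs the flag
    ceph --cluster foo -s

    # systemd reads the name from CLUSTER=foo in /etc/{sysconfig,default}/ceph
    systemctl start ceph-osd@3

    # and the data paths embed it
    ls /var/lib/ceph/osd/foo-3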

Are we going to support custom cluster names? In what
context/scenarios are we going to allow it?



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [ceph-users] removing cluster name support
  2017-11-07 12:45   ` Alfredo Deza
@ 2017-11-07 19:38     ` Sage Weil
  2017-11-07 20:33       ` Vasu Kulkarni
  0 siblings, 1 reply; 24+ messages in thread
From: Sage Weil @ 2017-11-07 19:38 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: kefu chai, ceph-devel, Ceph-User

On Tue, 7 Nov 2017, Alfredo Deza wrote:
> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai <tchaikov@gmail.com> wrote:
> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
> >> At CDM yesterday we talked about removing the ability to name your ceph
> >> clusters.  There are a number of hurdles that make it difficult to fully
> >> get rid of this functionality, not the least of which is that some
> >> (many?) deployed clusters make use of it.  We decided that the most we can
> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
> >> so that no new clusters or deployed nodes use it.
> >>
> >> The first PR in this effort:
> >>
> >>         https://github.com/ceph/ceph-deploy/pull/441
> >
> > Okay, I am closing https://github.com/ceph/ceph/pull/18638 and
> > http://tracker.ceph.com/issues/3253
> 
> This brings us to a limbo where we aren't supporting it in some places
> but still supporting it in others.
> 
> It was disabled for ceph-deploy, but ceph-ansible wants to support it
> still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )

I still haven't seen a case where custom cluster names for *daemons* are
needed.  Only for client-side $cluster.conf info for connecting.
 
> Sebastien argues that these reasons are strong enough to keep that support in:
> 
> - Ceph cluster on demand with containers

With kubernetes, the cluster will exist in a cluster namespace, and
daemons live in containers, so inside the container the cluster will be
'ceph'.

> - Distributed compute nodes

?

> - rbd-mirror integration as part of OSPd

This is the client-side $cluster.conf for connecting to the remote
cluster.

> - Disaster scenario with OpenStack Cinder in OSPd

Ditto.
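
Both of those cases need only a second conf file on the client side.
Roughly, with hypothetical file names:

    # conf + keyring copied from the remote cluster; "remote" is just a
    # local file name, nothing on the daemon side is renamed
    ceph --conf /etc/ceph/remote.conf \
         --keyring /etc/ceph/remote.client.admin.keyring -s

    # or let --cluster expand $cluster to the same paths
    rbd --cluster remote ls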

> The problem is that, as you can see with the ceph-disk PR just closed,
> there are still other tools that have to implement the juggling of
> custom cluster names all over the place, and they will hit some corner
> case where the cluster name was not added and things will fail.
> 
> Just recently ceph-volume hit one of these places:
> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
> 
> Are we going to support custom cluster names? In what
> context/scenarios are we going to allow it?

It seems like we could drop this support in ceph-volume, unless someone 
can present a compelling reason to keep it?

...

I'd almost want to go a step further and change

/var/lib/ceph/$type/$cluster-$id/

to

 /var/lib/ceph/$type/$id

In kubernetes, we're planning on bind mounting the host's 
/var/lib/ceph/$namespace/$type/$id to the container's 
/var/lib/ceph/$type/ceph-$id.  It might be a good time to drop some of the 
awkward path names, though.  Or is it useless churn?
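
For illustration, with a docker-style runtime and a hypothetical "rook"
namespace on the host, the bind mount would be roughly:

    # today: host path keeps the namespace; in the container the name is
    # always plain 'ceph'
    docker run -v /var/lib/ceph/rook/osd/3:/var/lib/ceph/osd/ceph-3 ...

    # with the path change above, the ceph- prefix would go away too
    docker run -v /var/lib/ceph/rook/osd/3:/var/lib/ceph/osd/3 ...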

sage

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [ceph-users] removing cluster name support
  2017-11-07 19:38     ` Sage Weil
@ 2017-11-07 20:33       ` Vasu Kulkarni
       [not found]         ` <CAKPXa=YDxV1G-sgFEsJ9WpUwDn5N0o3eB1=WZKyG3Cr2uTRXWw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 24+ messages in thread
From: Vasu Kulkarni @ 2017-11-07 20:33 UTC (permalink / raw)
  To: Sage Weil; +Cc: Alfredo Deza, kefu chai, ceph-devel, Ceph-User

On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil <sage@newdream.net> wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai <tchaikov@gmail.com> wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
>> >> At CDM yesterday we talked about removing the ability to name your ceph
>> >> clusters.  There are a number of hurdles that make it difficult to fully
>> >> get rid of this functionality, not the least of which is that some
>> >> (many?) deployed clusters make use of it.  We decided that the most we can
>> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> >> so that no new clusters or deployed nodes use it.
>> >>
>> >> The first PR in this effort:
>> >>
>> >>         https://github.com/ceph/ceph-deploy/pull/441
>> >
>> > Okay, I am closing https://github.com/ceph/ceph/pull/18638 and
>> > http://tracker.ceph.com/issues/3253
>>
>> This brings us to a limbo where we aren't supporting it in some places
>> but still supporting it in others.
>>
>> It was disabled for ceph-deploy, but ceph-ansible wants to support it
>> still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
>
> I still haven't seen a case where custom cluster names for *daemons* are
> needed.  Only for client-side $cluster.conf info for connecting.
>
>> Sebastien argues that these reasons are strong enough to keep that support in:
>>
>> - Ceph cluster on demand with containers
>
> With kubernetes, the cluster will exist in a cluster namespace, and
> daemons live in containers, so inside the container the cluster will be
> 'ceph'.
>
>> - Distributed compute nodes
>
> ?
>
>> - rbd-mirror integration as part of OSPd
>
> This is the client-side $cluster.conf for connecting to the remote
> cluster.
>
>> - Disaster scenario with OpenStack Cinder in OSPd
>
> Ditto.
>
>> The problem is that, as you can see with the ceph-disk PR just closed,
>> there are still other tools that have to implement the juggling of
>> custom cluster names all over the place, and they will hit some corner
>> case where the cluster name was not added and things will fail.
>>
>> Just recently ceph-volume hit one of these places:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>>
>> Are we going to support custom cluster names? In what
>> context/scenarios are we going to allow it?
>
> It seems like we could drop this support in ceph-volume, unless someone
> can present a compelling reason to keep it?
>
> ...
>
> I'd almost want to go a step further and change
>
> /var/lib/ceph/$type/$cluster-$id/
>
> to
>
>  /var/lib/ceph/$type/$id
+1 for custom name support to be disabled from master/stable ansible releases.
And I think rbd-mirror and OpenStack are mostly configuration issues
that could use different conf files to talk to different clusters.
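
For example (hypothetical names), the peer cluster reduces to a conf file
on the client side:

    # /etc/ceph/remote.conf describes the peer; "remote" is only a local
    # file name, nothing on the peer's daemons is renamed
    rbd mirror pool peer add mypool client.mirror@remote

    # and Cinder's rbd driver can point at any conf file, e.g. in the
    # backend section of cinder.conf:
    #   rbd_ceph_conf = /etc/ceph/remote.conf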


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: removing cluster name support
       [not found]         ` <CAKPXa=YDxV1G-sgFEsJ9WpUwDn5N0o3eB1=WZKyG3Cr2uTRXWw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-07 21:08           ` Erik McCormick
  0 siblings, 0 replies; 24+ messages in thread
From: Erik McCormick @ 2017-11-07 21:08 UTC (permalink / raw)
  To: Vasu Kulkarni; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, Ceph-User



On Nov 8, 2017 7:33 AM, "Vasu Kulkarni" <vakulkar-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:

On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil <sage-BnTBU8nroG7k1uMJSBkQmQ@public.gmane.org> wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai <tchaikov-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>> >> At CDM yesterday we talked about removing the ability to name your ceph
>> >> clusters.  There are a number of hurdles that make it difficult to fully
>> >> get rid of this functionality, not the least of which is that some
>> >> (many?) deployed clusters make use of it.  We decided that the most we can
>> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> >> so that no new clusters or deployed nodes use it.
>> >>
>> >> The first PR in this effort:
>> >>
>> >>         https://github.com/ceph/ceph-deploy/pull/441
>> >
>> > Okay, I am closing https://github.com/ceph/ceph/pull/18638 and
>> > http://tracker.ceph.com/issues/3253
>>
>> This brings us to a limbo where we aren't supporting it in some places
>> but still supporting it in others.
>>
>> It was disabled for ceph-deploy, but ceph-ansible wants to support it
>> still (see:  https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
>
> I still haven't seen a case where custom cluster names for *daemons* are
> needed.  Only for client-side $cluster.conf info for connecting.
>
>> Sebastien argues that these reasons are strong enough to keep that support in:
>>
>> - Ceph cluster on demand with containers
>
> With kubernetes, the cluster will exist in a cluster namespace, and
> daemons live in containers, so inside the container the cluster will be
> 'ceph'.
>
>> - Distributed compute nodes
>
> ?
>
>> - rbd-mirror integration as part of OSPd
>
> This is the client-side $cluster.conf for connecting to the remote
> cluster.
>
>> - Disaster scenario with OpenStack Cinder in OSPd
>
> Ditto.
>
>> The problem is that, as you can see with the ceph-disk PR just closed,
>> there are still other tools that have to implement the juggling of
>> custom cluster names all over the place, and they will hit some corner
>> case where the cluster name was not added and things will fail.
>>
>> Just recently ceph-volume hit one of these places:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>>
>> Are we going to support custom cluster names? In what
>> context/scenarios are we going to allow it?
>
> It seems like we could drop this support in ceph-volume, unless someone
> can present a compelling reason to keep it?
>
> ...
>
> I'd almost want to go a step further and change
>
> /var/lib/ceph/$type/$cluster-$id/
>
> to
>
>  /var/lib/ceph/$type/$id
+1 for custom name support to be disabled from master/stable ansible
releases. And I think rbd-mirror and OpenStack are mostly configuration
issues that could use different conf files to talk to different clusters.


Agreed on the OpenStack part. I actually changed nothing on that side of
things. The clients still run with a custom config name with no issues.

-Erik




_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2017-11-07 21:08 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-06-08 19:37 removing cluster name support Sage Weil
2017-06-08 19:48 ` [ceph-users] " Bassam Tabbara
2017-06-08 19:54   ` Sage Weil
2017-06-09 12:19     ` Alfredo Deza
     [not found]       ` <CAC-Np1wjRX99N4q69XfWY0m0fDETpRQZj5Hrgoe6kbrh7riE+A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-06-09 13:11         ` Wes Dillingham
2017-06-09 15:58           ` [ceph-users] " Vasu Kulkarni
     [not found]             ` <CAKPXa=ZjsvhAMwdM9k47L4gaMGVispyJ7bMOyR7dVu0y7pb12A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-06-09 16:36               ` Dan van der Ster
2017-06-09 16:42                 ` [ceph-users] " Sage Weil
     [not found] ` <alpine.DEB.2.11.1706081936570.3646-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
2017-06-08 19:55   ` Dan van der Ster
2017-06-11 13:41   ` Peter Maloney
2017-06-08 20:41 ` [ceph-users] " Benjeman Meekhof
2017-06-09 11:33   ` Tim Serong
2017-06-08 21:33 ` Vaibhav Bhembre
2017-06-09 16:07 ` Sage Weil
2017-06-09 16:16   ` Mykola Golub
2017-06-09 16:19   ` Erik McCormick
     [not found]     ` <CAHUi5cOM8zrnZ80RMqJhEwowE6XmM3dnAKJmxNf8E82fM7Nfbg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-06-09 16:30       ` Sage Weil
2017-11-07  6:39         ` [ceph-users] " Erik McCormick
2017-06-09 16:10 ` Mykola Golub
2017-11-07 12:09 ` [ceph-users] " kefu chai
2017-11-07 12:45   ` Alfredo Deza
2017-11-07 19:38     ` Sage Weil
2017-11-07 20:33       ` Vasu Kulkarni
     [not found]         ` <CAKPXa=YDxV1G-sgFEsJ9WpUwDn5N0o3eB1=WZKyG3Cr2uTRXWw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-07 21:08           ` Erik McCormick
