* Re: [ceph-users] removing cluster name support
2017-06-08 19:37 removing cluster name support Sage Weil
@ 2017-06-08 19:48 ` Bassam Tabbara
2017-06-08 19:54 ` Sage Weil
[not found] ` <alpine.DEB.2.11.1706081936570.3646-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
` (5 subsequent siblings)
6 siblings, 1 reply; 24+ messages in thread
From: Bassam Tabbara @ 2017-06-08 19:48 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, ceph-users
Thanks Sage.
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters.
Just to be clear, it would still be possible to run multiple ceph clusters on the same nodes, right?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-08 19:48 ` [ceph-users] " Bassam Tabbara
@ 2017-06-08 19:54 ` Sage Weil
2017-06-09 12:19 ` Alfredo Deza
0 siblings, 1 reply; 24+ messages in thread
From: Sage Weil @ 2017-06-08 19:54 UTC (permalink / raw)
To: Bassam Tabbara; +Cc: ceph-devel, ceph-users
On Thu, 8 Jun 2017, Bassam Tabbara wrote:
> Thanks Sage.
>
> > At CDM yesterday we talked about removing the ability to name your ceph
> > clusters.
>
> Just to be clear, it would still be possible to run multiple ceph
> clusters on the same nodes, right?
Yes, but you'd need to either (1) use containers (so that different
daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd
unit files to do... something.
This is actually no different from Jewel. It's just that currently you can
run a single cluster on a host (without containers) but call it 'foo' and
knock yourself out by passing '--cluster foo' every time you invoke the
CLI.
I'm guessing you're in the (1) case anyway and this doesn't affect you at
all :)
sage
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-08 19:54 ` Sage Weil
@ 2017-06-09 12:19 ` Alfredo Deza
[not found] ` <CAC-Np1wjRX99N4q69XfWY0m0fDETpRQZj5Hrgoe6kbrh7riE+A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 24+ messages in thread
From: Alfredo Deza @ 2017-06-09 12:19 UTC (permalink / raw)
To: Sage Weil; +Cc: Bassam Tabbara, ceph-devel, ceph-users
On Thu, Jun 8, 2017 at 3:54 PM, Sage Weil <sweil@redhat.com> wrote:
> On Thu, 8 Jun 2017, Bassam Tabbara wrote:
>> Thanks Sage.
>>
>> > At CDM yesterday we talked about removing the ability to name your ceph
>> > clusters.
>>
>> Just to be clear, it would still be possible to run multiple ceph
>> clusters on the same nodes, right?
>
> Yes, but you'd need to either (1) use containers (so that different
> daemons see a different /etc/ceph/ceph.conf) or (2) modify the systemd
> unit files to do... something.
In the container case, I need to clarify that ceph-docker deployed
with ceph-ansible is not capable of doing this, since
the ad-hoc systemd units use the hostname as part of the identifier
for the daemon, e.g.:
systemctl enable ceph-mon@{{ ansible_hostname }}.service
>
> This is actually no different from Jewel. It's just that currently you can
> run a single cluster on a host (without containers) but call it 'foo' and
> knock yourself out by passing '--cluster foo' every time you invoke the
> CLI.
>
> I'm guessing you're in the (1) case anyway and this doesn't affect you at
> all :)
>
> sage
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 24+ messages in thread
[parent not found: <alpine.DEB.2.11.1706081936570.3646-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>]
* Re: removing cluster name support
[not found] ` <alpine.DEB.2.11.1706081936570.3646-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
@ 2017-06-08 19:55 ` Dan van der Ster
2017-06-11 13:41 ` Peter Maloney
1 sibling, 0 replies; 24+ messages in thread
From: Dan van der Ster @ 2017-06-08 19:55 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users
Hi Sage,
We need named clusters on the client side. RBD or CephFS clients, or
monitoring/admin machines all need to be able to access several clusters.
Internally, each cluster is indeed called "ceph", but the clients use
distinct names to differentiate their configs/keyrings.
Cheers, Dan
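[Editorial aside: on the client side, '--cluster NAME' is essentially shorthand for pointing at per-cluster config and keyring paths under /etc/ceph/. A minimal sketch of the pattern Dan describes (the cluster names and keyring path are illustrative, following the default $cluster.conf conventions):]

```shell
# Sketch only: talk to one of several clusters from the same client by
# resolving per-cluster config/keyring paths, which is all that
# '--cluster <name>' does for a client. Cluster names are made up.
ceph_for() {
    cluster="$1"; shift
    # Equivalent to: ceph --cluster "$cluster" ...
    ceph -c "/etc/ceph/${cluster}.conf" \
         --keyring "/etc/ceph/${cluster}.client.admin.keyring" "$@"
}
# Usage: ceph_for prod status; ceph_for backup df
```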
On Jun 8, 2017 9:37 PM, "Sage Weil" <sweil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
At CDM yesterday we talked about removing the ability to name your ceph
clusters. There are a number of hurdles that make it difficult to fully
get rid of this functionality, not the least of which is that some
(many?) deployed clusters make use of it. We decided that the most we can
do at this point is remove support for it in ceph-deploy and ceph-ansible
so that no new clusters or deployed nodes use it.
The first PR in this effort:
https://github.com/ceph/ceph-deploy/pull/441
Background:
The cluster name concept was added to allow multiple clusters to have
daemons coexist on the same host. At the time it was a hypothetical
requirement for a user that never actually made use of it, and the
support is kludgey:
- default cluster name is 'ceph'
- default config is /etc/ceph/$cluster.conf, so that the normal
'ceph.conf' still works
- daemon data paths include the cluster name,
/var/lib/ceph/osd/$cluster-$id
which is weird (but mostly people are used to it?)
- any cli command you want to touch a non-ceph cluster name
needs -C $name or --cluster $name passed to it.
Also, as of jewel,
- systemd only supports a single cluster per host, as defined by $CLUSTER
in /etc/{sysconfig,default}/ceph
which you'll notice removes support for the original "requirement".
Also note that you can get the same effect by specifying the config path
explicitly (-c /etc/ceph/foo.conf) along with the various options that
substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
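[Editorial aside: as an illustrative sketch of that equivalence, a foo.conf could pin the cluster-dependent paths itself; option names follow the $cluster substitution described above, values are placeholders:]

```ini
# /etc/ceph/foo.conf -- illustrative sketch; values are placeholders.
[global]
fsid = <uuid>
mon host = <mon-addrs>

[osd]
# Pin the path that would otherwise be derived from $cluster:
osd data = /var/lib/ceph/osd/foo-$id

# Then start daemons/CLI with the explicit config path, e.g.:
#   ceph-osd -c /etc/ceph/foo.conf --id 0
```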
Crap preventing us from removing this entirely:
- existing daemon directories for existing clusters
- various scripts parse the cluster name out of paths
Converting an existing cluster "foo" back to "ceph":
- rename /etc/ceph/foo.conf -> ceph.conf
- rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
- remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
- reboot
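[Editorial aside: a rough sketch of those conversion steps as a script. It assumes the default layouts described above, takes an optional root prefix so it can be exercised safely, and is not exhaustive: stop the daemons first, and other paths may still embed the old name.]

```shell
# Sketch of the "foo" -> "ceph" conversion steps above. Illustrative
# only: run with daemons stopped; other paths may still embed the name.
# Usage: convert_cluster_name foo [root-prefix]
convert_cluster_name() {
    old="$1"
    root="${2:-}"    # optional prefix, e.g. a test sandbox

    # /etc/ceph/foo.conf -> /etc/ceph/ceph.conf
    mv "$root/etc/ceph/$old.conf" "$root/etc/ceph/ceph.conf"

    # /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
    for d in "$root"/var/lib/ceph/*/"$old"-*; do
        [ -e "$d" ] || continue
        base=$(basename "$d")
        mv "$d" "$(dirname "$d")/ceph-${base#"$old"-}"
    done

    # drop the CLUSTER=foo line; then reboot (or restart all daemons)
    for f in "$root/etc/default/ceph" "$root/etc/sysconfig/ceph"; do
        [ -f "$f" ] && sed -i "/^CLUSTER=$old/d" "$f"
    done
    return 0
}
```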
Questions:
- Does anybody on the list use a non-default cluster name?
- If so, do you have a reason not to switch back to 'ceph'?
Thanks!
sage
_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: removing cluster name support
[not found] ` <alpine.DEB.2.11.1706081936570.3646-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
2017-06-08 19:55 ` Dan van der Ster
@ 2017-06-11 13:41 ` Peter Maloney
1 sibling, 0 replies; 24+ messages in thread
From: Peter Maloney @ 2017-06-11 13:41 UTC (permalink / raw)
To: Sage Weil, ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ
On 06/08/17 21:37, Sage Weil wrote:
> Questions:
>
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage
Will it still be possible for clients to use multiple clusters?
Also how does this affect rbd mirroring?
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-08 19:37 removing cluster name support Sage Weil
2017-06-08 19:48 ` [ceph-users] " Bassam Tabbara
[not found] ` <alpine.DEB.2.11.1706081936570.3646-qHenpvqtifaMSRpgCs4c+g@public.gmane.org>
@ 2017-06-08 20:41 ` Benjeman Meekhof
2017-06-09 11:33 ` Tim Serong
2017-06-08 21:33 ` Vaibhav Bhembre
` (3 subsequent siblings)
6 siblings, 1 reply; 24+ messages in thread
From: Benjeman Meekhof @ 2017-06-08 20:41 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, ceph-users
Hi Sage,
We did at one time run multiple clusters on our OSD nodes and RGW
nodes (with Jewel). We accomplished this by putting code in our
puppet-ceph module that would create additional systemd units with
appropriate CLUSTER=name environment settings for clusters not named
ceph. I.e., if the module were asked to configure an OSD for a cluster
named 'test' it would copy/edit the ceph-osd service to create a
'test-osd@.service' unit that would start instances with CLUSTER=test
so they would point to the right config file, etc Eventually on the
RGW side I started doing instance-specific overrides like
'/etc/systemd/system/ceph-radosgw@client.name.d/override.conf' so as
to avoid replicating the stock systemd unit.
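[Editorial aside: the instance-specific override mentioned here can be sketched as a systemd drop-in like this; a hypothetical fragment, with the unit path taken from the message above and the cluster name made up:]

```ini
# /etc/systemd/system/ceph-radosgw@client.name.d/override.conf (sketch)
# Point this instance at a cluster named "test" without copying the
# stock unit file; follow with `systemctl daemon-reload`.
[Service]
Environment=CLUSTER=test
```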
We gave up on multiple clusters on the OSD nodes because it wasn't
really that useful to maintain a separate 'test' cluster on the same
hardware. We continue to need the ability to reference multiple clusters
for RGW nodes and other clients. As another example, users of our
project might have their own Ceph clusters in addition to wanting to
use ours.
If the daemon solution in the no-cluster-name future is to 'modify
systemd unit files to do something', we're already doing that, so it's
not a big issue. However, the current approach of overriding
CLUSTER in the environment section of systemd files does seem cleaner
than overriding an exec command to specify a different config file
and keyring path. Maybe the systemd units could ship with those
arguments as variables for easy overriding.
thanks,
Ben
On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil <sweil@redhat.com> wrote:
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters. There are a number of hurdles that make it difficult to fully
> get rid of this functionality, not the least of which is that some
> (many?) deployed clusters make use of it. We decided that the most we can
> do at this point is remove support for it in ceph-deploy and ceph-ansible
> so that no new clusters or deployed nodes use it.
>
> The first PR in this effort:
>
> https://github.com/ceph/ceph-deploy/pull/441
>
> Background:
>
> The cluster name concept was added to allow multiple clusters to have
> daemons coexist on the same host. At the time it was a hypothetical
> requirement for a user that never actually made use of it, and the
> support is kludgey:
>
> - default cluster name is 'ceph'
> - default config is /etc/ceph/$cluster.conf, so that the normal
> 'ceph.conf' still works
> - daemon data paths include the cluster name,
> /var/lib/ceph/osd/$cluster-$id
> which is weird (but mostly people are used to it?)
> - any cli command you want to touch a non-ceph cluster name
> needs -C $name or --cluster $name passed to it.
>
> Also, as of jewel,
>
> - systemd only supports a single cluster per host, as defined by $CLUSTER
> in /etc/{sysconfig,default}/ceph
>
> which you'll notice removes support for the original "requirement".
>
> Also note that you can get the same effect by specifying the config path
> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>
>
> Crap preventing us from removing this entirely:
>
> - existing daemon directories for existing clusters
> - various scripts parse the cluster name out of paths
>
>
> Converting an existing cluster "foo" back to "ceph":
>
> - rename /etc/ceph/foo.conf -> ceph.conf
> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
> - reboot
>
>
> Questions:
>
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-08 20:41 ` [ceph-users] " Benjeman Meekhof
@ 2017-06-09 11:33 ` Tim Serong
0 siblings, 0 replies; 24+ messages in thread
From: Tim Serong @ 2017-06-09 11:33 UTC (permalink / raw)
To: Benjeman Meekhof, Sage Weil; +Cc: ceph-devel, ceph-users
On 06/09/2017 06:41 AM, Benjeman Meekhof wrote:
> Hi Sage,
>
> We did at one time run multiple clusters on our OSD nodes and RGW
> nodes (with Jewel). We accomplished this by putting code in our
> puppet-ceph module that would create additional systemd units with
> appropriate CLUSTER=name environment settings for clusters not named
> ceph. I.e., if the module were asked to configure an OSD for a cluster
> named 'test' it would copy/edit the ceph-osd service to create a
> 'test-osd@.service' unit that would start instances with CLUSTER=test
> so they would point to the right config file, etc. Eventually, on the
> RGW side I started doing instance-specific overrides like
> '/etc/systemd/system/ceph-radosgw@client.name.d/override.conf' so as
> to avoid replicating the stock systemd unit.
>
> We gave up on multiple clusters on the OSD nodes because it wasn't
> really that useful to maintain a separate 'test' cluster on the same
> hardware. We continue to need the ability to reference multiple clusters
> for RGW nodes and other clients. As another example, users of our
> project might have their own Ceph clusters in addition to wanting to
> use ours.
>
> If the daemon solution in the no-cluster-name future is to 'modify
> systemd unit files to do something', we're already doing that, so it's
> not a big issue. However, the current approach of overriding
> CLUSTER in the environment section of systemd files does seem cleaner
> than overriding an exec command to specify a different config file
> and keyring path. Maybe the systemd units could ship with those
> arguments as variables for easy overriding.
systemd units can be templated/parameterized, but with only one
parameter, the instance ID, which we're already using
(ceph-mon@$(hostname), ceph-osd@$ID, etc.)
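[Editorial aside: for context, the jewel-era units roughly look like this (a paraphrase, not the verbatim shipped file): the single %i template parameter carries the daemon id, so the cluster name has to come in via the environment:]

```ini
# ceph-osd@.service (sketch, not the verbatim upstream unit)
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
# %i is the one template parameter available: the OSD id.
ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i
```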
>
> thanks,
> Ben
>
> On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil <sweil@redhat.com> wrote:
>> At CDM yesterday we talked about removing the ability to name your ceph
>> clusters. There are a number of hurdles that make it difficult to fully
>> get rid of this functionality, not the least of which is that some
>> (many?) deployed clusters make use of it. We decided that the most we can
>> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> so that no new clusters or deployed nodes use it.
>>
>> The first PR in this effort:
>>
>> https://github.com/ceph/ceph-deploy/pull/441
>>
>> Background:
>>
>> The cluster name concept was added to allow multiple clusters to have
>> daemons coexist on the same host. At the time it was a hypothetical
>> requirement for a user that never actually made use of it, and the
>> support is kludgey:
>>
>> - default cluster name is 'ceph'
>> - default config is /etc/ceph/$cluster.conf, so that the normal
>> 'ceph.conf' still works
>> - daemon data paths include the cluster name,
>> /var/lib/ceph/osd/$cluster-$id
>> which is weird (but mostly people are used to it?)
>> - any cli command you want to touch a non-ceph cluster name
>> needs -C $name or --cluster $name passed to it.
>>
>> Also, as of jewel,
>>
>> - systemd only supports a single cluster per host, as defined by $CLUSTER
>> in /etc/{sysconfig,default}/ceph
>>
>> which you'll notice removes support for the original "requirement".
>>
>> Also note that you can get the same effect by specifying the config path
>> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>>
>>
>> Crap preventing us from removing this entirely:
>>
>> - existing daemon directories for existing clusters
>> - various scripts parse the cluster name out of paths
>>
>>
>> Converting an existing cluster "foo" back to "ceph":
>>
>> - rename /etc/ceph/foo.conf -> ceph.conf
>> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>> - reboot
>>
>>
>> Questions:
>>
>> - Does anybody on the list use a non-default cluster name?
>> - If so, do you have a reason not to switch back to 'ceph'?
>>
>> Thanks!
>> sage
--
Tim Serong
Senior Clustering Engineer
SUSE
tserong@suse.com
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-08 19:37 removing cluster name support Sage Weil
` (2 preceding siblings ...)
2017-06-08 20:41 ` [ceph-users] " Benjeman Meekhof
@ 2017-06-08 21:33 ` Vaibhav Bhembre
2017-06-09 16:07 ` Sage Weil
` (2 subsequent siblings)
6 siblings, 0 replies; 24+ messages in thread
From: Vaibhav Bhembre @ 2017-06-08 21:33 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, ceph-users
We have an internal management service that works at a higher layer
on top of multiple Ceph clusters. It needs a way to
differentiate and connect separately to each of those clusters.
Presently making that distinction is relatively easy, since we create
those connections based on /etc/ceph/$cluster.conf, where each cluster
name is unique. I am not sure how this will work for us if we move away
from uniquely identifying multiple clusters from a single client.
On Thu, Jun 8, 2017 at 3:37 PM, Sage Weil <sweil@redhat.com> wrote:
>
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters. There are a number of hurdles that make it difficult to fully
> get rid of this functionality, not the least of which is that some
> (many?) deployed clusters make use of it. We decided that the most we can
> do at this point is remove support for it in ceph-deploy and ceph-ansible
> so that no new clusters or deployed nodes use it.
>
> The first PR in this effort:
>
> https://github.com/ceph/ceph-deploy/pull/441
>
> Background:
>
> The cluster name concept was added to allow multiple clusters to have
> daemons coexist on the same host. At the time it was a hypothetical
> requirement for a user that never actually made use of it, and the
> support is kludgey:
>
> - default cluster name is 'ceph'
> - default config is /etc/ceph/$cluster.conf, so that the normal
> 'ceph.conf' still works
> - daemon data paths include the cluster name,
> /var/lib/ceph/osd/$cluster-$id
> which is weird (but mostly people are used to it?)
> - any cli command you want to touch a non-ceph cluster name
> needs -C $name or --cluster $name passed to it.
>
> Also, as of jewel,
>
> - systemd only supports a single cluster per host, as defined by $CLUSTER
> in /etc/{sysconfig,default}/ceph
>
> which you'll notice removes support for the original "requirement".
>
> Also note that you can get the same effect by specifying the config path
> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>
>
> Crap preventing us from removing this entirely:
>
> - existing daemon directories for existing clusters
> - various scripts parse the cluster name out of paths
>
>
> Converting an existing cluster "foo" back to "ceph":
>
> - rename /etc/ceph/foo.conf -> ceph.conf
> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
> - reboot
>
>
> Questions:
>
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-08 19:37 removing cluster name support Sage Weil
` (3 preceding siblings ...)
2017-06-08 21:33 ` Vaibhav Bhembre
@ 2017-06-09 16:07 ` Sage Weil
2017-06-09 16:16 ` Mykola Golub
2017-06-09 16:19 ` Erik McCormick
2017-06-09 16:10 ` Mykola Golub
2017-11-07 12:09 ` [ceph-users] " kefu chai
6 siblings, 2 replies; 24+ messages in thread
From: Sage Weil @ 2017-06-09 16:07 UTC (permalink / raw)
To: ceph-devel, ceph-users
On Thu, 8 Jun 2017, Sage Weil wrote:
> Questions:
>
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
It sounds like the answer is "yes," but not for daemons. Several users use
it on the client side to connect to multiple clusters from the same host.
Nobody is colocating multiple daemons from different clusters on the same
host. Some have in the past but stopped. If they choose to in the
future, they can customize the systemd units themselves.
The rbd-mirror daemon has a similar requirement to talk to multiple
clusters as a client.
This makes me conclude our current path is fine:
- leave existing --cluster infrastructure in place in the ceph code, but
- remove support for deploying daemons with custom cluster names from the
deployment tools.
This neatly avoids the systemd limitations for all but the most
adventuresome admins and avoids the more common case of an admin falling
into the "oh, I can name my cluster? cool! [...] oh, i have to add
--cluster rover to every command? ick!" trap.
sage
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-09 16:07 ` Sage Weil
@ 2017-06-09 16:16 ` Mykola Golub
2017-06-09 16:19 ` Erik McCormick
1 sibling, 0 replies; 24+ messages in thread
From: Mykola Golub @ 2017-06-09 16:16 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, ceph-users
On Fri, Jun 09, 2017 at 04:07:15PM +0000, Sage Weil wrote:
> This neatly avoids the systemd limitations for all but the most
> adventuresome admins and avoids the more common case of an admin falling
> into the "oh, I can name my cluster? cool! [...] oh, i have to add
> --cluster rover to every command? ick!" trap.
"... oh, I can add --cluster to CEPH_ARGS. cool!" :-)
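[Editorial aside: i.e. something like this, so every plain CLI invocation in that shell targets the named cluster (a sketch; the cluster name echoes Sage's example above):]

```shell
# Make every ceph/rados/rbd invocation in this shell default to the
# "rover" cluster instead of passing --cluster rover each time.
export CEPH_ARGS="--cluster rover"
# ceph status    # now behaves like: ceph --cluster rover status
```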
--
Mykola Golub
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-09 16:07 ` Sage Weil
2017-06-09 16:16 ` Mykola Golub
@ 2017-06-09 16:19 ` Erik McCormick
[not found] ` <CAHUi5cOM8zrnZ80RMqJhEwowE6XmM3dnAKJmxNf8E82fM7Nfbg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
1 sibling, 1 reply; 24+ messages in thread
From: Erik McCormick @ 2017-06-09 16:19 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, ceph-users
On Fri, Jun 9, 2017 at 12:07 PM, Sage Weil <sage@newdream.net> wrote:
> On Thu, 8 Jun 2017, Sage Weil wrote:
>> Questions:
>>
>> - Does anybody on the list use a non-default cluster name?
>> - If so, do you have a reason not to switch back to 'ceph'?
>
> It sounds like the answer is "yes," but not for daemons. Several users use
> it on the client side to connect to multiple clusters from the same host.
>
I thought some folks said they were running with non-default naming
for daemons, but if not, then count me as one who does. This was
mainly a relic of the past, where I thought I would be running
multiple clusters on one host. Before long I decided it would be a bad
idea, but by then the cluster was already in heavy use and I couldn't
undo it.
I will say that I am not opposed to renaming back to ceph, but it
would be great to have a documented process for accomplishing this
prior to deprecation. Even going so far as to remove --cluster from
deployment tools will leave me unable to add OSDs if I want to upgrade
when Luminous is released.
> Nobody is colocating multiple daemons from different clusters on the same
> host. Some have in the past but stopped. If they choose to in the
> future, they can customize the systemd units themselves.
>
> The rbd-mirror daemon has a similar requirement to talk to multiple
> clusters as a client.
>
> This makes me conclude our current path is fine:
>
> - leave existing --cluster infrastructure in place in the ceph code, but
> - remove support for deploying daemons with custom cluster names from the
> deployment tools.
>
> This neatly avoids the systemd limitations for all but the most
> adventuresome admins and avoids the more common case of an admin falling
> into the "oh, I can name my cluster? cool! [...] oh, i have to add
> --cluster rover to every command? ick!" trap.
>
Yeah, that was me in 2012. Oops.
-Erik
> sage
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: removing cluster name support
2017-06-08 19:37 removing cluster name support Sage Weil
` (4 preceding siblings ...)
2017-06-09 16:07 ` Sage Weil
@ 2017-06-09 16:10 ` Mykola Golub
2017-11-07 12:09 ` [ceph-users] " kefu chai
6 siblings, 0 replies; 24+ messages in thread
From: Mykola Golub @ 2017-06-09 16:10 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, ceph-users, Jason Dillaman
rbd-mirror uses the cluster name when configuring its peer [1,2]
[1] http://docs.ceph.com/docs/master/rbd/rbd-mirroring/#add-cluster-peer
[2] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/block_device_guide/block_device_mirroring#configuring_one_way_mirroring
On Thu, Jun 08, 2017 at 07:37:23PM +0000, Sage Weil wrote:
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters. There are a number of hurdles that make it difficult to fully
> get rid of this functionality, not the least of which is that some
> (many?) deployed clusters make use of it. We decided that the most we can
> do at this point is remove support for it in ceph-deploy and ceph-ansible
> so that no new clusters or deployed nodes use it.
>
> The first PR in this effort:
>
> https://github.com/ceph/ceph-deploy/pull/441
>
> Background:
>
> The cluster name concept was added to allow multiple clusters to have
> daemons coexist on the same host. At the time it was a hypothetical
> requirement for a user that never actually made use of it, and the
> support is kludgey:
>
> - default cluster name is 'ceph'
> - default config is /etc/ceph/$cluster.conf, so that the normal
> 'ceph.conf' still works
> - daemon data paths include the cluster name,
> /var/lib/ceph/osd/$cluster-$id
> which is weird (but mostly people are used to it?)
> - any cli command you want to touch a non-ceph cluster name
> needs -C $name or --cluster $name passed to it.
>
> Also, as of jewel,
>
> - systemd only supports a single cluster per host, as defined by $CLUSTER
> in /etc/{sysconfig,default}/ceph
>
> which you'll notice removes support for the original "requirement".
>
> Also note that you can get the same effect by specifying the config path
> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>
>
> Crap preventing us from removing this entirely:
>
> - existing daemon directories for existing clusters
> - various scripts parse the cluster name out of paths
>
>
> Converting an existing cluster "foo" back to "ceph":
>
> - rename /etc/ceph/foo.conf -> ceph.conf
> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
> - reboot
>
>
> Questions:
>
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage
--
Mykola Golub
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-06-08 19:37 removing cluster name support Sage Weil
` (5 preceding siblings ...)
2017-06-09 16:10 ` Mykola Golub
@ 2017-11-07 12:09 ` kefu chai
2017-11-07 12:45 ` Alfredo Deza
6 siblings, 1 reply; 24+ messages in thread
From: kefu chai @ 2017-11-07 12:09 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, Ceph-User
On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
> At CDM yesterday we talked about removing the ability to name your ceph
> clusters. There are a number of hurdles that make it difficult to fully
> get rid of this functionality, not the least of which is that some
> (many?) deployed clusters make use of it. We decided that the most we can
> do at this point is remove support for it in ceph-deploy and ceph-ansible
> so that no new clusters or deployed nodes use it.
>
> The first PR in this effort:
>
> https://github.com/ceph/ceph-deploy/pull/441
okay, i am closing https://github.com/ceph/ceph/pull/18638 and
http://tracker.ceph.com/issues/3253
>
> Background:
>
> The cluster name concept was added to allow multiple clusters to have
> daemons coexist on the same host. At the time it was a hypothetical
> requirement for a user that never actually made use of it, and the
> support is kludgey:
>
> - default cluster name is 'ceph'
> - default config is /etc/ceph/$cluster.conf, so that the normal
> 'ceph.conf' still works
> - daemon data paths include the cluster name,
> /var/lib/ceph/osd/$cluster-$id
> which is weird (but mostly people are used to it?)
> - any cli command you want to touch a non-ceph cluster name
> needs -C $name or --cluster $name passed to it.
>
> Also, as of jewel,
>
> - systemd only supports a single cluster per host, as defined by $CLUSTER
> in /etc/{sysconfig,default}/ceph
>
> which you'll notice removes support for the original "requirement".
>
> Also note that you can get the same effect by specifying the config path
> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>
>
> Crap preventing us from removing this entirely:
>
> - existing daemon directories for existing clusters
> - various scripts parse the cluster name out of paths
>
>
> Converting an existing cluster "foo" back to "ceph":
>
> - rename /etc/ceph/foo.conf -> ceph.conf
> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
> - reboot
>
>
> Questions:
>
> - Does anybody on the list use a non-default cluster name?
> - If so, do you have a reason not to switch back to 'ceph'?
>
> Thanks!
> sage
--
Regards
Kefu Chai
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-11-07 12:09 ` [ceph-users] " kefu chai
@ 2017-11-07 12:45 ` Alfredo Deza
2017-11-07 19:38 ` Sage Weil
0 siblings, 1 reply; 24+ messages in thread
From: Alfredo Deza @ 2017-11-07 12:45 UTC (permalink / raw)
To: kefu chai; +Cc: Sage Weil, ceph-devel, Ceph-User
On Tue, Nov 7, 2017 at 7:09 AM, kefu chai <tchaikov@gmail.com> wrote:
> On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
>> At CDM yesterday we talked about removing the ability to name your ceph
>> clusters. There are a number of hurdles that make it difficult to fully
>> get rid of this functionality, not the least of which is that some
>> (many?) deployed clusters make use of it. We decided that the most we can
>> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> so that no new clusters or deployed nodes use it.
>>
>> The first PR in this effort:
>>
>> https://github.com/ceph/ceph-deploy/pull/441
>
> okay, i am closing https://github.com/ceph/ceph/pull/18638 and
> http://tracker.ceph.com/issues/3253
This brings us to a limbo where we aren't supporting it in some places
but still do in others.
It was disabled for ceph-deploy, but ceph-ansible wants to support it
still (see: https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
Sebastien argues that these reasons are strong enough to keep that support in:
- Ceph cluster on demand with containers
- Distributed compute nodes
- rbd-mirror integration as part of OSPd
- Disaster scenario with OpenStack Cinder in OSPd
The problem is that, as you can see with the ceph-disk PR just closed,
there are still other tools that have to implement the juggling of
custom cluster names all over the place, and they will hit some corner
case where the cluster name was not added and things will fail.
Just recently ceph-volume hit one of these places:
https://bugzilla.redhat.com/show_bug.cgi?id=1507943
Are we going to support custom cluster names? In what
context/scenarios are we going to allow it?
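To make the juggling concrete, here is a minimal sketch of the $cluster substitution every tool must repeat (the path conventions are the ones quoted below; the variable names are illustrative):

```shell
# Illustrative only: the $cluster substitution each tool has to get right.
CLUSTER=foo
OSD_ID=0

CONF="/etc/ceph/${CLUSTER}.conf"                  # default cluster: /etc/ceph/ceph.conf
OSD_DATA="/var/lib/ceph/osd/${CLUSTER}-${OSD_ID}" # default cluster: /var/lib/ceph/osd/ceph-0

echo "${CONF}"      # /etc/ceph/foo.conf
echo "${OSD_DATA}"  # /var/lib/ceph/osd/foo-0
```

Miss this substitution in one corner of one tool and a cluster named 'foo' fails in exactly the way described here.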
>
>>
>> Background:
>>
>> The cluster name concept was added to allow multiple clusters to have
>> daemons coexist on the same host. At the time it was a hypothetical
>> requirement for a user that never actually made use of it, and the
>> support is kludgey:
>>
>> - default cluster name is 'ceph'
>> - default config is /etc/ceph/$cluster.conf, so that the normal
>> 'ceph.conf' still works
>> - daemon data paths include the cluster name,
>> /var/lib/ceph/osd/$cluster-$id
>> which is weird (but mostly people are used to it?)
>> - any cli command you want to touch a non-ceph cluster name
>> needs -C $name or --cluster $name passed to it.
>>
>> Also, as of jewel,
>>
>> - systemd only supports a single cluster per host, as defined by $CLUSTER
>> in /etc/{sysconfig,default}/ceph
>>
>> which you'll notice removes support for the original "requirement".
>>
>> Also note that you can get the same effect by specifying the config path
>> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>>
>>
>> Crap preventing us from removing this entirely:
>>
>> - existing daemon directories for existing clusters
>> - various scripts parse the cluster name out of paths
>>
>>
>> Converting an existing cluster "foo" back to "ceph":
>>
>> - rename /etc/ceph/foo.conf -> ceph.conf
>> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>> - reboot
>>
>>
>> Questions:
>>
>> - Does anybody on the list use a non-default cluster name?
>> - If so, do you have a reason not to switch back to 'ceph'?
>>
>> Thanks!
>> sage
>
>
>
> --
> Regards
> Kefu Chai
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-11-07 12:45 ` Alfredo Deza
@ 2017-11-07 19:38 ` Sage Weil
2017-11-07 20:33 ` Vasu Kulkarni
0 siblings, 1 reply; 24+ messages in thread
From: Sage Weil @ 2017-11-07 19:38 UTC (permalink / raw)
To: Alfredo Deza; +Cc: kefu chai, ceph-devel, Ceph-User
On Tue, 7 Nov 2017, Alfredo Deza wrote:
> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai <tchaikov@gmail.com> wrote:
> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
> >> At CDM yesterday we talked about removing the ability to name your ceph
> >> clusters. There are a number of hurdles that make it difficult to fully
> >> get rid of this functionality, not the least of which is that some
> >> (many?) deployed clusters make use of it. We decided that the most we can
> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
> >> so that no new clusters or deployed nodes use it.
> >>
> >> The first PR in this effort:
> >>
> >> https://github.com/ceph/ceph-deploy/pull/441
> >
> > okay, i am closing https://github.com/ceph/ceph/pull/18638 and
> > http://tracker.ceph.com/issues/3253
>
> This brings us to a limbo where we aren't supporting it in some places
> but still do in others.
>
> It was disabled for ceph-deploy, but ceph-ansible wants to support it
> still (see: https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
I still haven't seen a case where custom cluster names for *daemons* are
needed. Only for client-side $cluster.conf info for connecting.
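A hedged sketch of that client-side case ('remote' is a made-up name for the example; nothing on the daemon side is involved):

```shell
# Client side only: a second conf file is all that is needed to reach a
# second cluster. The cluster name 'remote' is illustrative.
CLUSTER=remote
CONF="/etc/ceph/${CLUSTER}.conf"

# These two invocations are equivalent (not run here; they need a live cluster):
#   ceph --cluster "${CLUSTER}" status
#   ceph -c "${CONF}" status
echo "${CONF}"  # /etc/ceph/remote.conf
```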
> Sebastien argues that these reasons are strong enough to keep that support in:
>
> - Ceph cluster on demand with containers
With kubernetes, the cluster will exist in a cluster namespace, and
daemons live in containers, so inside the container the cluster will be
'ceph'.
> - Distributed compute nodes
?
> - rbd-mirror integration as part of OSPd
This is the client-side $cluster.conf for connecting to the remote
cluster.
> - Disaster scenario with OpenStack Cinder in OSPd
Ditto.
> The problem is that, as you can see with the ceph-disk PR just closed,
> there are still other tools that have to implement the juggling of
> custom cluster names all over the place, and they will hit some corner
> case where the cluster name was not added and things will fail.
>
> Just recently ceph-volume hit one of these places:
> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>
> Are we going to support custom cluster names? In what
> context/scenarios are we going to allow it?
It seems like we could drop this support in ceph-volume, unless someone
can present a compelling reason to keep it?
...
I'd almost want to go a step further and change
/var/lib/ceph/$type/$cluster-$id/
to
/var/lib/ceph/$type/$id
In kubernetes, we're planning on bind mounting the host's
/var/lib/ceph/$namespace/$type/$id to the container's
/var/lib/ceph/$type/ceph-$id. It might be a good time to drop some of the
awkward path names, though. Or is it useless churn?
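As a rough sketch of that bind mount (the namespace, type, and ID here are placeholders, not the actual kubernetes manifests):

```shell
# Illustrative host-to-container path mapping for the kubernetes plan.
NAMESPACE=ns1  # placeholder namespace
TYPE=osd
ID=0

HOST_DIR="/var/lib/ceph/${NAMESPACE}/${TYPE}/${ID}"
CONTAINER_DIR="/var/lib/ceph/${TYPE}/ceph-${ID}"

# e.g. handed to the container runtime as a volume argument:
echo "-v ${HOST_DIR}:${CONTAINER_DIR}"  # -v /var/lib/ceph/ns1/osd/0:/var/lib/ceph/osd/ceph-0
```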
sage
>
>
> >
> >>
> >> Background:
> >>
> >> The cluster name concept was added to allow multiple clusters to have
> >> daemons coexist on the same host. At the time it was a hypothetical
> >> requirement for a user that never actually made use of it, and the
> >> support is kludgey:
> >>
> >> - default cluster name is 'ceph'
> >> - default config is /etc/ceph/$cluster.conf, so that the normal
> >> 'ceph.conf' still works
> >> - daemon data paths include the cluster name,
> >> /var/lib/ceph/osd/$cluster-$id
> >> which is weird (but mostly people are used to it?)
> >> - any cli command you want to touch a non-ceph cluster name
> >> needs -C $name or --cluster $name passed to it.
> >>
> >> Also, as of jewel,
> >>
> >> - systemd only supports a single cluster per host, as defined by $CLUSTER
> >> in /etc/{sysconfig,default}/ceph
> >>
> >> which you'll notice removes support for the original "requirement".
> >>
> >> Also note that you can get the same effect by specifying the config path
> >> explicitly (-c /etc/ceph/foo.conf) along with the various options that
> >> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
> >>
> >>
> >> Crap preventing us from removing this entirely:
> >>
> >> - existing daemon directories for existing clusters
> >> - various scripts parse the cluster name out of paths
> >>
> >>
> >> Converting an existing cluster "foo" back to "ceph":
> >>
> >> - rename /etc/ceph/foo.conf -> ceph.conf
> >> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
> >> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
> >> - reboot
> >>
> >>
> >> Questions:
> >>
> >> - Does anybody on the list use a non-default cluster name?
> >> - If so, do you have a reason not to switch back to 'ceph'?
> >>
> >> Thanks!
> >> sage
> >
> >
> >
> > --
> > Regards
> > Kefu Chai
>
>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [ceph-users] removing cluster name support
2017-11-07 19:38 ` Sage Weil
@ 2017-11-07 20:33 ` Vasu Kulkarni
[not found] ` <CAKPXa=YDxV1G-sgFEsJ9WpUwDn5N0o3eB1=WZKyG3Cr2uTRXWw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 24+ messages in thread
From: Vasu Kulkarni @ 2017-11-07 20:33 UTC (permalink / raw)
To: Sage Weil; +Cc: Alfredo Deza, kefu chai, ceph-devel, Ceph-User
On Tue, Nov 7, 2017 at 11:38 AM, Sage Weil <sage@newdream.net> wrote:
> On Tue, 7 Nov 2017, Alfredo Deza wrote:
>> On Tue, Nov 7, 2017 at 7:09 AM, kefu chai <tchaikov@gmail.com> wrote:
>> > On Fri, Jun 9, 2017 at 3:37 AM, Sage Weil <sweil@redhat.com> wrote:
>> >> At CDM yesterday we talked about removing the ability to name your ceph
>> >> clusters. There are a number of hurdles that make it difficult to fully
>> >> get rid of this functionality, not the least of which is that some
>> >> (many?) deployed clusters make use of it. We decided that the most we can
>> >> do at this point is remove support for it in ceph-deploy and ceph-ansible
>> >> so that no new clusters or deployed nodes use it.
>> >>
>> >> The first PR in this effort:
>> >>
>> >> https://github.com/ceph/ceph-deploy/pull/441
>> >
>> > okay, i am closing https://github.com/ceph/ceph/pull/18638 and
>> > http://tracker.ceph.com/issues/3253
>>
>> This brings us to a limbo where we aren't supporting it in some places
>> but still do in others.
>>
>> It was disabled for ceph-deploy, but ceph-ansible wants to support it
>> still (see: https://bugzilla.redhat.com/show_bug.cgi?id=1459861 )
>
> I still haven't seen a case where custom cluster names for *daemons* are
> needed. Only for client-side $cluster.conf info for connecting.
>
>> Sebastien argues that these reasons are strong enough to keep that support in:
>>
>> - Ceph cluster on demand with containers
>
> With kubernetes, the cluster will exist in a cluster namespace, and
> daemons live in containers, so inside the container the cluster will be
> 'ceph'.
>
>> - Distributed compute nodes
>
> ?
>
>> - rbd-mirror integration as part of OSPd
>
> This is the client-side $cluster.conf for connecting to the remote
> cluster.
>
>> - Disaster scenario with OpenStack Cinder in OSPd
>
> Ditto.
>
>> The problem is that, as you can see with the ceph-disk PR just closed,
>> there are still other tools that have to implement the juggling of
>> custom cluster names all over the place, and they will hit some corner
>> case where the cluster name was not added and things will fail.
>>
>> Just recently ceph-volume hit one of these places:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1507943
>>
>> Are we going to support custom cluster names? In what
>> context/scenarios are we going to allow it?
>
> It seems like we could drop this support in ceph-volume, unless someone
> can present a compelling reason to keep it?
>
> ...
>
> I'd almost want to go a step further and change
>
> /var/lib/ceph/$type/$cluster-$id/
>
> to
>
> /var/lib/ceph/$type/$id
+1 for custom name support to be disabled from master/stable ansible releases.
And I think rbd-mirror and openstack are mostly configuration issues
that could use different conf files to talk to different clusters.
>
> In kubernetes, we're planning on bind mounting the host's
> /var/lib/ceph/$namespace/$type/$id to the container's
> /var/lib/ceph/$type/ceph-$id. It might be a good time to drop some of the
> awkward path names, though. Or is it useless churn?
>
> sage
>
>
>
>>
>>
>> >
>> >>
>> >> Background:
>> >>
>> >> The cluster name concept was added to allow multiple clusters to have
>> >> daemons coexist on the same host. At the time it was a hypothetical
>> >> requirement for a user that never actually made use of it, and the
>> >> support is kludgey:
>> >>
>> >> - default cluster name is 'ceph'
>> >> - default config is /etc/ceph/$cluster.conf, so that the normal
>> >> 'ceph.conf' still works
>> >> - daemon data paths include the cluster name,
>> >> /var/lib/ceph/osd/$cluster-$id
>> >> which is weird (but mostly people are used to it?)
>> >> - any cli command you want to touch a non-ceph cluster name
>> >> needs -C $name or --cluster $name passed to it.
>> >>
>> >> Also, as of jewel,
>> >>
>> >> - systemd only supports a single cluster per host, as defined by $CLUSTER
>> >> in /etc/{sysconfig,default}/ceph
>> >>
>> >> which you'll notice removes support for the original "requirement".
>> >>
>> >> Also note that you can get the same effect by specifying the config path
>> >> explicitly (-c /etc/ceph/foo.conf) along with the various options that
>> >> substitute $cluster in (e.g., osd_data=/var/lib/ceph/osd/$cluster-$id).
>> >>
>> >>
>> >> Crap preventing us from removing this entirely:
>> >>
>> >> - existing daemon directories for existing clusters
>> >> - various scripts parse the cluster name out of paths
>> >>
>> >>
>> >> Converting an existing cluster "foo" back to "ceph":
>> >>
>> >> - rename /etc/ceph/foo.conf -> ceph.conf
>> >> - rename /var/lib/ceph/*/foo-* -> /var/lib/ceph/*/ceph-*
>> >> - remove the CLUSTER=foo line in /etc/{default,sysconfig}/ceph
>> >> - reboot
>> >>
>> >>
>> >> Questions:
>> >>
>> >> - Does anybody on the list use a non-default cluster name?
>> >> - If so, do you have a reason not to switch back to 'ceph'?
>> >>
>> >> Thanks!
>> >> sage
>> >
>> >
>> >
>> > --
>> > Regards
>> > Kefu Chai
>>
>>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 24+ messages in thread