All of lore.kernel.org
 help / color / mirror / Atom feed
* ceph-disk is now deprecated
@ 2017-11-27 13:36 Alfredo Deza
  2017-11-27 20:07 ` Nathan Cutler
                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Alfredo Deza @ 2017-11-27 13:36 UTC (permalink / raw)
  To: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

For the upcoming Luminous release (12.2.2), ceph-disk will be
officially in 'deprecated' mode (bug fixes only). A large banner with
deprecation information has been added, which will try to raise
awareness.

We are strongly suggesting using ceph-volume for new (and old) OSD
deployments. The only current exceptions to this are encrypted OSDs
and FreeBSD systems

Encryption support is planned and will be coming soon to ceph-volume.

A few items to consider:

* ceph-disk is expected to be fully removed by the Mimic release
* Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
* ceph-ansible already fully supports ceph-volume and will soon default to it
* ceph-deploy support is planned and should be fully implemented soon


[0] http://docs.ceph.com/docs/master/ceph-volume/simple/
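
For reference, the "take over" flow documented at [0] looks roughly
like the following (a sketch only; the OSD path, id and fsid are
illustrative, and exact arguments may differ per version):

    # inspect a running ceph-disk OSD and persist its metadata as JSON
    ceph-volume simple scan /var/lib/ceph/osd/ceph-0

    # start the OSD through ceph-volume from now on, using the id and
    # fsid reported by the scan
    ceph-volume simple activate 0 6cc43680-4f6e-4feb-92ff-9c7ba204120e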

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
  2017-11-27 13:36 ceph-disk is now deprecated Alfredo Deza
@ 2017-11-27 20:07 ` Nathan Cutler
  2017-11-28  8:24   ` Fabian Grünbichler
  2017-11-28  6:56 ` Andreas Calminder
       [not found] ` <CAC-Np1z7OJNxUeso+FEB8g+7RUABkvKF58_mmXDHt4SeOTHSDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2 siblings, 1 reply; 25+ messages in thread
From: Nathan Cutler @ 2017-11-27 20:07 UTC (permalink / raw)
  To: Alfredo Deza, ceph-devel

> A few items to consider:
> 
> * ceph-disk is expected to be fully removed by the Mimic release

It's pretty standard in the software world to deprecate in one release 
and remove in the next. Maybe I'm blind, but I didn't see anything in 
the announcement explaining why deprecation *and* removal in a single 
release cycle is warranted in this case?

Nathan

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
  2017-11-27 13:36 ceph-disk is now deprecated Alfredo Deza
  2017-11-27 20:07 ` Nathan Cutler
@ 2017-11-28  6:56 ` Andreas Calminder
  2017-11-28 11:47   ` Alfredo Deza
       [not found] ` <CAC-Np1z7OJNxUeso+FEB8g+7RUABkvKF58_mmXDHt4SeOTHSDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2 siblings, 1 reply; 25+ messages in thread
From: Andreas Calminder @ 2017-11-28  6:56 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users, ceph-devel

Hello,
Thanks for the heads-up. As someone who's currently maintaining a
Jewel cluster and is in the process of setting up a shiny new Luminous
cluster (writing Ansible roles along the way to make the setup
reproducible), I immediately proceeded to look into ceph-volume and I
have some questions/concerns, mainly due to my own setup, which is one
osd per device, simple.

Running ceph-volume in Luminous 12.2.1 suggests there's only the lvm
subcommand available, and the man-page only covers lvm. The online
documentation http://docs.ceph.com/docs/master/ceph-volume/ lists
simple, however it's lacking some of the ceph-disk commands, like
'prepare', which seems crucial in the 'simple' scenario. Does the
ceph-disk deprecation imply that lvm is mandatory for using devices
with ceph, or are the documentation and tool features just lagging
behind, i.e. the 'simple' parts will be added well in time for Mimic
and during the Luminous lifecycle? Or am I missing something?

Best regards,
Andreas

On 27 November 2017 at 14:36, Alfredo Deza <adeza@redhat.com> wrote:
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
>
> We are strongly suggesting using ceph-volume for new (and old) OSD
> deployments. The only current exceptions to this are encrypted OSDs
> and FreeBSD systems
>
> Encryption support is planned and will be coming soon to ceph-volume.
>
> A few items to consider:
>
> * ceph-disk is expected to be fully removed by the Mimic release
> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
> * ceph-ansible already fully supports ceph-volume and will soon default to it
> * ceph-deploy support is planned and should be fully implemented soon
>
>
> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found] ` <CAC-Np1z7OJNxUeso+FEB8g+7RUABkvKF58_mmXDHt4SeOTHSDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-28  8:12   ` Wido den Hollander
  2017-11-28  8:39     ` [ceph-users] " Piotr Dałek
       [not found]     ` <1821337488.5487.1511856776936-4q+tAGQs9zLCXE5Mi8V/gA@public.gmane.org>
  2017-11-29 12:51   ` Yoann Moulin
  1 sibling, 2 replies; 25+ messages in thread
From: Wido den Hollander @ 2017-11-28  8:12 UTC (permalink / raw)
  To: Alfredo Deza, ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel


> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> 
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
> 

As much as I like ceph-volume and the work being done, is it really a good idea to use a minor release to deprecate a tool?

Can't we just introduce ceph-volume and deprecate ceph-disk at the release of M? Because when you upgrade to 12.2.2 suddenly existing integrations will have deprecation warnings being thrown at them while they haven't upgraded to a new major version.

As ceph-deploy doesn't support ceph-disk either I don't think it's a good idea to deprecate it right now.

How do others feel about this?

Wido

> We are strongly suggesting using ceph-volume for new (and old) OSD
> deployments. The only current exceptions to this are encrypted OSDs
> and FreeBSD systems
> 
> Encryption support is planned and will be coming soon to ceph-volume.
> 
> A few items to consider:
> 
> * ceph-disk is expected to be fully removed by the Mimic release
> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
> * ceph-ansible already fully supports ceph-volume and will soon default to it
> * ceph-deploy support is planned and should be fully implemented soon
> 
> 
> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
  2017-11-27 20:07 ` Nathan Cutler
@ 2017-11-28  8:24   ` Fabian Grünbichler
       [not found]     ` <CALWuvY9mX0Pdr4aMwPi=Dqw0qng959Z5bg5hs5dOxTfSZ+QH4Q@mail.gmail.com>
  0 siblings, 1 reply; 25+ messages in thread
From: Fabian Grünbichler @ 2017-11-28  8:24 UTC (permalink / raw)
  To: Nathan Cutler; +Cc: Alfredo Deza, ceph-devel

On Mon, Nov 27, 2017 at 09:07:02PM +0100, Nathan Cutler wrote:
> > A few items to consider:
> > 
> > * ceph-disk is expected to be fully removed by the Mimic release
> 
> It's pretty standard in the software world to deprecate in one release and
> remove in the next. Maybe I'm blind, but I didn't see anything in the
> announcement explaining why deprecation *and* removal in a single release
> cycle is warranted in this case?

Especially since ceph-volume has been undergoing lots of changes in the
recent weeks and months, so I highly doubt it has seen enough testing to
support such an abrupt change[1].

Maybe removing the deprecation warning for now, highlighting ceph-volume
in the release notes for 12.2.2 and then re-adding the deprecation
warning later on in the Luminous cycle (or even just the Mimic RCs) is
more advisable?

1: 90 commits touching src/ceph-volume since v12.2.1, not counting those
having "test" in the commit subject - that's about 12% of all non-"test"
commits currently slated for v12.2.2!


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [ceph-users] ceph-disk is now deprecated
  2017-11-28  8:12   ` Wido den Hollander
@ 2017-11-28  8:39     ` Piotr Dałek
       [not found]       ` <4c4b5589-b1ae-3f8b-e900-af0f2895fbc9-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>
       [not found]     ` <1821337488.5487.1511856776936-4q+tAGQs9zLCXE5Mi8V/gA@public.gmane.org>
  1 sibling, 1 reply; 25+ messages in thread
From: Piotr Dałek @ 2017-11-28  8:39 UTC (permalink / raw)
  To: ceph-users, ceph-devel

On 17-11-28 09:12 AM, Wido den Hollander wrote:
> 
>> On 27 November 2017 at 14:36, Alfredo Deza <adeza@redhat.com> wrote:
>>
>>
>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>> officially in 'deprecated' mode (bug fixes only). A large banner with
>> deprecation information has been added, which will try to raise
>> awareness.
>>
> 
> As much as I like ceph-volume and the work being done, is it really a good idea to use a minor release to deprecate a tool?
> 
> Can't we just introduce ceph-volume and deprecate ceph-disk at the release of M? Because when you upgrade to 12.2.2 suddenly existing integrations will have deprecation warnings being thrown at them while they haven't upgraded to a new major version.
> 
> As ceph-deploy doesn't support ceph-disk either I don't think it's a good idea to deprecate it right now.
> 
> How do others feel about this?

Same, although we don't have a *big* problem with this (we haven't upgraded 
to Luminous yet, so we can skip to next point release and move to 
ceph-volume together with Luminous). It's still a problem, though - now we 
have more of our infrastructure to migrate and test, meaning even more 
delays in production upgrades.

-- 
Piotr Dałek
piotr.dalek@corp.ovh.com
https://www.ovh.com/us/

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
  2017-11-28  6:56 ` Andreas Calminder
@ 2017-11-28 11:47   ` Alfredo Deza
       [not found]     ` <CAC-Np1wp1M=qapRO0sCr1HMLpZ2zgLoh-GVv3p01k2DqjxqaOw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: Alfredo Deza @ 2017-11-28 11:47 UTC (permalink / raw)
  To: Andreas Calminder; +Cc: ceph-users, ceph-devel

On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder
<andreas.calminder@klarna.com> wrote:
> Hello,
> Thanks for the heads-up. As someone who's currently maintaining a
> Jewel cluster and are in the process of setting up a shiny new
> Luminous cluster and writing Ansible roles in the process to make
> setup reproducible. I immediately proceeded to look into ceph-volume
> and I've some questions/concerns, mainly due to my own setup, which is
> one osd per device, simple.
>
> Running ceph-volume in Luminous 12.2.1 suggests there's only the lvm
> subcommand available and the man-page only covers lvm. The online
> documentation http://docs.ceph.com/docs/master/ceph-volume/ lists
> simple however it's lacking some of the ceph-disk commands, like
> 'prepare' which seems crucial in the 'simple' scenario. Does the
> ceph-disk deprecation imply that lvm is mandatory for using devices
> with ceph or is just the documentation and tool features lagging
> behind, I.E the 'simple' parts will be added well in time for Mimic
> and during the Luminous lifecycle? Or am I missing something?

In your case, all your existing OSDs will be able to be managed by
`ceph-volume` once scanned and the information persisted. So anything
from Jewel should still work. For 12.2.1 you are right, that command
is not yet available; it will be present in 12.2.2.

For the `simple` sub-command there is no prepare/activate; it is just
a way of taking over management of an already deployed OSD. For *new*
OSDs, yes, we are implying that we are going only with Logical Volumes
for data devices. It is a bit more flexible for Journals, block.db,
and block.wal, as those can be either logical volumes or GPT
partitions (ceph-volume will not create these for you).
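
To make that concrete, a new bluestore OSD with its data on a logical
volume and block.db on a pre-created GPT partition could look roughly
like this (a sketch only; the vg/lv name and the partition are
assumptions for illustration):

    # data LV prepared beforehand; block.db points at an existing GPT partition
    ceph-volume lvm create --bluestore --data vg-osd0/data-osd0 --block.db /dev/sdb1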

>
> Best regards,
> Andreas
>
> On 27 November 2017 at 14:36, Alfredo Deza <adeza@redhat.com> wrote:
>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>> officially in 'deprecated' mode (bug fixes only). A large banner with
>> deprecation information has been added, which will try to raise
>> awareness.
>>
>> We are strongly suggesting using ceph-volume for new (and old) OSD
>> deployments. The only current exceptions to this are encrypted OSDs
>> and FreeBSD systems
>>
>> Encryption support is planned and will be coming soon to ceph-volume.
>>
>> A few items to consider:
>>
>> * ceph-disk is expected to be fully removed by the Mimic release
>> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
>> * ceph-ansible already fully supports ceph-volume and will soon default to it
>> * ceph-deploy support is planned and should be fully implemented soon
>>
>>
>> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]     ` <1821337488.5487.1511856776936-4q+tAGQs9zLCXE5Mi8V/gA@public.gmane.org>
@ 2017-11-28 11:54       ` Alfredo Deza
       [not found]         ` <CAC-Np1zfMoqtW2M75ZAys1_NTEkns6MJ6HimuRR3nFhbMVzjPg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: Alfredo Deza @ 2017-11-28 11:54 UTC (permalink / raw)
  To: Wido den Hollander; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <wido-fspyXLx8qC4@public.gmane.org> wrote:
>
>> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>
>>
>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>> officially in 'deprecated' mode (bug fixes only). A large banner with
>> deprecation information has been added, which will try to raise
>> awareness.
>>
>
> As much as I like ceph-volume and the work being done, is it really a good idea to use a minor release to deprecate a tool?
>
> Can't we just introduce ceph-volume and deprecate ceph-disk at the release of M? Because when you upgrade to 12.2.2 suddenly existing integrations will have deprecation warnings being thrown at them while they haven't upgraded to a new major version.

ceph-volume has been present since the very first release of Luminous,
the deprecation warning in ceph-disk is the only "new" thing
introduced for 12.2.2.

>
> As ceph-deploy doesn't support ceph-disk either I don't think it's a good idea to deprecate it right now.

ceph-deploy work is being done to support ceph-volume exclusively
(ceph-disk support is dropped fully), which will mean a change in its
API in a non-backwards-compatible way. A major version change in
ceph-deploy, along with a bunch of documentation, is being worked on
to allow users to transition to it.

>
> How do others feel about this?
>
> Wido
>
>> We are strongly suggesting using ceph-volume for new (and old) OSD
>> deployments. The only current exceptions to this are encrypted OSDs
>> and FreeBSD systems
>>
>> Encryption support is planned and will be coming soon to ceph-volume.
>>
>> A few items to consider:
>>
>> * ceph-disk is expected to be fully removed by the Mimic release
>> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
>> * ceph-ansible already fully supports ceph-volume and will soon default to it
>> * ceph-deploy support is planned and should be fully implemented soon
>>
>>
>> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]     ` <CALWuvY9mX0Pdr4aMwPi=Dqw0qng959Z5bg5hs5dOxTfSZ+QH4Q@mail.gmail.com>
@ 2017-11-28 12:00       ` Alfredo Deza
  0 siblings, 0 replies; 25+ messages in thread
From: Alfredo Deza @ 2017-11-28 12:00 UTC (permalink / raw)
  To: nokia ceph; +Cc: Nathan Cutler, ceph-devel

On Tue, Nov 28, 2017 at 3:42 AM, nokia ceph <nokiacephusers@gmail.com> wrote:
> Hello,
>
> Can you share the repo url (https://shaman.ceph.com/repos/ceph/) to download
> these rpms to test this behavior? Currently I'm testing with the 12.2.1 version.

You might want to take a look at: https://shaman.ceph.com/repos/ceph/luminous/

And see what SHA1 (or latest build) you want to try.

>
>  Also, please let us know when we can expect the official 12.2.2 release?
>
> Which is more recommended for bluestore, lvm or simple? In terms of
> performance / stability?

The `simple` command would just be for taking over existing OSDs, in
whatever manner you've deployed them (even if the OSDs were created
manually).

For bluestore (the default objectstore in 12.2.2), `ceph-volume lvm`
is what you want.

>
> As per
> http://docs.ceph.com/docs/master/ceph-volume/simple/#ceph-volume-simple I
> beleive for existing OSD's created with ceph-disk to convert to simple, I
> need to trigger below procedure.
>
> http://docs.ceph.com/docs/master/ceph-volume/simple/activate/#ceph-volume-simple-activate
>
> So by doing this, do we need to disable the system-ceph\x2ddisk.slice service?

ceph-volume will disable all ceph-disk systemd units for you, so that
UDEV triggers will no longer be a problem.

>
> Thanks
>
>
>
> On Tue, Nov 28, 2017 at 1:54 PM, Fabian Grünbichler
> <f.gruenbichler@proxmox.com> wrote:
>>
>> On Mon, Nov 27, 2017 at 09:07:02PM +0100, Nathan Cutler wrote:
>> > > A few items to consider:
>> > >
>> > > * ceph-disk is expected to be fully removed by the Mimic release
>> >
>> > It's pretty standard in the software world to deprecate in one release
>> > and
>> > remove in the next. Maybe I'm blind, but I didn't see anything in the
>> > announcement explaining why deprecation *and* removal in a single
>> > release
>> > cycle is warranted in this case?
>>
>> Especially since ceph-volume has been undergoing lots of changes in the
>> recent weeks and months, so I highly doubt it has seen enough testing to
>> support such an abrupt change[1].
>>
>> Maybe removing the deprecation warning for now, highlighting ceph-volume
>> in the release notes for 12.2.2 and then re-adding the deprecation
>> warning later on in the Luminous cycle (or even just the Mimic RCs) is
>> more advisable?
>>
>> 1: 90 commits touching src/ceph-volume since v12.2.1, not counting those
>> having "test" in the commit subject - that's about 12% of all non-"test"
>> commits currently slated for v12.2.2!
>>
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]         ` <CAC-Np1zfMoqtW2M75ZAys1_NTEkns6MJ6HimuRR3nFhbMVzjPg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-28 12:20           ` Maged Mokhtar
  2017-11-28 12:27           ` Wido den Hollander
  2017-11-28 12:38           ` Joao Eduardo Luis
  2 siblings, 0 replies; 25+ messages in thread
From: Maged Mokhtar @ 2017-11-28 12:20 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel


[-- Attachment #1.1: Type: text/plain, Size: 2672 bytes --]

I tend to agree with Wido. Many of us still rely on ceph-disk and hope
to see it live a little longer.

Maged 

On 2017-11-28 13:54, Alfredo Deza wrote:

> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <wido-fspyXLx8qC4@public.gmane.org> wrote:
>>
>>> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>>
>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>> deprecation information has been added, which will try to raise
>>> awareness.
>>>
>> As much as I like ceph-volume and the work being done, is it really a good idea to use a minor release to deprecate a tool?
>>
>> Can't we just introduce ceph-volume and deprecate ceph-disk at the release of M? Because when you upgrade to 12.2.2 suddenly existing integrations will have deprecation warnings being thrown at them while they haven't upgraded to a new major version.
>
> ceph-volume has been present since the very first release of Luminous,
> the deprecation warning in ceph-disk is the only "new" thing
> introduced for 12.2.2.
>
>> As ceph-deploy doesn't support ceph-disk either I don't think it's a good idea to deprecate it right now.
>
> ceph-deploy work is being done to support ceph-volume exclusively
> (ceph-disk support is dropped fully), which will mean a change in its
> API in a non-backwards compatible
> way. A major version change in ceph-deploy, documentation, and a bunch
> of documentation is being worked on to allow users to transition to
> it.
>
>> How do others feel about this?
>>
>> Wido
>>
>>> We are strongly suggesting using ceph-volume for new (and old) OSD
>>> deployments. The only current exceptions to this are encrypted OSDs
>>> and FreeBSD systems
>>>
>>> Encryption support is planned and will be coming soon to ceph-volume.
>>>
>>> A few items to consider:
>>>
>>> * ceph-disk is expected to be fully removed by the Mimic release
>>> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
>>> * ceph-ansible already fully supports ceph-volume and will soon default to it
>>> * ceph-deploy support is planned and should be fully implemented soon
>>>
>>> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

 


[-- Attachment #1.2: Type: text/html, Size: 4569 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]     ` <CAC-Np1wp1M=qapRO0sCr1HMLpZ2zgLoh-GVv3p01k2DqjxqaOw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-28 12:22       ` Andreas Calminder
  2017-11-28 12:37         ` Alfredo Deza
  0 siblings, 1 reply; 25+ messages in thread
From: Andreas Calminder @ 2017-11-28 12:22 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

> For the `simple` sub-command there is no prepare/activate, it is just
> a way of taking over management of an already deployed OSD. For *new*
> OSDs, yes, we are implying that we are going only with Logical Volumes
> for data devices. It is a bit more flexible for Journals, block.db,
> and block.wal as those
> can be either logical volumes or GPT partitions (ceph-volume will not
> create these for you).

Ok, so if I understand this correctly, for future one-device-per-osd
setups I would create a volume group per device before handing it over
to ceph-volume, to get the "same" functionality as ceph-disk. I
understand the flexibility aspect of this; my setup will have an extra
step setting up lvm for my osd devices, which is fine. Apologies if I
missed the information, but is it possible to get command output as
json, something like "ceph-disk list --format json"? It's quite
helpful when setting up stuff through ansible.

Thanks,
Andreas

On 28 November 2017 at 12:47, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder
> <andreas.calminder-HQhCbu9nrx3QT0dZR+AlfA@public.gmane.org> wrote:
>> Hello,
>> Thanks for the heads-up. As someone who's currently maintaining a
>> Jewel cluster and are in the process of setting up a shiny new
>> Luminous cluster and writing Ansible roles in the process to make
>> setup reproducible. I immediately proceeded to look into ceph-volume
>> and I've some questions/concerns, mainly due to my own setup, which is
>> one osd per device, simple.
>>
>> Running ceph-volume in Luminous 12.2.1 suggests there's only the lvm
>> subcommand available and the man-page only covers lvm. The online
>> documentation http://docs.ceph.com/docs/master/ceph-volume/ lists
>> simple however it's lacking some of the ceph-disk commands, like
>> 'prepare' which seems crucial in the 'simple' scenario. Does the
>> ceph-disk deprecation imply that lvm is mandatory for using devices
>> with ceph or is just the documentation and tool features lagging
>> behind, I.E the 'simple' parts will be added well in time for Mimic
>> and during the Luminous lifecycle? Or am I missing something?
>
> In your case, all your existing OSDs will be able to be managed by
> `ceph-volume` once scanned and the information persisted. So anything
> from Jewel should still work. For 12.2.1 you are right, that command
> is not yet available, it will be present in 12.2.2
>
> For the `simple` sub-command there is no prepare/activate, it is just
> a way of taking over management of an already deployed OSD. For *new*
> OSDs, yes, we are implying that we are going only with Logical Volumes
> for data devices. It is a bit more flexible for Journals, block.db,
> and block.wal as those
> can be either logical volumes or GPT partitions (ceph-volume will not
> create these for you).
>
>>
>> Best regards,
>> Andreas
>>
>> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>> deprecation information has been added, which will try to raise
>>> awareness.
>>>
>>> We are strongly suggesting using ceph-volume for new (and old) OSD
>>> deployments. The only current exceptions to this are encrypted OSDs
>>> and FreeBSD systems
>>>
>>> Encryption support is planned and will be coming soon to ceph-volume.
>>>
>>> A few items to consider:
>>>
>>> * ceph-disk is expected to be fully removed by the Mimic release
>>> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
>>> * ceph-ansible already fully supports ceph-volume and will soon default to it
>>> * ceph-deploy support is planned and should be fully implemented soon
>>>
>>>
>>> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]         ` <CAC-Np1zfMoqtW2M75ZAys1_NTEkns6MJ6HimuRR3nFhbMVzjPg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2017-11-28 12:20           ` Maged Mokhtar
@ 2017-11-28 12:27           ` Wido den Hollander
  2017-11-28 12:38           ` Joao Eduardo Luis
  2 siblings, 0 replies; 25+ messages in thread
From: Wido den Hollander @ 2017-11-28 12:27 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel


> On 28 November 2017 at 12:54, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> 
> 
> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <wido-fspyXLx8qC4@public.gmane.org> wrote:
> >
> >> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> >>
> >>
> >> For the upcoming Luminous release (12.2.2), ceph-disk will be
> >> officially in 'deprecated' mode (bug fixes only). A large banner with
> >> deprecation information has been added, which will try to raise
> >> awareness.
> >>
> >
> > As much as I like ceph-volume and the work being done, is it really a good idea to use a minor release to deprecate a tool?
> >
> > Can't we just introduce ceph-volume and deprecate ceph-disk at the release of M? Because when you upgrade to 12.2.2 suddenly existing integrations will have deprecation warnings being thrown at them while they haven't upgraded to a new major version.
> 
> ceph-volume has been present since the very first release of Luminous,
> the deprecation warning in ceph-disk is the only "new" thing
> introduced for 12.2.2.
> 

Yes, but deprecating a functional tool in a minor release? Yes, I am aware that ceph-volume works, but suddenly during a release saying it's now deprecated?

Why can't that be moved to the M release? Leave ceph-disk as-is and deprecate it in master.

Again, I really do like ceph-volume! Great work!

Wido

> >
> > As ceph-deploy doesn't support ceph-disk either I don't think it's a good idea to deprecate it right now.
> 
> ceph-deploy work is being done to support ceph-volume exclusively
> (ceph-disk support is dropped fully), which will mean a change in its
> API in a non-backwards compatible
> way. A major version change in ceph-deploy, documentation, and a bunch
> of documentation is being worked on to allow users to transition to
> it.
> 
> >
> > How do others feel about this?
> >
> > Wido
> >
> >> We are strongly suggesting using ceph-volume for new (and old) OSD
> >> deployments. The only current exceptions to this are encrypted OSDs
> >> and FreeBSD systems
> >>
> >> Encryption support is planned and will be coming soon to ceph-volume.
> >>
> >> A few items to consider:
> >>
> >> * ceph-disk is expected to be fully removed by the Mimic release
> >> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
> >> * ceph-ansible already fully supports ceph-volume and will soon default to it
> >> * ceph-deploy support is planned and should be fully implemented soon
> >>
> >>
> >> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]       ` <4c4b5589-b1ae-3f8b-e900-af0f2895fbc9-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>
@ 2017-11-28 12:32         ` Alfredo Deza
       [not found]           ` <CAC-Np1zsf+uqFi+ZdR7K3=re5ODpJvWFrTX-JU_8DkkyVGOz7A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: Alfredo Deza @ 2017-11-28 12:32 UTC (permalink / raw)
  To: Piotr Dałek; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On Tue, Nov 28, 2017 at 3:39 AM, Piotr Dałek <piotr.dalek@corp.ovh.com> wrote:
> On 17-11-28 09:12 AM, Wido den Hollander wrote:
>>
>>
>>> On 27 November 2017 at 14:36, Alfredo Deza <adeza@redhat.com> wrote:
>>>
>>>
>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>> deprecation information has been added, which will try to raise
>>> awareness.
>>>
>>
>> As much as I like ceph-volume and the work being done, is it really a good
>> idea to use a minor release to deprecate a tool?
>>
>> Can't we just introduce ceph-volume and deprecate ceph-disk at the release
>> of M? Because when you upgrade to 12.2.2 suddenly existing integrations will
>> have deprecation warnings being thrown at them while they haven't upgraded
>> to a new major version.
>>
>> As ceph-deploy doesn't support ceph-disk either I don't think it's a good
>> idea to deprecate it right now.
>>
>> How do others feel about this?
>
>
> Same, although we don't have a *big* problem with this (we haven't upgraded
> to Luminous yet, so we can skip to next point release and move to
> ceph-volume together with Luminous). It's still a problem, though - now we
> have more of our infrastructure to migrate and test, meaning even more
> delays in production upgrades.

I understand that this would involve a significant effort to fully
port over and drop ceph-disk entirely, and I don't think that dropping
ceph-disk in Mimic is set in stone (yet).

We could treat Luminous as a "soft" deprecation where ceph-disk will
still receive bug-fixes, and then in Mimic, it would be frozen - with
no updates whatsoever.

At some point a migration will have to happen for older clusters,
which is why we've added support in ceph-volume for existing OSDs. An
upgrade to Luminous doesn't mean ceph-disk will not work; the only
thing that has been added to ceph-disk is a deprecation warning.


>
> --
> Piotr Dałek
> piotr.dalek@corp.ovh.com
> https://www.ovh.com/us/
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
  2017-11-28 12:22       ` Andreas Calminder
@ 2017-11-28 12:37         ` Alfredo Deza
       [not found]           ` <CAC-Np1xqg2-wAguu8AO6LSbj41uOGSBZTgdgL45OAi5M080grw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: Alfredo Deza @ 2017-11-28 12:37 UTC (permalink / raw)
  To: Andreas Calminder; +Cc: ceph-users, ceph-devel

On Tue, Nov 28, 2017 at 7:22 AM, Andreas Calminder
<andreas.calminder@klarna.com> wrote:
>> For the `simple` sub-command there is no prepare/activate, it is just
>> a way of taking over management of an already deployed OSD. For *new*
>> OSDs, yes, we are implying that we are going only with Logical Volumes
>> for data devices. It is a bit more flexible for Journals, block.db,
>> and block.wal as those
>> can be either logical volumes or GPT partitions (ceph-volume will not
>> create these for you).
>
> Ok, so if I understand this correctly, for future one-device-per-osd
> setups I would create a volume group per device before handing it over
> to ceph-volume, to get the "same" functionality as ceph-disk. I
> understand the flexibility aspect of this, my setup will have an extra
> step setting up lvm for my osd devices which is fine.

If you don't require any special configuration for your logical volume
and don't mind naive LV handling, then ceph-volume can create the
logical volume for you from either a partition or a device (for data),
although it will still require a GPT partition for Journals,
block.wal, and block.db.

For example:

    ceph-volume lvm create --data /path/to/device

This would create a new volume group with the device and then produce
a single LV from it.
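
If you prefer to keep control of the volume group yourself (the
one-vg-per-device setup you describe), the manual route would look
roughly like this (a sketch only; the device and vg/lv names are made
up for illustration):

    vgcreate ceph-sdb /dev/sdb
    lvcreate -n osd-data -l 100%FREE ceph-sdb
    ceph-volume lvm create --data ceph-sdb/osd-data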

> Apologies if I
> missed the information, but is it possible to get command output as
> json, something like "ceph-disk list --format json" since it's quite
> helpful while setting up stuff through ansible

Yes, this is implemented in both "pretty" and JSON formats:
http://docs.ceph.com/docs/master/ceph-volume/lvm/list/#ceph-volume-lvm-list
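
For example (a sketch; the --format flag is the one described in the
linked docs):

    ceph-volume lvm list --format=json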
>
> Thanks,
> Andreas
>
> On 28 November 2017 at 12:47, Alfredo Deza <adeza@redhat.com> wrote:
>> On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder
>> <andreas.calminder@klarna.com> wrote:
>>> Hello,
>>> Thanks for the heads-up. As someone who's currently maintaining a
>>> Jewel cluster and are in the process of setting up a shiny new
>>> Luminous cluster and writing Ansible roles in the process to make
>>> setup reproducible. I immediately proceeded to look into ceph-volume
>>> and I've some questions/concerns, mainly due to my own setup, which is
>>> one osd per device, simple.
>>>
>>> Running ceph-volume in Luminous 12.2.1 suggests there's only the lvm
>>> subcommand available and the man-page only covers lvm. The online
>>> documentation http://docs.ceph.com/docs/master/ceph-volume/ lists
>>> simple however it's lacking some of the ceph-disk commands, like
>>> 'prepare' which seems crucial in the 'simple' scenario. Does the
>>> ceph-disk deprecation imply that lvm is mandatory for using devices
>>> with ceph or is just the documentation and tool features lagging
>>> behind, I.E the 'simple' parts will be added well in time for Mimic
>>> and during the Luminous lifecycle? Or am I missing something?
>>
>> In your case, all your existing OSDs will be able to be managed by
>> `ceph-volume` once scanned and the information persisted. So anything
>> from Jewel should still work. For 12.2.1 you are right, that command
>> is not yet available, it will be present in 12.2.2
>>
>> For the `simple` sub-command there is no prepare/activate, it is just
>> a way of taking over management of an already deployed OSD. For *new*
>> OSDs, yes, we are implying that we are going only with Logical Volumes
>> for data devices. It is a bit more flexible for Journals, block.db,
>> and block.wal as those
>> can be either logical volumes or GPT partitions (ceph-volume will not
>> create these for you).
>>
>>>
>>> Best regards,
>>> Andreas
>>>
>>> On 27 November 2017 at 14:36, Alfredo Deza <adeza@redhat.com> wrote:
>>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>>> deprecation information has been added, which will try to raise
>>>> awareness.
>>>>
>>>> We are strongly suggesting using ceph-volume for new (and old) OSD
>>>> deployments. The only current exceptions to this are encrypted OSDs
>>>> and FreeBSD systems
>>>>
>>>> Encryption support is planned and will be coming soon to ceph-volume.
>>>>
>>>> A few items to consider:
>>>>
>>>> * ceph-disk is expected to be fully removed by the Mimic release
>>>> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
>>>> * ceph-ansible already fully supports ceph-volume and will soon default to it
>>>> * ceph-deploy support is planned and should be fully implemented soon
>>>>
>>>>
>>>> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]         ` <CAC-Np1zfMoqtW2M75ZAys1_NTEkns6MJ6HimuRR3nFhbMVzjPg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2017-11-28 12:20           ` Maged Mokhtar
  2017-11-28 12:27           ` Wido den Hollander
@ 2017-11-28 12:38           ` Joao Eduardo Luis
       [not found]             ` <1db3fa50-d022-bf37-eb4b-098882ea0984-l3A5Bk7waGM@public.gmane.org>
  2 siblings, 1 reply; 25+ messages in thread
From: Joao Eduardo Luis @ 2017-11-28 12:38 UTC (permalink / raw)
  To: Alfredo Deza, Wido den Hollander
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On 11/28/2017 11:54 AM, Alfredo Deza wrote:
> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <wido-fspyXLx8qC4@public.gmane.org> wrote:
>>
>>> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>>
>>>
>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>> deprecation information has been added, which will try to raise
>>> awareness.
>>>
>>
>> As much as I like ceph-volume and the work being done, is it really a good idea to use a minor release to deprecate a tool?
>>
>> Can't we just introduce ceph-volume and deprecate ceph-disk at the release of M? Because when you upgrade to 12.2.2 suddenly existing integrations will have deprecation warnings being thrown at them while they haven't upgraded to a new major version.
> 
> ceph-volume has been present since the very first release of Luminous,
> the deprecation warning in ceph-disk is the only "new" thing
> introduced for 12.2.2.

I think Wido's question still stands: why can't ceph-disk be deprecated 
solely in M, and removed by N?

I get that it probably seems nuts to support both ceph-disk and
ceph-volume, and that deprecating and removing in (less than) a full
release cycle will force people to actually move from one to the
other. But we're also doing it when we're roughly 4 months away from
Mimic being frozen.

This is the sort of last-minute, core change that is not expected from
a project that should be as mature as Ceph. This is not some internal
feature that users won't notice - we're effectively changing the way
users deploy and orchestrate their clusters.


   -Joao

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]             ` <1db3fa50-d022-bf37-eb4b-098882ea0984-l3A5Bk7waGM@public.gmane.org>
@ 2017-11-28 12:52               ` Alfredo Deza
       [not found]                 ` <CAC-Np1ynRyeOChGK2k_oBAx0A7XcAFkMDMrxVHZhN-pjGXfAJw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: Alfredo Deza @ 2017-11-28 12:52 UTC (permalink / raw)
  To: Joao Eduardo Luis; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On Tue, Nov 28, 2017 at 7:38 AM, Joao Eduardo Luis <joao-l3A5Bk7waGM@public.gmane.org> wrote:
> On 11/28/2017 11:54 AM, Alfredo Deza wrote:
>>
>> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <wido-fspyXLx8qC4@public.gmane.org> wrote:
>>>
>>>
>>>> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>>>
>>>>
>>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>>> deprecation information has been added, which will try to raise
>>>> awareness.
>>>>
>>>
>>> As much as I like ceph-volume and the work being done, is it really a
>>> good idea to use a minor release to deprecate a tool?
>>>
>>> Can't we just introduce ceph-volume and deprecate ceph-disk at the
>>> release of M? Because when you upgrade to 12.2.2 suddenly existing
>>> integrations will have deprecation warnings being thrown at them while they
>>> haven't upgraded to a new major version.
>>
>>
>> ceph-volume has been present since the very first release of Luminous,
>> the deprecation warning in ceph-disk is the only "new" thing
>> introduced for 12.2.2.
>
>
> I think Wido's question still stands: why can't ceph-disk be deprecated
> solely in M, and removed by N?

Like I mentioned, I don't think this is set in stone (yet), but it was
the idea from the beginning (see the Oct 9th thread "killing
ceph-disk"), and I don't think it would be terribly bad to keep
ceph-disk in Mimic, but fully frozen, with no updates or bug fixes.
Full removal would then come in N.

The deprecation warnings need to stay for Luminous though.

>
> I get that it probably seems nuts to support ceph-disk and ceph-volume; and
> by deprecating and removing in (less than) a full release cycle will force
> people to actually move from one to the other. But we're also doing it when
> roughly 4 months away from Mimic being frozen.
>
> This is the sort of last minute overall, core, changes that are not expected
> from a project that should be as mature as Ceph. This is not some internal
> feature that users won't notice - we're effectively changing the way users
> deploy and orchestrate their clusters.
>
>
>   -Joao

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]           ` <CAC-Np1xqg2-wAguu8AO6LSbj41uOGSBZTgdgL45OAi5M080grw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-28 13:14             ` Andreas Calminder
  0 siblings, 0 replies; 25+ messages in thread
From: Andreas Calminder @ 2017-11-28 13:14 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

Thanks!
I'll start looking into rebuilding my roles once 12.2.2 is out then.

On 28 November 2017 at 13:37, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
> On Tue, Nov 28, 2017 at 7:22 AM, Andreas Calminder
> <andreas.calminder-HQhCbu9nrx3QT0dZR+AlfA@public.gmane.org> wrote:
>>> For the `simple` sub-command there is no prepare/activate, it is just
>>> a way of taking over management of an already deployed OSD. For *new*
>>> OSDs, yes, we are implying that we are going only with Logical Volumes
>>> for data devices. It is a bit more flexible for Journals, block.db,
>>> and block.wal as those
>>> can be either logical volumes or GPT partitions (ceph-volume will not
>>> create these for you).
>>
>> Ok, so if I understand this correctly, for future one-device-per-osd
>> setups I would create a volume group per device before handing it over
>> to ceph-volume, to get the "same" functionality as ceph-disk. I
>> understand the flexibility aspect of this, my setup will have an extra
>> step setting up lvm for my osd devices which is fine.
>
> If you don't require any special configuration for your logical volume
> and don't mind a naive LV handling, then ceph-volume can create
> the logical volume for you from either a partition or a device (for
> data), although it will still require a GPT partition for Journals,
> block.wal, and block.db
>
> For example:
>
>     ceph-volume lvm create --data /path/to/device
>
> Would create a new volume group with the device and then produce a
> single LV from it.
>
>> Apologies if I
>> missed the information, but is it possible to get command output as
>> json, something like "ceph-disk list --format json" since it's quite
>> helpful while setting up stuff through ansible
>
> Yes, this is implemented in both "pretty" and JSON formats:
> http://docs.ceph.com/docs/master/ceph-volume/lvm/list/#ceph-volume-lvm-list
>>
>> Thanks,
>> Andreas
>>
>> On 28 November 2017 at 12:47, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>> On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder
>>> <andreas.calminder-HQhCbu9nrx3QT0dZR+AlfA@public.gmane.org> wrote:
>>>> Hello,
>>>> Thanks for the heads-up. As someone who's currently maintaining a
>>>> Jewel cluster and are in the process of setting up a shiny new
>>>> Luminous cluster and writing Ansible roles in the process to make
>>>> setup reproducible. I immediately proceeded to look into ceph-volume
>>>> and I've some questions/concerns, mainly due to my own setup, which is
>>>> one osd per device, simple.
>>>>
>>>> Running ceph-volume in Luminous 12.2.1 suggests there's only the lvm
>>>> subcommand available and the man-page only covers lvm. The online
>>>> documentation http://docs.ceph.com/docs/master/ceph-volume/ lists
>>>> simple however it's lacking some of the ceph-disk commands, like
>>>> 'prepare' which seems crucial in the 'simple' scenario. Does the
>>>> ceph-disk deprecation imply that lvm is mandatory for using devices
>>>> with ceph or is just the documentation and tool features lagging
>>>> behind, I.E the 'simple' parts will be added well in time for Mimic
>>>> and during the Luminous lifecycle? Or am I missing something?
>>>
>>> In your case, all your existing OSDs will be able to be managed by
>>> `ceph-volume` once scanned and the information persisted. So anything
>>> from Jewel should still work. For 12.2.1 you are right, that command
>>> is not yet available, it will be present in 12.2.2
>>>
>>> For the `simple` sub-command there is no prepare/activate, it is just
>>> a way of taking over management of an already deployed OSD. For *new*
>>> OSDs, yes, we are implying that we are going only with Logical Volumes
>>> for data devices. It is a bit more flexible for Journals, block.db,
>>> and block.wal as those
>>> can be either logical volumes or GPT partitions (ceph-volume will not
>>> create these for you).
>>>
>>>>
>>>> Best regards,
>>>> Andreas
>>>>
>>>> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>>>> deprecation information has been added, which will try to raise
>>>>> awareness.
>>>>>
>>>>> We are strongly suggesting using ceph-volume for new (and old) OSD
>>>>> deployments. The only current exceptions to this are encrypted OSDs
>>>>> and FreeBSD systems
>>>>>
>>>>> Encryption support is planned and will be coming soon to ceph-volume.
>>>>>
>>>>> A few items to consider:
>>>>>
>>>>> * ceph-disk is expected to be fully removed by the Mimic release
>>>>> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
>>>>> * ceph-ansible already fully supports ceph-volume and will soon default to it
>>>>> * ceph-deploy support is planned and should be fully implemented soon
>>>>>
>>>>>
>>>>> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
>>>>> --
>>>>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>>>>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>>>>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]                 ` <CAC-Np1ynRyeOChGK2k_oBAx0A7XcAFkMDMrxVHZhN-pjGXfAJw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-28 14:17                   ` Joao Eduardo Luis
  0 siblings, 0 replies; 25+ messages in thread
From: Joao Eduardo Luis @ 2017-11-28 14:17 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On 11/28/2017 12:52 PM, Alfredo Deza wrote:
> On Tue, Nov 28, 2017 at 7:38 AM, Joao Eduardo Luis <joao-l3A5Bk7waGM@public.gmane.org> wrote:
>> On 11/28/2017 11:54 AM, Alfredo Deza wrote:
>>>
>>> On Tue, Nov 28, 2017 at 3:12 AM, Wido den Hollander <wido-fspyXLx8qC4@public.gmane.org> wrote:
>>>>
>>>>
>>>>> On 27 November 2017 at 14:36, Alfredo Deza <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>>>>>
>>>>>
>>>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>>>> officially in 'deprecated' mode (bug fixes only). A large banner with
>>>>> deprecation information has been added, which will try to raise
>>>>> awareness.
>>>>>
>>>>
>>>> As much as I like ceph-volume and the work being done, is it really a
>>>> good idea to use a minor release to deprecate a tool?
>>>>
>>>> Can't we just introduce ceph-volume and deprecate ceph-disk at the
>>>> release of M? Because when you upgrade to 12.2.2 suddenly existing
>>>> integrations will have deprecation warnings being thrown at them while they
>>>> haven't upgraded to a new major version.
>>>
>>>
>>> ceph-volume has been present since the very first release of Luminous,
>>> the deprecation warning in ceph-disk is the only "new" thing
>>> introduced for 12.2.2.
>>
>>
>> I think Wido's question still stands: why can't ceph-disk be deprecated
>> solely in M, and removed by N?
> 
> Like I mentioned, I don't think this is set in stone (yet), but it was
> the idea from the beginning (See Oct 9th thread "killing ceph-disk"),
> and I don't think it would
> be terribly bad to keep ceph-disk in Mimic, but fully frozen, with no
> updates or bug fixes. And full removal in N
> 
> The deprecation warnings need to stay for Luminous though.

I can live with this, granted Luminous still sees bug fixes despite the 
deprecation warning - but I'm guessing that's what you meant by only 
fully freezing in Mimic :).

Thanks.

   -Joao

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]           ` <CAC-Np1zsf+uqFi+ZdR7K3=re5ODpJvWFrTX-JU_8DkkyVGOz7A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-28 14:26             ` Willem Jan Withagen
       [not found]               ` <d548c5a8-c0dc-0737-66f4-e3fc04f900d7-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: Willem Jan Withagen @ 2017-11-28 14:26 UTC (permalink / raw)
  To: Alfredo Deza, Piotr Dałek
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On 28-11-2017 13:32, Alfredo Deza wrote:
> 
> I understand that this would involve a significant effort to fully
> port over and drop ceph-disk entirely, and I don't think that dropping
> ceph-disk in Mimic is set in stone (yet).

Alfredo,

When I expressed my concerns about deprecating ceph-disk, I was led to
believe that I had at least two release cycles to come up with
something like a 'ceph-volume zfs ....'

Reading this, there is a possibility that it will get dropped IN Mimic?
Which means that there is less than 1 release cycle to get it working?

Thanx,
--WjW

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]               ` <d548c5a8-c0dc-0737-66f4-e3fc04f900d7-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org>
@ 2017-11-28 17:22                 ` David Turner
       [not found]                   ` <CAN-Gep+RmSB6gW5GRVsT8T7j60EAzGnfbgeqDjwCNDWocpgp3g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: David Turner @ 2017-11-28 17:22 UTC (permalink / raw)
  To: adeza-H+wXaHxf7aLQT0dZR+AlfA
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel


[-- Attachment #1.1: Type: text/plain, Size: 1502 bytes --]

Doesn't marking something as deprecated mean that there is a better
option that we want you to use, and that you should switch to it sooner
rather than later? I don't understand how this is ready to be marked as
such if ceph-volume can't be switched to for all supported use cases.
If ZFS, encryption, FreeBSD, etc. are all going to be supported under
ceph-volume, then how can ceph-disk be deprecated before ceph-volume
can support them? I can imagine many Ceph admins wasting time chasing
an erroneous deprecation warning because it came out before the new
solution was mature enough to replace the existing solution.
On Tue, Nov 28, 2017 at 9:26 AM Willem Jan Withagen <wjw-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org> wrote:

> On 28-11-2017 13:32, Alfredo Deza wrote:
> >
> > I understand that this would involve a significant effort to fully
> > port over and drop ceph-disk entirely, and I don't think that dropping
> > ceph-disk in Mimic is set in stone (yet).
>
> Alfredo,
>
> When I expressed my concers about deprecating ceph-disk, I was led to
> beleive that I had atleast two release cycles to come up with something
> of a 'ceph-volume zfs ....'
>
> Reading this, there is a possibility that it will get dropped IN mimic?
> Which means that there is less than 1 release cycle to get it working?
>
> Thanx,
> --WjW
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

[-- Attachment #1.2: Type: text/html, Size: 2060 bytes --]

[-- Attachment #2: Type: text/plain, Size: 178 bytes --]

_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]                   ` <CAN-Gep+RmSB6gW5GRVsT8T7j60EAzGnfbgeqDjwCNDWocpgp3g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-28 18:39                     ` Vasu Kulkarni
  2017-11-30 11:31                       ` [ceph-users] " Fabian Grünbichler
  0 siblings, 1 reply; 25+ messages in thread
From: Vasu Kulkarni @ 2017-11-28 18:39 UTC (permalink / raw)
  To: David Turner; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On Tue, Nov 28, 2017 at 9:22 AM, David Turner <drakonstein-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> Isn't marking something as deprecated meaning that there is a better option
> that we want you to use and you should switch to it sooner than later? I
> don't understand how this is ready to be marked as such if ceph-volume can't
> be switched to for all supported use cases. If ZFS, encryption, FreeBSD, etc
> are all going to be supported under ceph-volume, then how can ceph-disk be
> deprecated before ceph-volume can support them? I can imagine many Ceph
> admins wasting time chasing an erroneous deprecated warning because it came
> out before the new solution was mature enough to replace the existing
> solution.

There is no need to worry about this deprecation. It is mostly there so
that admins can prepare for the changes ahead, and it is mostly aimed at
*new* installations, which can plan on using ceph-volume; it provides
great flexibility compared to ceph-disk.

a) many don't use ceph-disk or ceph-volume directly, so the tool you
have right now (e.g. ceph-deploy or ceph-ansible) will still support
ceph-disk; the previous ceph-deploy release is still available from PyPI
(a quick install sketch follows after point b):
  https://pypi.python.org/pypi/ceph-deploy

b) the current push also gives anyone who is using ceph-deploy or
ceph-disk in scripts/Chef/etc. time to think about moving to the newer
CLI based on ceph-volume
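
A minimal sketch of installing ceph-deploy from PyPI (paths are
illustrative; the exact version to pin is deliberately left out here,
check PyPI for the last release that still wraps ceph-disk):

  # install ceph-deploy from PyPI into a throwaway virtualenv
  virtualenv ~/ceph-deploy-env
  # pin to the last ceph-disk-based release if needed (version omitted on purpose)
  ~/ceph-deploy-env/bin/pip install ceph-deploy
  ~/ceph-deploy-env/bin/ceph-deploy --version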


> On Tue, Nov 28, 2017 at 9:26 AM Willem Jan Withagen <wjw-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org> wrote:
>>
>> On 28-11-2017 13:32, Alfredo Deza wrote:
>> >
>> > I understand that this would involve a significant effort to fully
>> > port over and drop ceph-disk entirely, and I don't think that dropping
>> > ceph-disk in Mimic is set in stone (yet).
>>
>> Alfredo,
>>
>> When I expressed my concerns about deprecating ceph-disk, I was led to
>> believe that I had at least two release cycles to come up with something
>> of a 'ceph-volume zfs ....'
>>
>> Reading this, there is a possibility that it will get dropped IN mimic?
>> Which means that there is less than 1 release cycle to get it working?
>>
>> Thanx,
>> --WjW
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found] ` <CAC-Np1z7OJNxUeso+FEB8g+7RUABkvKF58_mmXDHt4SeOTHSDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2017-11-28  8:12   ` Wido den Hollander
@ 2017-11-29 12:51   ` Yoann Moulin
  1 sibling, 0 replies; 25+ messages in thread
From: Yoann Moulin @ 2017-11-29 12:51 UTC (permalink / raw)
  To: Alfredo Deza, ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On 27/11/2017 at 14:36, Alfredo Deza wrote:
> For the upcoming Luminous release (12.2.2), ceph-disk will be
> officially in 'deprecated' mode (bug fixes only). A large banner with
> deprecation information has been added, which will try to raise
> awareness.
> 
> We are strongly suggesting using ceph-volume for new (and old) OSD
> deployments. The only current exceptions to this are encrypted OSDs
> and FreeBSD systems
> 
> Encryption support is planned and will be coming soon to ceph-volume.
> 
> A few items to consider:
> 
> * ceph-disk is expected to be fully removed by the Mimic release
> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
> * ceph-ansible already fully supports ceph-volume and will soon default to it
> * ceph-deploy support is planned and should be fully implemented soon
> 
> 
> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Would it be possible to update the "add-or-rm-osds" documentation to also cover the process with ceph-volume? That would help adoption.

http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/

This page should be updated with the ceph-volume commands as well.

http://docs.ceph.com/docs/master/rados/operations/bluestore-migration/
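
As a starting point, the rough ceph-volume equivalents of the documented
ceph-disk steps might look like this (a sketch only; device paths, OSD ids
and fsids are placeholders, flags as per the Luminous ceph-volume docs):

  # create a new BlueStore OSD on a raw device in one step
  ceph-volume lvm create --bluestore --data /dev/sdX

  # or split into prepare/activate, as the docs currently show for ceph-disk
  ceph-volume lvm prepare --bluestore --data /dev/sdX
  ceph-volume lvm activate <osd-id> <osd-fsid>

  # take over an existing ceph-disk OSD via the "simple" subcommand
  ceph-volume simple scan /var/lib/ceph/osd/ceph-0
  ceph-volume simple activate <osd-id> <osd-fsid>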

The documentation (at least for master, maybe for luminous) should keep both options (ceph-disk and ceph-volume), but with a warning message encouraging
people to use ceph-volume instead of ceph-disk.

I agree with the comments here saying that marking ceph-disk as deprecated in a minor release is not what I expect from a stable storage system, but I
also understand the need to move forward with ceph-volume (and BlueStore). I think keeping ceph-disk in Mimic is necessary, even if it receives no
further updates, just for compatibility with old scripts.

-- 
Yoann Moulin
EPFL IC-IT
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [ceph-users] ceph-disk is now deprecated
  2017-11-28 18:39                     ` Vasu Kulkarni
@ 2017-11-30 11:31                       ` Fabian Grünbichler
  2017-11-30 12:04                         ` Alfredo Deza
  0 siblings, 1 reply; 25+ messages in thread
From: Fabian Grünbichler @ 2017-11-30 11:31 UTC (permalink / raw)
  To: Vasu Kulkarni; +Cc: David Turner, ceph-users, ceph-devel

On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
> On Tue, Nov 28, 2017 at 9:22 AM, David Turner <drakonstein@gmail.com> wrote:
> > Isn't marking something as deprecated meaning that there is a better option
> > that we want you to use and you should switch to it sooner than later? I
> > don't understand how this is ready to be marked as such if ceph-volume can't
> > be switched to for all supported use cases. If ZFS, encryption, FreeBSD, etc
> > are all going to be supported under ceph-volume, then how can ceph-disk be
> > deprecated before ceph-volume can support them? I can imagine many Ceph
> > admins wasting time chasing an erroneous deprecated warning because it came
> > out before the new solution was mature enough to replace the existing
> > solution.
> 
> There is no need to worry about this deprecation, Its mostly for
> admins to be prepared
> for the changes coming ahead and its mostly for *new* installations
> that can plan on using ceph-volume which provides
> great flexibility compared to ceph-disk.

Changing existing installations to output deprecation warnings from one
minor release to the next means it is not just for new installations,
though, no matter how you spin it. A mention in the release notes and
docs would be enough to get admins to test and use ceph-volume on new
installations.

I am pretty sure many admins will be bothered by all nodes running OSDs
spamming the logs and their terminals with huge deprecation warnings on
each OSD activation[1] or other actions involving ceph-disk. Having this
state for the remainder of Luminous, unless they switch to a new (and as
yet not battle-tested) way of activating their OSDs, seems crazy to me.

I know our users will be, and given the short notice and the huge impact
this would have, we will likely have to remove the deprecation warnings
altogether in our (downstream) packages until we have finished testing
and implementing support for ceph-volume.

> 
> a) many dont use ceph-disk or ceph-volume directly, so the tool you
> have right now eg: ceph-deploy or ceph-ansible
> will still support the ceph-disk, the previous ceph-deploy release is
> still available from pypi
>   https://pypi.python.org/pypi/ceph-deploy

We have >> 10k (user / customer managed!) installations on Ceph Luminous
alone, all using our wrapper around ceph-disk. Changing something like
this in the middle of a release causes huge headaches for downstreams
like us, and is not how a stable project is supposed to be run.

> 
> b) also the current push will help anyone who is using ceph-deploy or
> ceph-disk in scripts/chef/etc
>    to have time to think about using newer cli based on ceph-volume

A regular deprecation at the beginning of the release cycle where the
replacement is deemed stable, followed by removal in the next release
cycle, would be adequate for this purpose.

I don't understand the rush to shoe-horn ceph-volume into existing,
supposedly stable Ceph installations at all, especially given the current
state of ceph-volume (we'll file bugs once we are done writing them up,
but a quick rudimentary test already showed things like choking on valid
ceph.conf files because they contain leading whitespace, and incomplete
error handling that leaves crush map entries behind for failed OSD
creation attempts).
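
For what it's worth, manually cleaning up such a stray entry after a
failed creation attempt looks roughly like this (the osd id is a
placeholder):

  ceph osd crush remove osd.7
  ceph auth del osd.7
  ceph osd rm 7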

I DO understand the motivation behind ceph-volume and the desire to get
rid of the udev-based trigger mess, but the solution is not to scare
users into switching in the middle of a release by introducing
deprecation warnings for a core piece of the deployment stack.

IMHO the only reason to push or force such a switch in this manner would
be a (grave) security or data corruption bug, which is not the case at
all here.

1: Have you looked at the journal / boot logs of a mid-sized OSD node
using ceph-disk for activation with the deprecation warning active? If
my boot log is suddenly 20% warnings, my first reaction will be that
something is very wrong. My likely second reaction, once I realize what
is going on, is probably not fit for posting to a public mailing list ;)
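
For the curious, a quick way to gauge the noise on a systemd node (the
grep pattern is only a guess at the banner's actual wording):

  # count deprecation messages in the current boot's journal
  journalctl -b | grep -ci 'ceph-disk.*deprecat'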


^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [ceph-users] ceph-disk is now deprecated
  2017-11-30 11:31                       ` [ceph-users] " Fabian Grünbichler
@ 2017-11-30 12:04                         ` Alfredo Deza
       [not found]                           ` <CAC-Np1wo_j5MBXTzm5kp-MjWiV=vkL+5Xt88SS617MJ4qmh5UQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 25+ messages in thread
From: Alfredo Deza @ 2017-11-30 12:04 UTC (permalink / raw)
  To: Vasu Kulkarni, David Turner, ceph-users, ceph-devel

On Thu, Nov 30, 2017 at 6:31 AM, Fabian Grünbichler
<f.gruenbichler@proxmox.com> wrote:
> On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
>> On Tue, Nov 28, 2017 at 9:22 AM, David Turner <drakonstein@gmail.com> wrote:
>> > Isn't marking something as deprecated meaning that there is a better option
>> > that we want you to use and you should switch to it sooner than later? I
>> > don't understand how this is ready to be marked as such if ceph-volume can't
>> > be switched to for all supported use cases. If ZFS, encryption, FreeBSD, etc
>> > are all going to be supported under ceph-volume, then how can ceph-disk be
>> > deprecated before ceph-volume can support them? I can imagine many Ceph
>> > admins wasting time chasing an erroneous deprecated warning because it came
>> > out before the new solution was mature enough to replace the existing
>> > solution.
>>
>> There is no need to worry about this deprecation, Its mostly for
>> admins to be prepared
>> for the changes coming ahead and its mostly for *new* installations
>> that can plan on using ceph-volume which provides
>> great flexibility compared to ceph-disk.
>
> changing existing installations to output deprecation warnings from one
> minor release to the next means it is not just for new installations
> though, no matter how you spin it. a mention in the release notes and
> docs would be enough to get admins to test and use ceph-volume on new
> installations.
>
> I am pretty sure many admins will be bothered by all nodes running OSDs
> spamming the logs and their terminals with huge deprecation warnings on
> each OSD activation[1] or other actions involving ceph-disk, and having
> this state for the remainder of Luminous unless they switch to a new
> (and as of yet not battle-tested) way of activating their OSDs seems
> crazy to me.
>
> I know our users will be, and given the short notice and huge impact
> this would have we will likely have to remove the deprecation warnings
> altogether in our (downstream) packages until we have completed testing
> of and implementing support for ceph-volume..
>
>>
>> a) many dont use ceph-disk or ceph-volume directly, so the tool you
>> have right now eg: ceph-deploy or ceph-ansible
>> will still support the ceph-disk, the previous ceph-deploy release is
>> still available from pypi
>>   https://pypi.python.org/pypi/ceph-deploy
>
> we have >> 10k (user / customer managed!) installations on Ceph Luminous
> alone, all using our wrapper around ceph-disk - changing something like
> this in the middle of a release causes huge headaches for downstreams
> like us, and is not how a stable project is supposed to be run.

If you are using a wrapper around ceph-disk, then silencing the
deprecation warnings should be easy to do.

These are plain Python warnings, and can be silenced from within Python
or via environment variables. There are some details on how to do that
here: https://github.com/ceph/ceph/pull/18989
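
For example, assuming the banner goes through Python's standard warnings
module (as the PR suggests), setting the interpreter-level filter in the
environment of whatever invokes ceph-disk should quiet it; whether
ceph-disk uses the DeprecationWarning category specifically is an
assumption here:

  # blanket: drop all Python warnings for processes started from this shell
  export PYTHONWARNINGS=ignore

  # narrower: only suppress DeprecationWarning
  export PYTHONWARNINGS=ignore::DeprecationWarning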
>
>>
>> b) also the current push will help anyone who is using ceph-deploy or
>> ceph-disk in scripts/chef/etc
>>    to have time to think about using newer cli based on ceph-volume
>
> a regular deprecate at the beginning of the release cycle were the
> replacement is deemed stable, remove in the next release cycle would be
> adequate for this purpose.
>
> I don't understand the rush to shoe-horn ceph-volume into existing
> supposedly stable Ceph installations at all - especially given the
> current state of ceph-volume (we'll file bugs once we are done writing
> them up, but a quick rudimentary test already showed stuff like choking
> on valid ceph.conf files because they contain leading whitespace and
> incomplete error handling leading to crush map entries for failed OSD
> creation attempts).

Any ceph-volume bugs are welcome as soon as you can get them to us.
Waiting to report them is a problem: since ceph-volume is tied to Ceph
releases, the fixes will now have to wait for another point release
instead of landing in the upcoming one.

>
> I DO understand the motivation behind ceph-volume and the desire to get
> rid of the udev-based trigger mess, but the solution is not to scare
> users into switching in the middle of a release by introducing
> deprecation warnings for a core piece of the deployment stack.
>
> IMHO the only reason to push or force such a switch in this manner would
> be a (grave) security or data corruption bug, which is not the case at
> all here..

There is no forcing here. A deprecation warning was added, which can
be silenced.
>
> 1: have you looked at the journal / boot logs of a mid-sized OSD node
> using ceph-disk for activation with the deprecation warning active?  if
> my boot log is suddenly filled with 20% warnings, my first reaction will
> be that something is very wrong.. my likely second reaction when
> realizing what is going on is probably not fit for posting to a public
> mailing list ;)

The purpose of the deprecation warning is to be annoying, as you imply
here, and again, there are mechanisms to omit it if you understand the
issue.

>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: ceph-disk is now deprecated
       [not found]                           ` <CAC-Np1wo_j5MBXTzm5kp-MjWiV=vkL+5Xt88SS617MJ4qmh5UQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-30 13:04                             ` Fabian Grünbichler
  0 siblings, 0 replies; 25+ messages in thread
From: Fabian Grünbichler @ 2017-11-30 13:04 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On Thu, Nov 30, 2017 at 07:04:33AM -0500, Alfredo Deza wrote:
> On Thu, Nov 30, 2017 at 6:31 AM, Fabian Grünbichler
> <f.gruenbichler-YTcQvvOqK21BDgjK7y7TUQ@public.gmane.org> wrote:
> > On Tue, Nov 28, 2017 at 10:39:31AM -0800, Vasu Kulkarni wrote:
> >> On Tue, Nov 28, 2017 at 9:22 AM, David Turner <drakonstein-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> >> > Isn't marking something as deprecated meaning that there is a better option
> >> > that we want you to use and you should switch to it sooner than later? I
> >> > don't understand how this is ready to be marked as such if ceph-volume can't
> >> > be switched to for all supported use cases. If ZFS, encryption, FreeBSD, etc
> >> > are all going to be supported under ceph-volume, then how can ceph-disk be
> >> > deprecated before ceph-volume can support them? I can imagine many Ceph
> >> > admins wasting time chasing an erroneous deprecated warning because it came
> >> > out before the new solution was mature enough to replace the existing
> >> > solution.
> >>
> >> There is no need to worry about this deprecation, Its mostly for
> >> admins to be prepared
> >> for the changes coming ahead and its mostly for *new* installations
> >> that can plan on using ceph-volume which provides
> >> great flexibility compared to ceph-disk.
> >
> > changing existing installations to output deprecation warnings from one
> > minor release to the next means it is not just for new installations
> > though, no matter how you spin it. a mention in the release notes and
> > docs would be enough to get admins to test and use ceph-volume on new
> > installations.
> >
> > I am pretty sure many admins will be bothered by all nodes running OSDs
> > spamming the logs and their terminals with huge deprecation warnings on
> > each OSD activation[1] or other actions involving ceph-disk, and having
> > this state for the remainder of Luminous unless they switch to a new
> > (and as of yet not battle-tested) way of activating their OSDs seems
> > crazy to me.
> >
> > I know our users will be, and given the short notice and huge impact
> > this would have we will likely have to remove the deprecation warnings
> > altogether in our (downstream) packages until we have completed testing
> > of and implementing support for ceph-volume..
> >
> >>
> >> a) many dont use ceph-disk or ceph-volume directly, so the tool you
> >> have right now eg: ceph-deploy or ceph-ansible
> >> will still support the ceph-disk, the previous ceph-deploy release is
> >> still available from pypi
> >>   https://pypi.python.org/pypi/ceph-deploy
> >
> > we have >> 10k (user / customer managed!) installations on Ceph Luminous
> > alone, all using our wrapper around ceph-disk - changing something like
> > this in the middle of a release causes huge headaches for downstreams
> > like us, and is not how a stable project is supposed to be run.
> 
> If you are using a wrapper around ceph-disk, then silencing the
> deprecation warnings should be easy to do.
> 
> These are plain Python warnings, and can be silenced within Python or
> environment variables. There are some details
> on how to do that here https://github.com/ceph/ceph/pull/18989

The problem is not how to get rid of the warnings, but having to do so
when upgrading from one bug fix release to the next.

> >
> >>
> >> b) also the current push will help anyone who is using ceph-deploy or
> >> ceph-disk in scripts/chef/etc
> >>    to have time to think about using newer cli based on ceph-volume
> >
> > a regular deprecate at the beginning of the release cycle were the
> > replacement is deemed stable, remove in the next release cycle would be
> > adequate for this purpose.
> >
> > I don't understand the rush to shoe-horn ceph-volume into existing
> > supposedly stable Ceph installations at all - especially given the
> > current state of ceph-volume (we'll file bugs once we are done writing
> > them up, but a quick rudimentary test already showed stuff like choking
> > on valid ceph.conf files because they contain leading whitespace and
> > incomplete error handling leading to crush map entries for failed OSD
> > creation attempts).
> 
> Any ceph-volume bugs are welcomed as soon as you can get them to us.
> Waiting to get them reported is a problem, since ceph-volume
> is tied to Ceph releases, it means that these will now have to wait
> for another point release instead of having them in the upcoming one.

We started evaluating ceph-volume at the start of this thread in order
to see whether a switch-over pre-Mimic is feasible. We don't artificially
delay bug reports; it just takes time to test, find bugs, and report them
properly.

> 
> >
> > I DO understand the motivation behind ceph-volume and the desire to get
> > rid of the udev-based trigger mess, but the solution is not to scare
> > users into switching in the middle of a release by introducing
> > deprecation warnings for a core piece of the deployment stack.
> >
> > IMHO the only reason to push or force such a switch in this manner would
> > be a (grave) security or data corruption bug, which is not the case at
> > all here..
> 
> There is no forcing here. A deprecation warning was added, which can
> be silenced.

I did not say you ARE forcing; I said the only reason to push OR force
something like this WOULD be.

> >
> > 1: have you looked at the journal / boot logs of a mid-sized OSD node
> > using ceph-disk for activation with the deprecation warning active?  if
> > my boot log is suddenly filled with 20% warnings, my first reaction will
> > be that something is very wrong.. my likely second reaction when
> > realizing what is going on is probably not fit for posting to a public
> > mailing list ;)
> 
> The purpose of the deprecation warning is to be annoying as you imply
> here, and again, there are mechanisms on how to omit them
> if you understand the issue.

The point is: you should not purposefully attempt to annoy users and/or
downstreams by changing behaviour in the middle of an LTS release cycle
unless there is an important reason to do so. Something like this would
not even be appropriate during the RC stage in most projects, no matter
how easy it is to work around if you roll your own deployment scripts /
wrappers / packages.

You'd get almost the same net effect by introducing ceph-volume now (as
a new, alternative way of creating and activating OSDs), deprecating
ceph-disk in Mimic (with the big fat warning), and removing it in
Mimic+1, with much less irritation and far fewer annoyed users, and only
a small difference in how many new OSDs get deployed with ceph-disk
instead of ceph-volume.

I still don't see a big enough justification for this push - but maybe I
am missing an important factor? (although based on the other reactions
in this thread, it does not seem like we are the only ones who are
surprised/irritated by this course of action).

^ permalink raw reply	[flat|nested] 25+ messages in thread

end of thread, other threads:[~2017-11-30 13:04 UTC | newest]

Thread overview: 25+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-11-27 13:36 ceph-disk is now deprecated Alfredo Deza
2017-11-27 20:07 ` Nathan Cutler
2017-11-28  8:24   ` Fabian Grünbichler
     [not found]     ` <CALWuvY9mX0Pdr4aMwPi=Dqw0qng959Z5bg5hs5dOxTfSZ+QH4Q@mail.gmail.com>
2017-11-28 12:00       ` Alfredo Deza
2017-11-28  6:56 ` Andreas Calminder
2017-11-28 11:47   ` Alfredo Deza
     [not found]     ` <CAC-Np1wp1M=qapRO0sCr1HMLpZ2zgLoh-GVv3p01k2DqjxqaOw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-28 12:22       ` Andreas Calminder
2017-11-28 12:37         ` Alfredo Deza
     [not found]           ` <CAC-Np1xqg2-wAguu8AO6LSbj41uOGSBZTgdgL45OAi5M080grw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-28 13:14             ` Andreas Calminder
     [not found] ` <CAC-Np1z7OJNxUeso+FEB8g+7RUABkvKF58_mmXDHt4SeOTHSDw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-28  8:12   ` Wido den Hollander
2017-11-28  8:39     ` [ceph-users] " Piotr Dałek
     [not found]       ` <4c4b5589-b1ae-3f8b-e900-af0f2895fbc9-Rm6v+N6rxxBWk0Htik3J/w@public.gmane.org>
2017-11-28 12:32         ` Alfredo Deza
     [not found]           ` <CAC-Np1zsf+uqFi+ZdR7K3=re5ODpJvWFrTX-JU_8DkkyVGOz7A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-28 14:26             ` Willem Jan Withagen
     [not found]               ` <d548c5a8-c0dc-0737-66f4-e3fc04f900d7-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org>
2017-11-28 17:22                 ` David Turner
     [not found]                   ` <CAN-Gep+RmSB6gW5GRVsT8T7j60EAzGnfbgeqDjwCNDWocpgp3g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-28 18:39                     ` Vasu Kulkarni
2017-11-30 11:31                       ` [ceph-users] " Fabian Grünbichler
2017-11-30 12:04                         ` Alfredo Deza
     [not found]                           ` <CAC-Np1wo_j5MBXTzm5kp-MjWiV=vkL+5Xt88SS617MJ4qmh5UQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-30 13:04                             ` Fabian Grünbichler
     [not found]     ` <1821337488.5487.1511856776936-4q+tAGQs9zLCXE5Mi8V/gA@public.gmane.org>
2017-11-28 11:54       ` Alfredo Deza
     [not found]         ` <CAC-Np1zfMoqtW2M75ZAys1_NTEkns6MJ6HimuRR3nFhbMVzjPg-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-28 12:20           ` Maged Mokhtar
2017-11-28 12:27           ` Wido den Hollander
2017-11-28 12:38           ` Joao Eduardo Luis
     [not found]             ` <1db3fa50-d022-bf37-eb4b-098882ea0984-l3A5Bk7waGM@public.gmane.org>
2017-11-28 12:52               ` Alfredo Deza
     [not found]                 ` <CAC-Np1ynRyeOChGK2k_oBAx0A7XcAFkMDMrxVHZhN-pjGXfAJw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-11-28 14:17                   ` Joao Eduardo Luis
2017-11-29 12:51   ` Yoann Moulin
