* ceph-disk removal roadmap (was ceph-disk is now deprecated)
@ 2017-11-30 16:25 Alfredo Deza
       [not found] ` <CAC-Np1zdwLALBE_eheCJ+bR_A4-Gway6fpv6smcQNjt-4=9RxA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Alfredo Deza @ 2017-11-30 16:25 UTC (permalink / raw)
  To: ceph-users, ceph-devel

Thanks, all, for your feedback on deprecating ceph-disk. We are very
excited to be able to move forward with a much more robust tool and
process for deploying and activating OSDs, removing the dependency on
UDEV, which has been a constant source of issues.

Initially (see the "killing ceph-disk" thread [0]) we planned for
removal in Mimic, but we didn't want to introduce the deprecation
warnings until we had a migration path for those who had OSDs deployed
with ceph-disk in previous releases (we are now able to handle those
as well). That is the reason ceph-volume, although present since the
first Luminous release, hasn't been pushed forward much.

Now that we feel we can cover almost all cases, we would really like
to see wider usage so that we can improve based on the issues and
experience reported.

Given that 12.2.2 is already in the process of being released, we
can't undo the deprecation warnings for that version, but we will
remove them in 12.2.3 and add them back in Mimic, which means ceph-disk
will be kept around a bit longer and will finally be removed in N.

To recap:

* ceph-disk deprecation warnings will stay for 12.2.2
* deprecation warnings will be removed in 12.2.3 (and from all later
Luminous releases)
* deprecation warnings will be added again in ceph-disk for all Mimic releases
* ceph-disk will no longer be available for the 'N' release, along
with the UDEV rules

I believe these four points address most of the concerns voiced in
this thread, and should give enough time to port clusters over to
ceph-volume.

[0] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021358.html
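
For anyone who has not tried the replacement yet, here is a minimal
sketch of the ceph-volume workflow (assuming a Luminous build that
ships the 'lvm' and 'simple' subcommands; device names and IDs below
are placeholders):

  # Deploy a new bluestore OSD in one step (a vg/lv may be given
  # instead of a raw device):
  ceph-volume lvm create --bluestore --data /dev/sdb

  # Or split it into the two phases ceph-disk users will recognize:
  ceph-volume lvm prepare --bluestore --data /dev/sdb
  ceph-volume lvm activate <osd-id> <osd-fsid>

  # Take over an OSD originally deployed with ceph-disk, capturing its
  # metadata so UDEV-triggered activation is no longer needed:
  ceph-volume simple scan /var/lib/ceph/osd/ceph-0
  ceph-volume simple activate <osd-id> <osd-fsid>

The 'simple' subcommand is what covers clusters with existing
ceph-disk OSDs, so nothing needs to be redeployed before ceph-disk
goes away.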

On Thu, Nov 30, 2017 at 8:22 AM, Daniel Baumann <daniel.baumann@bfh.ch> wrote:
> On 11/30/17 14:04, Fabian Grünbichler wrote:
>> point is - you should not purposefully attempt to annoy users and/or
>> downstreams by changing behaviour in the middle of an LTS release cycle,
>
> exactly. upgrading the patch level (x.y.z to x.y.z+1) should imho never
> introduce a behaviour change, regardless of whether it's "just" adding
> new warnings or not.
>
> this is a stable update we're talking about, even more so since it's an
> LTS release. you never know how people use stuff (e.g. by parsing stupid
> things), so such a behaviour change will break stuff for *some* people
> (granted, most likely a really low number).
>
> my expectation of a stable release is that it stays, literally, stable.
> that's the whole point of having it in the first place. otherwise we
> would all be running git snapshots and updating randomly to newer ones.
>
> adding deprecation messages in mimic makes sense, and getting rid of
> ceph-disk / not providing support for it in mimic+1 is reasonable.
>
> Regards,
> Daniel



* Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)
  2017-11-30 16:25 ceph-disk removal roadmap (was ceph-disk is now deprecated) Alfredo Deza
       [not found] ` <CAC-Np1zdwLALBE_eheCJ+bR_A4-Gway6fpv6smcQNjt-4=9RxA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-11-30 16:36 ` Peter Woodman
  2017-12-01  8:17 ` Fabian Grünbichler
  2 siblings, 0 replies; 7+ messages in thread
From: Peter Woodman @ 2017-11-30 16:36 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users, ceph-devel

How quickly are you planning to cut 12.2.3?

On Thu, Nov 30, 2017 at 4:25 PM, Alfredo Deza <adeza@redhat.com> wrote:
> Thanks, all, for your feedback on deprecating ceph-disk. We are very
> excited to be able to move forward with a much more robust tool and
> process for deploying and activating OSDs, removing the dependency on
> UDEV, which has been a constant source of issues.
>
> Initially (see the "killing ceph-disk" thread [0]) we planned for
> removal in Mimic, but we didn't want to introduce the deprecation
> warnings until we had a migration path for those who had OSDs deployed
> with ceph-disk in previous releases (we are now able to handle those
> as well). That is the reason ceph-volume, although present since the
> first Luminous release, hasn't been pushed forward much.
>
> Now that we feel we can cover almost all cases, we would really like
> to see wider usage so that we can improve based on the issues and
> experience reported.
>
> Given that 12.2.2 is already in the process of being released, we
> can't undo the deprecation warnings for that version, but we will
> remove them in 12.2.3 and add them back in Mimic, which means ceph-disk
> will be kept around a bit longer and will finally be removed in N.
>
> To recap:
>
> * ceph-disk deprecation warnings will stay for 12.2.2
> * deprecation warnings will be removed in 12.2.3 (and from all later
> Luminous releases)
> * deprecation warnings will be added again in ceph-disk for all Mimic releases
> * ceph-disk will no longer be available for the 'N' release, along
> with the UDEV rules
>
> I believe these four points address most of the concerns voiced in
> this thread, and should give enough time to port clusters over to
> ceph-volume.
>
> [0] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021358.html
>
> On Thu, Nov 30, 2017 at 8:22 AM, Daniel Baumann <daniel.baumann@bfh.ch> wrote:
>> On 11/30/17 14:04, Fabian Grünbichler wrote:
>>> point is - you should not purposefully attempt to annoy users and/or
>>> downstreams by changing behaviour in the middle of an LTS release cycle,
>>
>> exactly. upgrading the patch level (x.y.z to x.y.z+1) should imho never
>> introduce a behaviour change, regardless of whether it's "just" adding
>> new warnings or not.
>>
>> this is a stable update we're talking about, even more so since it's an
>> LTS release. you never know how people use stuff (e.g. by parsing stupid
>> things), so such a behaviour change will break stuff for *some* people
>> (granted, most likely a really low number).
>>
>> my expectation of a stable release is that it stays, literally, stable.
>> that's the whole point of having it in the first place. otherwise we
>> would all be running git snapshots and updating randomly to newer ones.
>>
>> adding deprecation messages in mimic makes sense, and getting rid of
>> ceph-disk / not providing support for it in mimic+1 is reasonable.
>>
>> Regards,
>> Daniel


* Re: ceph-disk removal roadmap (was ceph-disk is now deprecated)
  2017-11-30 16:25 ceph-disk removal roadmap (was ceph-disk is now deprecated) Alfredo Deza
       [not found] ` <CAC-Np1zdwLALBE_eheCJ+bR_A4-Gway6fpv6smcQNjt-4=9RxA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2017-11-30 16:36 ` [ceph-users] " Peter Woodman
@ 2017-12-01  8:17 ` Fabian Grünbichler
       [not found]   ` <20171201081757.qwfo2lrhpmg77jgd-aVfaTQcAavps8ZkLLAvlZtBPR1lH4CV8@public.gmane.org>
  2 siblings, 1 reply; 7+ messages in thread
From: Fabian Grünbichler @ 2017-12-01  8:17 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users, ceph-devel

On Thu, Nov 30, 2017 at 11:25:03AM -0500, Alfredo Deza wrote:
> Thanks, all, for your feedback on deprecating ceph-disk. We are very
> excited to be able to move forward with a much more robust tool and
> process for deploying and activating OSDs, removing the dependency on
> UDEV, which has been a constant source of issues.
> 
> Initially (see the "killing ceph-disk" thread [0]) we planned for
> removal in Mimic, but we didn't want to introduce the deprecation
> warnings until we had a migration path for those who had OSDs deployed
> with ceph-disk in previous releases (we are now able to handle those
> as well). That is the reason ceph-volume, although present since the
> first Luminous release, hasn't been pushed forward much.
> 
> Now that we feel we can cover almost all cases, we would really like
> to see wider usage so that we can improve based on the issues and
> experience reported.
> 
> Given that 12.2.2 is already in the process of being released, we
> can't undo the deprecation warnings for that version, but we will
> remove them in 12.2.3 and add them back in Mimic, which means ceph-disk
> will be kept around a bit longer and will finally be removed in N.
> 
> To recap:
> 
> * ceph-disk deprecation warnings will stay for 12.2.2
> * deprecation warnings will be removed in 12.2.3 (and from all later
> Luminous releases)
> * deprecation warnings will be added again in ceph-disk for all Mimic releases
> * ceph-disk will no longer be available for the 'N' release, along
> with the UDEV rules
> 
> I believe these four points address most of the concerns voiced in
> this thread, and should give enough time to port clusters over to
> ceph-volume.
> 
> [0] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021358.html

Thank you for listening to the feedback - I think most of us know that
the balance between moving a project forward and decrufting a code base
versus providing a stable enough interface for users is not always easy
to strike.

I think the above roadmap is a good compromise for all involved parties,
and I hope we can use the remainder of Luminous to prepare for a
seamless and painless transition to ceph-volume in time for the Mimic
release, and then finally retire ceph-disk for good!



* Re: ceph-disk removal roadmap (was ceph-disk is now deprecated)
       [not found]   ` <20171201081757.qwfo2lrhpmg77jgd-aVfaTQcAavps8ZkLLAvlZtBPR1lH4CV8@public.gmane.org>
@ 2017-12-01  8:28     ` Stefan Kooman
  2017-12-01 12:45       ` [ceph-users] " Alfredo Deza
  2017-12-01 17:28       ` Alfredo Deza
  0 siblings, 2 replies; 7+ messages in thread
From: Stefan Kooman @ 2017-12-01  8:28 UTC (permalink / raw)
  To: Alfredo Deza, ceph-users, ceph-devel

Quoting Fabian Grünbichler (f.gruenbichler-YTcQvvOqK21BDgjK7y7TUQ@public.gmane.org):
> I think the above roadmap is a good compromise for all involved parties,
> and I hope we can use the remainder of Luminous to prepare for a
> seamless and painless transition to ceph-volume in time for the Mimic
> release, and then finally retire ceph-disk for good!

Will the upcoming 12.2.2 release ship with a ceph-volume capable of
doing bluestore on top of LVM? Eager to use ceph-volume for that, and
skip entirely over ceph-disk and our manual osd prepare process ...

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info-68+x73Hep80@public.gmane.org


* Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)
  2017-12-01  8:28     ` Stefan Kooman
@ 2017-12-01 12:45       ` Alfredo Deza
  2017-12-01 17:28       ` Alfredo Deza
  1 sibling, 0 replies; 7+ messages in thread
From: Alfredo Deza @ 2017-12-01 12:45 UTC (permalink / raw)
  To: Stefan Kooman; +Cc: ceph-users, ceph-devel

On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman <stefan@bit.nl> wrote:
> Quoting Fabian Grünbichler (f.gruenbichler@proxmox.com):
>> I think the above roadmap is a good compromise for all involved parties,
>> and I hope we can use the remainder of Luminous to prepare for a
>> seamless and painless transition to ceph-volume in time for the Mimic
>> release, and then finally retire ceph-disk for good!
>
> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
> doing bluestore on top of LVM? Eager to use ceph-volume for that, and
> skip entirely over ceph-disk and our manual osd prepare process ...

Yes. I think this was the case for 12.2.1 as well; in 12.2.2 it is
the default.
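
For reference, a minimal sketch of what bluestore on top of a
pre-created logical volume looks like (the volume group and LV names
below are placeholders):

  # Carve out the LVM pieces up front:
  vgcreate ceph-block /dev/sdb
  lvcreate -l 100%FREE -n osd-block-0 ceph-block

  # Hand the logical volume to ceph-volume as a bluestore OSD:
  ceph-volume lvm create --bluestore --data ceph-block/osd-block-0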


>
> Gr. Stefan
>
> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info@bit.nl


* Re: [ceph-users] ceph-disk removal roadmap (was ceph-disk is now deprecated)
  2017-12-01  8:28     ` Stefan Kooman
  2017-12-01 12:45       ` [ceph-users] " Alfredo Deza
@ 2017-12-01 17:28       ` Alfredo Deza
  1 sibling, 0 replies; 7+ messages in thread
From: Alfredo Deza @ 2017-12-01 17:28 UTC (permalink / raw)
  To: Stefan Kooman; +Cc: ceph-users, ceph-devel

On Fri, Dec 1, 2017 at 3:28 AM, Stefan Kooman <stefan@bit.nl> wrote:
> Quoting Fabian Grünbichler (f.gruenbichler@proxmox.com):
>> I think the above roadmap is a good compromise for all involved parties,
>> and I hope we can use the remainder of Luminous to prepare for a
>> seamless and painless transition to ceph-volume in time for the Mimic
>> release, and then finally retire ceph-disk for good!
>
> Will the upcoming 12.2.2 release ship with a ceph-volume capable of
> doing bluestore on top of LVM?

Yes, see the open PR for it (https://github.com/ceph/ceph-deploy/pull/455).

> Eager to use ceph-volume for that, and
> skip entirely over ceph-disk and our manual osd prepare process ...

Please note that the ceph-deploy API will change in a
non-backwards-compatible way, so a major release of ceph-deploy will
be done after that is merged.
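
As a rough sketch of the kind of change to expect (the exact flags are
only final once that PR is merged and released), the calling
convention moves from the old ceph-disk style to ceph-volume-style
arguments:

  # Old, ceph-disk-based convention (host:data-device):
  ceph-deploy osd prepare node1:/dev/sdb

  # Proposed ceph-volume-based convention; flags may still change
  # before the major release:
  ceph-deploy osd create --data /dev/sdb node1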

>
> Gr. Stefan
>
> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info@bit.nl


