All of lore.kernel.org
 help / color / mirror / Atom feed
* ceph-volume: migration and disk partition support
@ 2017-10-06 16:56 Alfredo Deza
  2017-10-09 15:09 ` killing ceph-disk [was Re: ceph-volume: migration and disk partition support] Sage Weil
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Alfredo Deza @ 2017-10-06 16:56 UTC (permalink / raw)
  To: ceph-devel, ceph-users

Hi,

Now that ceph-volume is part of the Luminous release, we've been able
to provide filestore support for LVM-based OSDs. We are making use of
LVM's powerful mechanisms to store metadata which allows the process
to no longer rely on UDEV and GPT labels (unlike ceph-disk).

Bluestore support should be the next step for `ceph-volume lvm`, and
while that is planned we are thinking of ways to improve the current
caveats (like OSDs not coming up) for clusters that have deployed OSDs
with ceph-disk.

--- New clusters ---
The `ceph-volume lvm` deployment is straightforward (currently
supported in ceph-ansible), but there isn't support for plain disks
(with partitions) currently, like there is with ceph-disk.

Is there a pressing interest in supporting plain disks with
partitions? Or only supporting LVM-based OSDs fine?

--- Existing clusters ---
Migration to ceph-volume, even with plain disk support means
re-creating the OSD from scratch, which would end up moving data.
There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
without starting from scratch.

A temporary workaround would be to provide a way for existing OSDs to
be brought up without UDEV and ceph-disk, by creating logic in
ceph-volume that could load them with systemd directly. This wouldn't
make them lvm-based, nor it would mean there is direct support for
them, just a temporary workaround to make them start without UDEV and
ceph-disk.

I'm interested in what current users might look for here,: is it fine
to provide this workaround if the issues are that problematic? Or is
it OK to plan a migration towards ceph-volume OSDs?

-Alfredo

^ permalink raw reply	[flat|nested] 14+ messages in thread

* killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
  2017-10-06 16:56 ceph-volume: migration and disk partition support Alfredo Deza
@ 2017-10-09 15:09 ` Sage Weil
       [not found]   ` <alpine.DEB.2.11.1710091448500.26711-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
       [not found] ` <CAC-Np1xUasg0C34HEuCFims18zPyRZ17p9-mp3xYCUETAYnn_A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2017-10-10  8:14 ` Dan van der Ster
  2 siblings, 1 reply; 14+ messages in thread
From: Sage Weil @ 2017-10-09 15:09 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-devel, ceph-users

To put this in context, the goal here is to kill ceph-disk in mimic.  

One proposal is to make it so new OSDs can *only* be deployed with LVM, 
and old OSDs with the ceph-disk GPT partitions would be started via 
ceph-volume support that can only start (but not deploy new) OSDs in that 
style.

Is the LVM-only-ness concerning to anyone?

Looking further forward, NVMe OSDs will probably be handled a bit 
differently, as they'll eventually be using SPDK and kernel-bypass (hence, 
no LVM).  For the time being, though, they would use LVM.


On Fri, 6 Oct 2017, Alfredo Deza wrote:
> Now that ceph-volume is part of the Luminous release, we've been able
> to provide filestore support for LVM-based OSDs. We are making use of
> LVM's powerful mechanisms to store metadata which allows the process
> to no longer rely on UDEV and GPT labels (unlike ceph-disk).
> 
> Bluestore support should be the next step for `ceph-volume lvm`, and
> while that is planned we are thinking of ways to improve the current
> caveats (like OSDs not coming up) for clusters that have deployed OSDs
> with ceph-disk.
> 
> --- New clusters ---
> The `ceph-volume lvm` deployment is straightforward (currently
> supported in ceph-ansible), but there isn't support for plain disks
> (with partitions) currently, like there is with ceph-disk.
> 
> Is there a pressing interest in supporting plain disks with
> partitions? Or only supporting LVM-based OSDs fine?

Perhaps the "out" here is to support a "dir" option where the user can 
manually provision and mount an OSD on /var/lib/ceph/osd/*, with 'journal' 
or 'block' symlinks, and ceph-volume will do the last bits that initialize 
the filestore or bluestore OSD from there.  Then if someone has a scenario 
that isn't captured by LVM (or whatever else we support) they can always 
do it manually?

> --- Existing clusters ---
> Migration to ceph-volume, even with plain disk support means
> re-creating the OSD from scratch, which would end up moving data.
> There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
> without starting from scratch.
> 
> A temporary workaround would be to provide a way for existing OSDs to
> be brought up without UDEV and ceph-disk, by creating logic in
> ceph-volume that could load them with systemd directly. This wouldn't
> make them lvm-based, nor it would mean there is direct support for
> them, just a temporary workaround to make them start without UDEV and
> ceph-disk.
> 
> I'm interested in what current users might look for here,: is it fine
> to provide this workaround if the issues are that problematic? Or is
> it OK to plan a migration towards ceph-volume OSDs?

IMO we can't require any kind of data migration in order to upgrade, which 
means we either have to (1) keep ceph-disk around indefinitely, or (2) 
teach ceph-volume to start existing GPT-style OSDs.  Given all of the 
flakiness around udev, I'm partial to #2.  The big question for me is 
whether #2 alone is sufficient, or whether ceph-volume should also know 
how to provision new OSDs using partitions and no LVM.  Hopefully not?

sage

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
       [not found]   ` <alpine.DEB.2.11.1710091448500.26711-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
@ 2017-10-10  0:50     ` Christian Balzer
       [not found]       ` <20171010095014.64940afa-9yhXNL7Kh0lSCLKNlHTxZM8NsWr+9BEh@public.gmane.org>
  2017-10-12 11:39     ` Matthew Vernon
  2017-10-16 22:32     ` Anthony Verevkin
  2 siblings, 1 reply; 14+ messages in thread
From: Christian Balzer @ 2017-10-10  0:50 UTC (permalink / raw)
  To: ceph-users-idqoXFIVOFJgJs9I8MT0rw; +Cc: ceph-devel


Hello,

(pet peeve alert)
On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:

> To put this in context, the goal here is to kill ceph-disk in mimic.  
> 
> One proposal is to make it so new OSDs can *only* be deployed with LVM, 
> and old OSDs with the ceph-disk GPT partitions would be started via 
> ceph-volume support that can only start (but not deploy new) OSDs in that 
> style.
> 
> Is the LVM-only-ness concerning to anyone?
>
If the provision below is met, not really.
 
> Looking further forward, NVMe OSDs will probably be handled a bit 
> differently, as they'll eventually be using SPDK and kernel-bypass (hence, 
> no LVM).  For the time being, though, they would use LVM.
>
And so it begins.
LVM does a lot of nice things, but not everything for everybody. 
It is also another layer added with all the (minor) reductions in
performance (with normal storage, not NVMe) and of course potential bugs.
 
> 
> On Fri, 6 Oct 2017, Alfredo Deza wrote:
> > Now that ceph-volume is part of the Luminous release, we've been able
> > to provide filestore support for LVM-based OSDs. We are making use of
> > LVM's powerful mechanisms to store metadata which allows the process
> > to no longer rely on UDEV and GPT labels (unlike ceph-disk).
> > 
> > Bluestore support should be the next step for `ceph-volume lvm`, and
> > while that is planned we are thinking of ways to improve the current
> > caveats (like OSDs not coming up) for clusters that have deployed OSDs
> > with ceph-disk.
> > 
> > --- New clusters ---
> > The `ceph-volume lvm` deployment is straightforward (currently
> > supported in ceph-ansible), but there isn't support for plain disks
> > (with partitions) currently, like there is with ceph-disk.
> > 
> > Is there a pressing interest in supporting plain disks with
> > partitions? Or only supporting LVM-based OSDs fine?  
> 
> Perhaps the "out" here is to support a "dir" option where the user can 
> manually provision and mount an OSD on /var/lib/ceph/osd/*, with 'journal' 
> or 'block' symlinks, and ceph-volume will do the last bits that initialize 
> the filestore or bluestore OSD from there.  Then if someone has a scenario 
> that isn't captured by LVM (or whatever else we support) they can always 
> do it manually?
> 
Basically this.
Since all my old clusters were deployed like this, with no
chance/intention to upgrade to GPT or even LVM.
How would symlinks work with Bluestore, the tiny XFS bit?

> > --- Existing clusters ---
> > Migration to ceph-volume, even with plain disk support means
> > re-creating the OSD from scratch, which would end up moving data.
> > There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
> > without starting from scratch.
> > 
> > A temporary workaround would be to provide a way for existing OSDs to
> > be brought up without UDEV and ceph-disk, by creating logic in
> > ceph-volume that could load them with systemd directly. This wouldn't
> > make them lvm-based, nor it would mean there is direct support for
> > them, just a temporary workaround to make them start without UDEV and
> > ceph-disk.
> > 
> > I'm interested in what current users might look for here,: is it fine
> > to provide this workaround if the issues are that problematic? Or is
> > it OK to plan a migration towards ceph-volume OSDs?  
> 
> IMO we can't require any kind of data migration in order to upgrade, which 
> means we either have to (1) keep ceph-disk around indefinitely, or (2) 
> teach ceph-volume to start existing GPT-style OSDs.  Given all of the 
> flakiness around udev, I'm partial to #2.  The big question for me is 
> whether #2 alone is sufficient, or whether ceph-volume should also know 
> how to provision new OSDs using partitions and no LVM.  Hopefully not?
> 
I really disliked the udev/GPT stuff from the get-go and flakiness is
being kind for sometimes completely indeterministic behavior.

Since there never was an (non-disruptive) upgrade process from non-GPT
based OSDs to GPT based ones, I wonder what changed minds here.
Not that the GPT based users won't appreciate it.

Christian
> sage
> _______________________________________________
> ceph-users mailing list
> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi-FW+hd8ioUD0@public.gmane.org   	Rakuten Communications

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ceph-volume: migration and disk partition support
       [not found] ` <CAC-Np1xUasg0C34HEuCFims18zPyRZ17p9-mp3xYCUETAYnn_A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-10-10  7:28   ` Stefan Kooman
  2017-10-10 12:25     ` [ceph-users] " Alfredo Deza
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Kooman @ 2017-10-10  7:28 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-devel, ceph-users-idqoXFIVOFJgJs9I8MT0rw

Hi,

Quoting Alfredo Deza (adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org):
> Hi,
> 
> Now that ceph-volume is part of the Luminous release, we've been able
> to provide filestore support for LVM-based OSDs. We are making use of
> LVM's powerful mechanisms to store metadata which allows the process
> to no longer rely on UDEV and GPT labels (unlike ceph-disk).
> 
> Bluestore support should be the next step for `ceph-volume lvm`, and
> while that is planned we are thinking of ways to improve the current
> caveats (like OSDs not coming up) for clusters that have deployed OSDs
> with ceph-disk.

I'm a bit confused after reading this. Just to make things clear. Would
bluestore be put on top of a LVM volume (in an ideal world)? Has
bluestore in Ceph luminious support for LVM? I.e. is there code in
bluestore to support LVM? Or is it _just_ support of `ceph-volume lvm`
for bluestore?

> --- New clusters ---
> The `ceph-volume lvm` deployment is straightforward (currently
> supported in ceph-ansible), but there isn't support for plain disks
> (with partitions) currently, like there is with ceph-disk.
> 
> Is there a pressing interest in supporting plain disks with
> partitions? Or only supporting LVM-based OSDs fine?

We're still in a green field situation. Users with an installed base
will have to comment on this. If the assumption that bluestore would be
put on top of LVM is true, it would make things simpler (in our own Ceph
ansible playbook).

Gr. Stefan

-- 
| BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
| GPG: 0xD14839C6                   +31 318 648 688 / info-68+x73Hep80@public.gmane.org

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [ceph-users] ceph-volume: migration and disk partition support
  2017-10-06 16:56 ceph-volume: migration and disk partition support Alfredo Deza
  2017-10-09 15:09 ` killing ceph-disk [was Re: ceph-volume: migration and disk partition support] Sage Weil
       [not found] ` <CAC-Np1xUasg0C34HEuCFims18zPyRZ17p9-mp3xYCUETAYnn_A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-10-10  8:14 ` Dan van der Ster
  2 siblings, 0 replies; 14+ messages in thread
From: Dan van der Ster @ 2017-10-10  8:14 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-devel, ceph-users

On Fri, Oct 6, 2017 at 6:56 PM, Alfredo Deza <adeza@redhat.com> wrote:
> Hi,
>
> Now that ceph-volume is part of the Luminous release, we've been able
> to provide filestore support for LVM-based OSDs. We are making use of
> LVM's powerful mechanisms to store metadata which allows the process
> to no longer rely on UDEV and GPT labels (unlike ceph-disk).
>
> Bluestore support should be the next step for `ceph-volume lvm`, and
> while that is planned we are thinking of ways to improve the current
> caveats (like OSDs not coming up) for clusters that have deployed OSDs
> with ceph-disk.
>
> --- New clusters ---
> The `ceph-volume lvm` deployment is straightforward (currently
> supported in ceph-ansible), but there isn't support for plain disks
> (with partitions) currently, like there is with ceph-disk.
>
> Is there a pressing interest in supporting plain disks with
> partitions? Or only supporting LVM-based OSDs fine?
>
> --- Existing clusters ---
> Migration to ceph-volume, even with plain disk support means
> re-creating the OSD from scratch, which would end up moving data.
> There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
> without starting from scratch.
>
> A temporary workaround would be to provide a way for existing OSDs to
> be brought up without UDEV and ceph-disk, by creating logic in
> ceph-volume that could load them with systemd directly. This wouldn't
> make them lvm-based, nor it would mean there is direct support for
> them, just a temporary workaround to make them start without UDEV and
> ceph-disk.
>
> I'm interested in what current users might look for here,: is it fine
> to provide this workaround if the issues are that problematic? Or is
> it OK to plan a migration towards ceph-volume OSDs?

Without fully understanding the technical details and plans, it will
be hard to answer this.

In general, I wouldn't plan to recreate all OSDs. In our case, we
don't currently plan to recreate FileStore OSDs as Bluestore after the
Luminous upgrade, as that would be too much work. *New* OSDs will be
created the *new* way (is that ceph-disk bluestore? ceph-volume lvm
bluestore??) It wouldn't be nice if we created new OSDs today with
ceph-disk bluestore, then have to recreate all those with ceph-volume
bluestore in a few months.

Disks/servers have a ~5 year lifetime, and we want to format OSDs
exactly once. I'd hope those OSDs remain bootable for the upcoming
releases.

(ceph-disk activation works reliably enough here -- just don't remove
the existing functionality and we'll be happy).

-- dan

>
> -Alfredo
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
       [not found]       ` <20171010095014.64940afa-9yhXNL7Kh0lSCLKNlHTxZM8NsWr+9BEh@public.gmane.org>
@ 2017-10-10 11:51         ` Alfredo Deza
  2017-10-10 12:14           ` [ceph-users] " Willem Jan Withagen
  0 siblings, 1 reply; 14+ messages in thread
From: Alfredo Deza @ 2017-10-10 11:51 UTC (permalink / raw)
  To: Christian Balzer; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer <chibi-FW+hd8ioUD0@public.gmane.org> wrote:
>
> Hello,
>
> (pet peeve alert)
> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>
>> To put this in context, the goal here is to kill ceph-disk in mimic.
>>
>> One proposal is to make it so new OSDs can *only* be deployed with LVM,
>> and old OSDs with the ceph-disk GPT partitions would be started via
>> ceph-volume support that can only start (but not deploy new) OSDs in that
>> style.
>>
>> Is the LVM-only-ness concerning to anyone?
>>
> If the provision below is met, not really.
>
>> Looking further forward, NVMe OSDs will probably be handled a bit
>> differently, as they'll eventually be using SPDK and kernel-bypass (hence,
>> no LVM).  For the time being, though, they would use LVM.
>>
> And so it begins.
> LVM does a lot of nice things, but not everything for everybody.
> It is also another layer added with all the (minor) reductions in
> performance (with normal storage, not NVMe) and of course potential bugs.
>

ceph-volume was crafted in a way that we wouldn't be forcing anyone to
a single backend (e.g. LVM). Initially it went even further,
as just being a simple orchestrator for getting devices mounted and
starting the OSD with minimal configuration and *regardless* of what
type of devices were being used.

The current status of the LVM portion is *very* robust, although it is
lacking a big chunk of feature parity with ceph-disk. I anticipate
potential bugs
anyway :)

>>
>> On Fri, 6 Oct 2017, Alfredo Deza wrote:
>> > Now that ceph-volume is part of the Luminous release, we've been able
>> > to provide filestore support for LVM-based OSDs. We are making use of
>> > LVM's powerful mechanisms to store metadata which allows the process
>> > to no longer rely on UDEV and GPT labels (unlike ceph-disk).
>> >
>> > Bluestore support should be the next step for `ceph-volume lvm`, and
>> > while that is planned we are thinking of ways to improve the current
>> > caveats (like OSDs not coming up) for clusters that have deployed OSDs
>> > with ceph-disk.
>> >
>> > --- New clusters ---
>> > The `ceph-volume lvm` deployment is straightforward (currently
>> > supported in ceph-ansible), but there isn't support for plain disks
>> > (with partitions) currently, like there is with ceph-disk.
>> >
>> > Is there a pressing interest in supporting plain disks with
>> > partitions? Or only supporting LVM-based OSDs fine?
>>
>> Perhaps the "out" here is to support a "dir" option where the user can
>> manually provision and mount an OSD on /var/lib/ceph/osd/*, with 'journal'
>> or 'block' symlinks, and ceph-volume will do the last bits that initialize
>> the filestore or bluestore OSD from there.  Then if someone has a scenario
>> that isn't captured by LVM (or whatever else we support) they can always
>> do it manually?
>>
> Basically this.
> Since all my old clusters were deployed like this, with no
> chance/intention to upgrade to GPT or even LVM.
> How would symlinks work with Bluestore, the tiny XFS bit?

In this case, we are looking to allow ceph-volume to scan currently
deployed OSDs, and get all the information
needed and save it as a plain configuration file that will be read at
boot time. That is the only other option that
is not dependent on udev/ceph-disk that doesn't mean redoing an OSD
from scratch.

It would be a one-time operation to get out of old deployment's tie
into udev/gpt/ceph-disk

>
>> > --- Existing clusters ---
>> > Migration to ceph-volume, even with plain disk support means
>> > re-creating the OSD from scratch, which would end up moving data.
>> > There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
>> > without starting from scratch.
>> >
>> > A temporary workaround would be to provide a way for existing OSDs to
>> > be brought up without UDEV and ceph-disk, by creating logic in
>> > ceph-volume that could load them with systemd directly. This wouldn't
>> > make them lvm-based, nor it would mean there is direct support for
>> > them, just a temporary workaround to make them start without UDEV and
>> > ceph-disk.
>> >
>> > I'm interested in what current users might look for here,: is it fine
>> > to provide this workaround if the issues are that problematic? Or is
>> > it OK to plan a migration towards ceph-volume OSDs?
>>
>> IMO we can't require any kind of data migration in order to upgrade, which
>> means we either have to (1) keep ceph-disk around indefinitely, or (2)
>> teach ceph-volume to start existing GPT-style OSDs.  Given all of the
>> flakiness around udev, I'm partial to #2.  The big question for me is
>> whether #2 alone is sufficient, or whether ceph-volume should also know
>> how to provision new OSDs using partitions and no LVM.  Hopefully not?
>>
> I really disliked the udev/GPT stuff from the get-go and flakiness is
> being kind for sometimes completely indeterministic behavior.
>

Yep, forcing users to always fit one model seemed annoying to me. I
understand the attractiveness of the idea: just like LVM today, it
provides a narrower path for supporting more features and having a
more robust implementation.



> Since there never was an (non-disruptive) upgrade process from non-GPT
> based OSDs to GPT based ones, I wonder what changed minds here.
> Not that the GPT based users won't appreciate it.
>

We really want users to start consuming ceph-volume exclusively, but
to get there we need to find a way to deprecate ceph-disk while at the
same
time not requiring everyone to start from scratch again.

It wasn't possible to "fix" ceph-disk, and with ceph-volume we are
already doing well. My hope is that by finding the middle ground
between the two
we can eventually get to no longer support anything related to ceph-disk.

> Christian
>> sage
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
> --
> Christian Balzer        Network/Systems Engineer
> chibi-FW+hd8ioUD0@public.gmane.org           Rakuten Communications

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
  2017-10-10 11:51         ` Alfredo Deza
@ 2017-10-10 12:14           ` Willem Jan Withagen
       [not found]             ` <a1c7e8df-44a1-a03f-ddbc-2bb66494256a-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org>
  0 siblings, 1 reply; 14+ messages in thread
From: Willem Jan Withagen @ 2017-10-10 12:14 UTC (permalink / raw)
  To: Alfredo Deza, Christian Balzer; +Cc: ceph-users, Sage Weil, ceph-devel

On 10-10-2017 13:51, Alfredo Deza wrote:
> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer <chibi@gol.com> wrote:
>>
>> Hello,
>>
>> (pet peeve alert)
>> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>>
>>> To put this in context, the goal here is to kill ceph-disk in mimic.

Right, that means we need a ceph-volume zfs before things get shot down.
Fortunately there is little history to carry over.

But then still somebody needs to do the work. ;-|
Haven't looked at ceph-volume, but I'll put it on the agenda.

--WjW



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
       [not found]             ` <a1c7e8df-44a1-a03f-ddbc-2bb66494256a-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org>
@ 2017-10-10 12:21               ` Alfredo Deza
       [not found]                 ` <CAC-Np1wLx-fqJ2kJv9TT0Hny520bCLZAcen=8rTU0zsS-zGJhQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 14+ messages in thread
From: Alfredo Deza @ 2017-10-10 12:21 UTC (permalink / raw)
  To: Willem Jan Withagen; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On Tue, Oct 10, 2017 at 8:14 AM, Willem Jan Withagen <wjw-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org> wrote:
> On 10-10-2017 13:51, Alfredo Deza wrote:
>> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer <chibi-FW+hd8ioUD0@public.gmane.org> wrote:
>>>
>>> Hello,
>>>
>>> (pet peeve alert)
>>> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>>>
>>>> To put this in context, the goal here is to kill ceph-disk in mimic.
>
> Right, that means we need a ceph-volume zfs before things get shot down.
> Fortunately there is little history to carry over.
>
> But then still somebody needs to do the work. ;-|
> Haven't looked at ceph-volume, but I'll put it on the agenda.

An interesting take on zfs (and anything else we didn't set up from
the get-go) is that we envisioned developers might
want to craft plugins for ceph-volume and expand its capabilities,
without placing the burden of coming up
with new device technology to support.

The other nice aspect of this is that a plugin would get to re-use all
the tooling in place in ceph-volume. The plugin architecture
exists but it isn't fully developed/documented yet.

>
> --WjW
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [ceph-users] ceph-volume: migration and disk partition support
  2017-10-10  7:28   ` ceph-volume: migration and disk partition support Stefan Kooman
@ 2017-10-10 12:25     ` Alfredo Deza
  0 siblings, 0 replies; 14+ messages in thread
From: Alfredo Deza @ 2017-10-10 12:25 UTC (permalink / raw)
  To: Stefan Kooman; +Cc: ceph-devel, ceph-users

On Tue, Oct 10, 2017 at 3:28 AM, Stefan Kooman <stefan@bit.nl> wrote:
> Hi,
>
> Quoting Alfredo Deza (adeza@redhat.com):
>> Hi,
>>
>> Now that ceph-volume is part of the Luminous release, we've been able
>> to provide filestore support for LVM-based OSDs. We are making use of
>> LVM's powerful mechanisms to store metadata which allows the process
>> to no longer rely on UDEV and GPT labels (unlike ceph-disk).
>>
>> Bluestore support should be the next step for `ceph-volume lvm`, and
>> while that is planned we are thinking of ways to improve the current
>> caveats (like OSDs not coming up) for clusters that have deployed OSDs
>> with ceph-disk.
>
> I'm a bit confused after reading this. Just to make things clear. Would
> bluestore be put on top of a LVM volume (in an ideal world)? Has
> bluestore in Ceph luminious support for LVM? I.e. is there code in
> bluestore to support LVM? Or is it _just_ support of `ceph-volume lvm`
> for bluestore?

There is currently no support in `ceph-volume lvm` for bluestore yet.
It is being worked on today and should be ready soon (hopefully in the
next Luminous release).

And yes, in the case of  `ceph-volume lvm` it means that bluestore
would be "on top" of LVM volumes.

>
>> --- New clusters ---
>> The `ceph-volume lvm` deployment is straightforward (currently
>> supported in ceph-ansible), but there isn't support for plain disks
>> (with partitions) currently, like there is with ceph-disk.
>>
>> Is there a pressing interest in supporting plain disks with
>> partitions? Or only supporting LVM-based OSDs fine?
>
> We're still in a green field situation. Users with an installed base
> will have to comment on this. If the assumption that bluestore would be
> put on top of LVM is true, it would make things simpler (in our own Ceph
> ansible playbook).

There is already support in ceph-ansible too, which will mean that
when bluestore support is added, it will be added in ceph-ansible at
the same time.
>
> Gr. Stefan
>
> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info@bit.nl

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
       [not found]                 ` <CAC-Np1wLx-fqJ2kJv9TT0Hny520bCLZAcen=8rTU0zsS-zGJhQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2017-10-10 12:42                   ` Willem Jan Withagen
  0 siblings, 0 replies; 14+ messages in thread
From: Willem Jan Withagen @ 2017-10-10 12:42 UTC (permalink / raw)
  To: Alfredo Deza; +Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel

On 10-10-2017 14:21, Alfredo Deza wrote:
> On Tue, Oct 10, 2017 at 8:14 AM, Willem Jan Withagen <wjw-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org> wrote:
>> On 10-10-2017 13:51, Alfredo Deza wrote:
>>> On Mon, Oct 9, 2017 at 8:50 PM, Christian Balzer <chibi-FW+hd8ioUD0@public.gmane.org> wrote:
>>>>
>>>> Hello,
>>>>
>>>> (pet peeve alert)
>>>> On Mon, 9 Oct 2017 15:09:29 +0000 (UTC) Sage Weil wrote:
>>>>
>>>>> To put this in context, the goal here is to kill ceph-disk in mimic.
>>
>> Right, that means we need a ceph-volume zfs before things get shot down.
>> Fortunately there is little history to carry over.
>>
>> But then still somebody needs to do the work. ;-|
>> Haven't looked at ceph-volume, but I'll put it on the agenda.
> 
> An interesting take on zfs (and anything else we didn't set up from
> the get-go) is that we envisioned developers might
> want to craft plugins for ceph-volume and expand its capabilities,
> without placing the burden of coming up
> with new device technology to support.
> 
> The other nice aspect of this is that a plugin would get to re-use all
> the tooling in place in ceph-volume. The plugin architecture
> exists but it isn't fully developed/documented yet.

I was part of the original discussion when ceph-volume said it was going
to be plugable... And would be a great proponent of thye plugins.
If only because ceph-disk is rather convoluted to add to. Not that it
cannot be done, but the code is rather loaded with linuxisms for its
devices. And it takes some care to not upset the old code, even to the
point that code for a routine is refactored into 3 new routines: one OS
selctor and then the old code for Linux, and the new code for FreeBSD.
And that starts to look like a poor mans plugin. :)

But still I need to find the time, and sharpen my python skills.
Luckily mimic is 9 months away. :)

--WjW

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
       [not found]   ` <alpine.DEB.2.11.1710091448500.26711-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
  2017-10-10  0:50     ` Christian Balzer
@ 2017-10-12 11:39     ` Matthew Vernon
  2017-10-16 22:32     ` Anthony Verevkin
  2 siblings, 0 replies; 14+ messages in thread
From: Matthew Vernon @ 2017-10-12 11:39 UTC (permalink / raw)
  To: Sage Weil, Alfredo Deza; +Cc: ceph-devel, ceph-users-idqoXFIVOFJgJs9I8MT0rw

Hi,

On 09/10/17 16:09, Sage Weil wrote:
> To put this in context, the goal here is to kill ceph-disk in mimic.
> 
> One proposal is to make it so new OSDs can *only* be deployed with LVM,
> and old OSDs with the ceph-disk GPT partitions would be started via
> ceph-volume support that can only start (but not deploy new) OSDs in that
> style.
> 
> Is the LVM-only-ness concerning to anyone?
> 
> Looking further forward, NVMe OSDs will probably be handled a bit
> differently, as they'll eventually be using SPDK and kernel-bypass (hence,
> no LVM).  For the time being, though, they would use LVM.

This seems the best point to jump in on this thread. We have a ceph 
(Jewel / Ubuntu 16.04) cluster with around 3k OSDs, deployed with 
ceph-ansible. They are plain-disk OSDs with journal on NVME partitions. 
I don't think this is an unusual configuration :)

I think to get rid of ceph-disk, we would want at least some of the 
following:

* solid scripting for "move slowly through cluster migrating OSDs from 
disk to lvm" - 1 OSD at a time isn't going to produce unacceptable 
rebalance load, but it is going to take a long time, so such scripting 
would have to cope with being stopped and restarted and suchlike (and be 
able to use the correct journal partitions)

* ceph-ansible support for "some lvm, some plain disk" arrangements - 
presuming a "create new OSDs as lvm" approach when adding new OSDs or 
replacing failed disks

* support for plain disk (regardless of what provides it) that remains 
solid for some time yet

> On Fri, 6 Oct 2017, Alfredo Deza wrote:

>> Bluestore support should be the next step for `ceph-volume lvm`, and
>> while that is planned we are thinking of ways to improve the current
>> caveats (like OSDs not coming up) for clusters that have deployed OSDs
>> with ceph-disk.

These issues seem mostly to be down to timeouts being too short and the 
single global lock for activating OSDs.

> IMO we can't require any kind of data migration in order to upgrade, which
> means we either have to (1) keep ceph-disk around indefinitely, or (2)
> teach ceph-volume to start existing GPT-style OSDs.  Given all of the
> flakiness around udev, I'm partial to #2.  The big question for me is
> whether #2 alone is sufficient, or whether ceph-volume should also know
> how to provision new OSDs using partitions and no LVM.  Hopefully not?

I think this depends on how well tools such as ceph-ansible can cope 
with mixed OSD types (my feeling at the moment is "not terribly well", 
but I may be being unfair).

Regards,

Matthew



-- 
 The Wellcome Trust Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
       [not found]   ` <alpine.DEB.2.11.1710091448500.26711-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
  2017-10-10  0:50     ` Christian Balzer
  2017-10-12 11:39     ` Matthew Vernon
@ 2017-10-16 22:32     ` Anthony Verevkin
  2017-10-16 22:34       ` [ceph-users] " Sage Weil
  2017-10-16 23:25       ` Christian Balzer
  2 siblings, 2 replies; 14+ messages in thread
From: Anthony Verevkin @ 2017-10-16 22:32 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel, ceph-users-idqoXFIVOFJgJs9I8MT0rw


> From: "Sage Weil" <sage-BnTBU8nroG7k1uMJSBkQmQ@public.gmane.org>
> To: "Alfredo Deza" <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: "ceph-devel" <ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>, ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> Sent: Monday, October 9, 2017 11:09:29 AM
> Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
> 
> To put this in context, the goal here is to kill ceph-disk in mimic.
> 

 
> Perhaps the "out" here is to support a "dir" option where the user
> can
> manually provision and mount an OSD on /var/lib/ceph/osd/*, with
> 'journal'
> or 'block' symlinks, and ceph-volume will do the last bits that
> initialize
> the filestore or bluestore OSD from there.  Then if someone has a
> scenario
> that isn't captured by LVM (or whatever else we support) they can
> always
> do it manually?
> 


In fact, now that bluestore only requires a few small files and symlinks to remain in /var/lib/ceph/osd/* without the extra requirements for xattrs support and xfs, why not simply leave those folders on OS root filesystem and only point symlinks to bluestore block and db devices? That would simplify the osd deployment so much - and the symlinks can then point to /dev/disk/by-uuid or by-path or lvm path or whatever. The only downside for this approach that I see is that disks themselves would no longer be transferable between the hosts as those few files that describe the OSD are no longer on the disk itself.

Regards,
Anthony

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
  2017-10-16 22:32     ` Anthony Verevkin
@ 2017-10-16 22:34       ` Sage Weil
  2017-10-16 23:25       ` Christian Balzer
  1 sibling, 0 replies; 14+ messages in thread
From: Sage Weil @ 2017-10-16 22:34 UTC (permalink / raw)
  To: Anthony Verevkin; +Cc: ceph-devel, ceph-users, Alfredo Deza

On Mon, 16 Oct 2017, Anthony Verevkin wrote:
> 
> > From: "Sage Weil" <sage@newdream.net>
> > To: "Alfredo Deza" <adeza@redhat.com>
> > Cc: "ceph-devel" <ceph-devel@vger.kernel.org>, ceph-users@lists.ceph.com
> > Sent: Monday, October 9, 2017 11:09:29 AM
> > Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
> > 
> > To put this in context, the goal here is to kill ceph-disk in mimic.
> > 
> 
>  
> > Perhaps the "out" here is to support a "dir" option where the user
> > can
> > manually provision and mount an OSD on /var/lib/ceph/osd/*, with
> > 'journal'
> > or 'block' symlinks, and ceph-volume will do the last bits that
> > initialize
> > the filestore or bluestore OSD from there.  Then if someone has a
> > scenario
> > that isn't captured by LVM (or whatever else we support) they can
> > always
> > do it manually?
> > 
> 
> In fact, now that bluestore only requires a few small files and symlinks 
> to remain in /var/lib/ceph/osd/* without the extra requirements for 
> xattrs support and xfs, why not simply leave those folders on OS root 
> filesystem and only point symlinks to bluestore block and db devices? 
> That would simplify the osd deployment so much - and the symlinks can 
> then point to /dev/disk/by-uuid or by-path or lvm path or whatever. The 
> only downside for this approach that I see is that disks themselves 
> would no longer be transferable between the hosts as those few files 
> that describe the OSD are no longer on the disk itself.

:) this is exactly what we're doing, actually:

	https://github.com/ceph/ceph/pull/18256

We plan to backport this to luminous, hopefully in time for the next 
point release.

dm-crypt is still slightly annoying to set up, but it will still be much 
easier.

sage

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
  2017-10-16 22:32     ` Anthony Verevkin
  2017-10-16 22:34       ` [ceph-users] " Sage Weil
@ 2017-10-16 23:25       ` Christian Balzer
  1 sibling, 0 replies; 14+ messages in thread
From: Christian Balzer @ 2017-10-16 23:25 UTC (permalink / raw)
  To: ceph-users-idqoXFIVOFJgJs9I8MT0rw; +Cc: ceph-devel

On Mon, 16 Oct 2017 18:32:06 -0400 (EDT) Anthony Verevkin wrote:

> > From: "Sage Weil" <sage-BnTBU8nroG7k1uMJSBkQmQ@public.gmane.org>
> > To: "Alfredo Deza" <adeza-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> > Cc: "ceph-devel" <ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>, ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
> > Sent: Monday, October 9, 2017 11:09:29 AM
> > Subject: [ceph-users] killing ceph-disk [was Re: ceph-volume: migration and disk partition support]
> > 
> > To put this in context, the goal here is to kill ceph-disk in mimic.
> >   
> 
>  
> > Perhaps the "out" here is to support a "dir" option where the user
> > can
> > manually provision and mount an OSD on /var/lib/ceph/osd/*, with
> > 'journal'
> > or 'block' symlinks, and ceph-volume will do the last bits that
> > initialize
> > the filestore or bluestore OSD from there.  Then if someone has a
> > scenario
> > that isn't captured by LVM (or whatever else we support) they can
> > always
> > do it manually?
> >   
> 
> 
> In fact, now that bluestore only requires a few small files and symlinks to remain in /var/lib/ceph/osd/* without the extra requirements for xattrs support and xfs, why not simply leave those folders on OS root filesystem and only point symlinks to bluestore block and db devices? That would simplify the osd deployment so much - and the symlinks can then point to /dev/disk/by-uuid or by-path or lvm path or whatever. The only downside for this approach that I see is that disks themselves would no longer be transferable between the hosts as those few files that describe the OSD are no longer on the disk itself.
> 

If the OS is on a RAID1 the chances of things being lost entirely is
reduced very much, so moving OSDs to another host becomes a trivial
exercise one would assume.

But yeah, this sounds fine to me, as it's extremely flexible.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi-FW+hd8ioUD0@public.gmane.org   	Rakuten Communications

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2017-10-16 23:25 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-06 16:56 ceph-volume: migration and disk partition support Alfredo Deza
2017-10-09 15:09 ` killing ceph-disk [was Re: ceph-volume: migration and disk partition support] Sage Weil
     [not found]   ` <alpine.DEB.2.11.1710091448500.26711-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
2017-10-10  0:50     ` Christian Balzer
     [not found]       ` <20171010095014.64940afa-9yhXNL7Kh0lSCLKNlHTxZM8NsWr+9BEh@public.gmane.org>
2017-10-10 11:51         ` Alfredo Deza
2017-10-10 12:14           ` [ceph-users] " Willem Jan Withagen
     [not found]             ` <a1c7e8df-44a1-a03f-ddbc-2bb66494256a-dOtk1Lsa4IaEVqv0pETR8A@public.gmane.org>
2017-10-10 12:21               ` Alfredo Deza
     [not found]                 ` <CAC-Np1wLx-fqJ2kJv9TT0Hny520bCLZAcen=8rTU0zsS-zGJhQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-10-10 12:42                   ` Willem Jan Withagen
2017-10-12 11:39     ` Matthew Vernon
2017-10-16 22:32     ` Anthony Verevkin
2017-10-16 22:34       ` [ceph-users] " Sage Weil
2017-10-16 23:25       ` Christian Balzer
     [not found] ` <CAC-Np1xUasg0C34HEuCFims18zPyRZ17p9-mp3xYCUETAYnn_A-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2017-10-10  7:28   ` ceph-volume: migration and disk partition support Stefan Kooman
2017-10-10 12:25     ` [ceph-users] " Alfredo Deza
2017-10-10  8:14 ` Dan van der Ster

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.