* ceph-volume: migration and disk partition support
@ 2017-10-06 16:56 Alfredo Deza
From: Alfredo Deza @ 2017-10-06 16:56 UTC
  To: ceph-devel, ceph-users

Hi,

Now that ceph-volume is part of the Luminous release, we've been able
to provide filestore support for LVM-based OSDs. We make use of LVM
metadata (tags) to store everything the OSD needs to come up, so the
process no longer relies on UDEV and GPT labels (unlike ceph-disk).
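
For illustration, here is a rough sketch of what reading that metadata
back could look like. This is not ceph-volume's actual code, and the
"ceph.osd_id" tag name is only an example of the scheme; it relies on
nothing but the lvs CLI and its JSON output:

    # Sketch: discover OSD logical volumes by their LVM tags instead of
    # relying on UDEV events or GPT partition labels.
    import json
    import subprocess

    def list_osd_lvs():
        out = subprocess.check_output(
            ['lvs', '--reportformat', 'json',
             '-o', 'lv_name,vg_name,lv_tags'])
        lvs = json.loads(out)['report'][0]['lv']
        osds = []
        for lv in lvs:
            tags = dict(t.split('=', 1)
                        for t in lv['lv_tags'].split(',') if '=' in t)
            if 'ceph.osd_id' in tags:  # example tag name
                osds.append((lv['vg_name'], lv['lv_name'], tags))
        return osds

    for vg, lv, tags in list_osd_lvs():
        print('%s/%s -> osd.%s' % (vg, lv, tags['ceph.osd_id']))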

Bluestore support should be the next step for `ceph-volume lvm`, and
while that is planned, we are thinking of ways to address the current
caveats (like OSDs not coming up) for clusters that have deployed OSDs
with ceph-disk.

--- New clusters ---
The `ceph-volume lvm` deployment is straightforward (it is already
supported in ceph-ansible), but there is currently no support for
plain disks (with partitions), like there is with ceph-disk.

Is there a pressing interest in supporting plain disks with
partitions? Or is supporting only LVM-based OSDs fine?

--- Existing clusters ---
Migration to ceph-volume, even with plain disk support, means
re-creating the OSD from scratch, which would end up moving data.
There is no way to make a GPT/ceph-disk OSD become a ceph-volume one
without starting from scratch.

A temporary workaround would be to provide a way for existing OSDs to
be brought up without UDEV and ceph-disk, by adding logic to
ceph-volume that can activate them with systemd directly. This
wouldn't make them LVM-based, nor would it mean there is direct
support for them; it is just a stopgap so that they can start without
UDEV and ceph-disk.
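
To make that idea concrete, here is a very rough sketch of what such
activation logic could do (assumptions only, not a proposed
implementation; the device path and OSD id below are made up): mount
the already-prepared data partition and hand the OSD to systemd
directly, skipping the UDEV trigger path that ceph-disk relies on:

    # Sketch: start an existing ceph-disk OSD without UDEV by mounting
    # its data partition and driving the ceph-osd systemd unit directly.
    import os
    import subprocess

    def activate_legacy_osd(osd_id, data_partition, cluster='ceph'):
        mount_point = '/var/lib/ceph/osd/%s-%s' % (cluster, osd_id)
        if not os.path.ismount(mount_point):
            if not os.path.isdir(mount_point):
                os.makedirs(mount_point)
            subprocess.check_call(['mount', data_partition, mount_point])
        # Enable and start the unit ourselves instead of waiting for the
        # udev rules that ceph-disk installs to trigger activation.
        subprocess.check_call(['systemctl', 'enable',
                               'ceph-osd@%s' % osd_id])
        subprocess.check_call(['systemctl', 'start',
                               'ceph-osd@%s' % osd_id])

    # e.g. activate_legacy_osd('0', '/dev/sdb1')  # hypothetical values

The real logic would of course live inside ceph-volume and deal with
journals, dmcrypt, and error handling; the above only shows the shape
of the idea.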

I'm interested in what current users might look for here: is it fine
to provide this workaround if the issues are that problematic? Or is
it OK to plan a migration towards ceph-volume OSDs?

-Alfredo


Thread overview: 14+ messages
2017-10-06 16:56 ceph-volume: migration and disk partition support Alfredo Deza
2017-10-09 15:09 ` killing ceph-disk [was Re: ceph-volume: migration and disk partition support] Sage Weil
2017-10-10  0:50   ` Christian Balzer
2017-10-10 11:51     ` Alfredo Deza
2017-10-10 12:14       ` [ceph-users] " Willem Jan Withagen
2017-10-10 12:21         ` Alfredo Deza
2017-10-10 12:42           ` Willem Jan Withagen
2017-10-12 11:39   ` Matthew Vernon
2017-10-16 22:32   ` Anthony Verevkin
2017-10-16 22:34     ` [ceph-users] " Sage Weil
2017-10-16 23:25     ` Christian Balzer
2017-10-10  7:28 ` ceph-volume: migration and disk partition support Stefan Kooman
2017-10-10 12:25   ` [ceph-users] " Alfredo Deza
2017-10-10  8:14 ` Dan van der Ster
