From: Andreas Calminder
Subject: Re: ceph-disk is now deprecated
Date: Tue, 28 Nov 2017 13:22:08 +0100
To: Alfredo Deza
Cc: ceph-users, ceph-devel

> For the `simple` sub-command there is no prepare/activate, it is just
> a way of taking over management of an already deployed OSD. For *new*
> OSDs, yes, we are implying that we are going only with Logical Volumes
> for data devices. It is a bit more flexible for Journals, block.db,
> and block.wal, as those can be either logical volumes or GPT
> partitions (ceph-volume will not create these for you).

OK, so if I understand this correctly, for future one-device-per-OSD
setups I would create a volume group per device before handing it over
to ceph-volume, to get the "same" functionality as ceph-disk (rough
sketch of what I mean below). I understand the flexibility aspect of
this; my setup will just gain an extra step that sets up LVM on the OSD
devices, which is fine.

Apologies if I missed the information, but is it possible to get
command output as JSON, something like "ceph-disk list --format json"?
That is quite helpful when setting things up through Ansible (example
below).
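
To be concrete, here is roughly the extra LVM step I picture for a
single data device before handing it to ceph-volume. The device path
and the VG/LV names are placeholders, and I'm assuming the 12.2.2
ceph-volume lvm behaviour described above:

    # one volume group per OSD data device, one LV spanning all of it
    vgcreate ceph-osd-sdb /dev/sdb
    lvcreate --name data --extents 100%FREE ceph-osd-sdb

    # hand the logical volume over to ceph-volume
    ceph-volume lvm create --data ceph-osd-sdb/data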
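
And on the JSON question, this is the sort of thing I mean. With
ceph-disk today I can do (the pretty-printer is only for illustration):

    # structured output that is easy to consume from a playbook,
    # instead of scraping the human-readable listing
    ceph-disk list --format json | python -m json.tool

An equivalent on the ceph-volume sub-commands would make driving it
from Ansible much easier.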

Thanks,
Andreas

On 28 November 2017 at 12:47, Alfredo Deza wrote:
> On Tue, Nov 28, 2017 at 1:56 AM, Andreas Calminder wrote:
>> Hello,
>> Thanks for the heads-up. As someone who is currently maintaining a
>> Jewel cluster and is in the process of setting up a shiny new
>> Luminous cluster (writing Ansible roles along the way to make the
>> setup reproducible), I immediately proceeded to look into
>> ceph-volume, and I have some questions/concerns, mainly due to my
>> own setup, which is one OSD per device, simple.
>>
>> Running ceph-volume in Luminous 12.2.1 suggests there is only the
>> lvm subcommand available, and the man page only covers lvm. The
>> online documentation http://docs.ceph.com/docs/master/ceph-volume/
>> lists simple, however it is lacking some of the ceph-disk commands,
>> like 'prepare', which seems crucial in the 'simple' scenario. Does
>> the ceph-disk deprecation imply that lvm is mandatory for using
>> devices with Ceph, or are the documentation and tool features just
>> lagging behind, i.e. the 'simple' parts will be added well in time
>> for Mimic and during the Luminous lifecycle? Or am I missing
>> something?
>
> In your case, all your existing OSDs will be able to be managed by
> `ceph-volume` once scanned and the information persisted, so anything
> from Jewel should still work. For 12.2.1 you are right, that command
> is not yet available; it will be present in 12.2.2.
>
> For the `simple` sub-command there is no prepare/activate, it is just
> a way of taking over management of an already deployed OSD. For *new*
> OSDs, yes, we are implying that we are going only with Logical Volumes
> for data devices. It is a bit more flexible for Journals, block.db,
> and block.wal, as those can be either logical volumes or GPT
> partitions (ceph-volume will not create these for you).
>
>> Best regards,
>> Andreas
>>
>> On 27 November 2017 at 14:36, Alfredo Deza wrote:
>>> For the upcoming Luminous release (12.2.2), ceph-disk will be
>>> officially in 'deprecated' mode (bug fixes only). A large banner
>>> with deprecation information has been added, which will try to
>>> raise awareness.
>>>
>>> We are strongly suggesting using ceph-volume for new (and old) OSD
>>> deployments. The only current exceptions to this are encrypted OSDs
>>> and FreeBSD systems.
>>>
>>> Encryption support is planned and will be coming soon to ceph-volume.
>>>
>>> A few items to consider:
>>>
>>> * ceph-disk is expected to be fully removed by the Mimic release
>>> * Existing OSDs are supported by ceph-volume. They can be "taken over" [0]
>>> * ceph-ansible already fully supports ceph-volume and will soon default to it
>>> * ceph-deploy support is planned and should be fully implemented soon
>>>
>>> [0] http://docs.ceph.com/docs/master/ceph-volume/simple/
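
P.S. For my existing Jewel OSDs, if I read the "taken over" point and
[0] correctly, the workflow once 12.2.2 is out would be roughly the
following (the OSD id and fsid are placeholders for whatever the
cluster actually reports; I'll double-check against the linked docs):

    # capture a running ceph-disk OSD's metadata and persist it
    ceph-volume simple scan /var/lib/ceph/osd/ceph-0

    # re-enable that OSD through ceph-volume using the captured id/fsid
    ceph-volume simple activate 0 0c3e135b-5d4c-4a2b-9c2d-7d0a2a4e5f10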