From: Sage Weil <sweil@redhat.com>
To: Hans van den Bogert <hansbogert@gmail.com>
Cc: ceph-devel@vger.kernel.org, ceph-users@ceph.com
Subject: Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
Date: Wed, 25 Oct 2017 19:40:21 +0000 (UTC) [thread overview]
Message-ID: <alpine.DEB.2.11.1710251928080.22592@piezo.us.to> (raw)
In-Reply-To: <CAOugy1vGkPDf7w_+ZMKpA7R2r-MV71=3KnaQ2V6WTaLg5fmTyA@mail.gmail.com>
On Wed, 25 Oct 2017, Hans van den Bogert wrote:
> Very interesting. I've been toying around with Rook.io [1]. Did you know of this project, and if so, can you tell if ceph-helm
> and Rook.io have similar goals?
Similar but a bit different.
Probably the main difference is that ceph-helm aims to run Ceph as part of
the container infrastructure. The containers are privileged so they can
interact with hardware where needed (e.g., lvm for dm-crypt) and the
cluster runs on the host network. We use kubernetes for some orchestration:
kube is a bit of a headache for mons and osds but will be very helpful for
scheduling everything else: mgrs, rgw, rgw-nfs, iscsi, mds, ganesha,
samba, rbd-mirror, etc.
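In pod-spec terms, that approach looks roughly like the fragment below. This is an illustrative sketch of the general idea only; the image name and field values are assumptions, not copied from the ceph-helm chart:

```yaml
# Hypothetical OSD pod fragment: runs on the host network and privileged
# so the daemon can touch block devices and run lvm/dm-crypt on the node.
apiVersion: v1
kind: Pod
metadata:
  name: ceph-osd-example
spec:
  hostNetwork: true             # cluster traffic uses the host's network stack
  containers:
  - name: osd
    image: ceph/daemon:latest   # image name is an assumption
    securityContext:
      privileged: true          # required for raw device and lvm access
    volumeMounts:
    - name: devices
      mountPath: /dev
  volumes:
  - name: devices
    hostPath:
      path: /dev                # expose the host's devices to the container
```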
Rook, as I understand it at least (the rook folks on the list can speak up
here), aims to run Ceph more as a tenant of kubernetes. The cluster runs
in the container network space, and the aim is to be able to deploy ceph
more like an unprivileged application on e.g., a public cloud providing
kubernetes as the cloud api.
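As a rough illustration of that declarative, tenant-style model: you describe the cluster as a custom resource and apply it with kubectl, and the operator reconciles it. The API group, kind, and fields below are assumptions based on rook's early CRDs and may not match what rook actually ships:

```yaml
# Hypothetical rook-style cluster declaration, applied with e.g.
#   kubectl create -f cluster.yaml
# The operator watches for this resource and creates the ceph daemons.
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: my-rook-cluster
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook   # where daemons keep their state
  storage:
    useAllNodes: true              # run storage on every schedulable node
    useAllDevices: false           # only use explicitly configured devices
```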
The other difference is around rook-operator, which is the thing that lets
you declare what you want (ceph clusters, pools, etc) via kubectl and goes
off and creates the cluster(s) and tells it/them what to do. It makes the
storage look like it is tightly integrated with and part of kubernetes but
means that kubectl becomes the interface for ceph cluster management.
Some of that seems useful to me (still developing opinions here!), and
perhaps isn't so different from the declarations in your chart's
values.yaml, but I'm unsure about the wisdom of going too far down the road
of administering ceph via yaml.
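For comparison, the helm-side equivalent is overriding chart values at install time, along these lines. The keys and values here are hypothetical, for illustration only, and are not the actual ceph-helm values schema:

```yaml
# Hypothetical overrides passed as `helm install -f values.yaml ...`
# Declares networks and which devices become OSDs, rather than
# driving the cluster through kubectl after the fact.
network:
  public: 10.0.0.0/24
  cluster: 10.0.0.0/24
osd_devices:
- name: dev-sdb
  device: /dev/sdb
  zap: "1"        # wipe the device before use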
Anyway, I'm still pretty new to kubernetes-land and very interested in
hearing what people are interested in or looking for here!
sage
> Regards,
>
> Hans
>
> [1] https://rook.io/
>
> On 25 Oct 2017 21:09, "Sage Weil" <sweil@redhat.com> wrote:
> There is a new repo under the ceph org, ceph-helm, which includes helm
> charts for deploying ceph on kubernetes. The code is based on the ceph
> charts from openstack-helm, but we've moved them into their own upstream
> repo here so that they can be developed more quickly and independently
> from the openstack-helm work. The code has already evolved a fair bit,
> mostly to support luminous and fix a range of issues:
>
> https://github.com/ceph/ceph-helm/tree/master/ceph/ceph
>
> The repo is a fork of the upstream kubernetes/charts.git repo with an eye
> toward eventually merging the chart upstream into that repo. How useful
> that would be in practice is not entirely clear to me since the version in
> the ceph-helm repo will presumably always be more up to date and users
> have to point to *some* source for the chart either way. Also the current
> structure of the files in the repo is carried over from openstack-helm,
> which uses the helm-toolkit stuff and isn't in the correct form for the
> upstream charts.git. Suggestions/input here on what direction makes more
> sense would be welcome!
>
> There are also some docs on getting a ceph cluster up in kubernetes using
> these charts at
>
> https://github.com/ceph/ceph/pull/18520
> http://docs.ceph.com/ceph-prs/18520/start/kube-helm/
>
> that should be merged shortly. Not terribly detailed and we're not
> covering much on the operations side yet, but that all is coming.
>
> A very rough sketch of the direction currently being considered for
> running ceph in kubernetes is here:
>
> http://pad.ceph.com/p/containers
>
> and there is a trello board here
>
> https://trello.com/b/kcXOllJp/kubehelm
>
> All of this builds on the container image that Sebastien has been working
> on for some time, which was recently renamed from ceph-docker to
> ceph-container:
>
> https://github.com/ceph/ceph-container
>
> Dan is working on getting an image registry up at registry.ceph.com so
> that we can publish test build images, releases, or both.
>
> We also have a daily sync up call for the folks who are actively working
> on this.
>
> That's all for now! :)
> sage
>
Thread overview: 6+ messages
2017-10-25 19:09 announcing ceph-helm (ceph on kubernetes orchestration) Sage Weil
2017-10-25 19:26 ` Hans van den Bogert
2017-10-25 19:40 ` Sage Weil [this message]
2017-11-03 21:10 ` [ceph-users] " Bassam Tabbara
2017-11-03 21:13 ` Bassam Tabbara
2017-11-06  9:25 ` Hunter Nield