* announcing ceph-helm (ceph on kubernetes orchestration)
@ 2017-10-25 19:09 Sage Weil
       [not found] ` <alpine.DEB.2.11.1710251848140.22592-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
  0 siblings, 1 reply; 6+ messages in thread
From: Sage Weil @ 2017-10-25 19:09 UTC (permalink / raw)
  To: ceph-devel; +Cc: ceph-users

There is a new repo under the ceph org, ceph-helm, which includes helm 
charts for deploying ceph on kubernetes.  The code is based on the ceph 
charts from openstack-helm, but we've moved them into their own upstream 
repo here so that they can be developed more quickly and independently 
from the openstack-helm work.  The code has already evolved a fair bit, 
mostly to support luminous and fix a range of issues:

	https://github.com/ceph/ceph-helm/tree/master/ceph/ceph
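
To make the deployment flow a little more concrete, a values override for 
the chart might look something like the sketch below.  The key names here 
are illustrative guesses, not the chart's actual schema; check the chart's 
own values.yaml for the real knobs:

```yaml
# ceph-overrides.yaml -- hypothetical values override for the ceph chart.
# Key names are illustrative assumptions; consult the chart's values.yaml
# for the real schema before using anything like this.
network:
  public: 10.0.0.0/24      # client-facing network CIDR
  cluster: 10.0.0.0/24     # OSD replication network CIDR
osd_devices:
  - name: dev-sdb
    device: /dev/sdb       # raw block device handed to an OSD
    zap: "1"               # wipe the device before use
```

A file like this would then be passed to helm at install time (e.g. with 
`helm install -f ceph-overrides.yaml ...`).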

The repo is a fork of the upstream kubernetes/charts.git repo with an eye 
toward eventually merging the chart upstream into that repo.  How useful 
that would be in practice is not entirely clear to me since the version in 
the ceph-helm repo will presumably always be more up to date and users 
have to point to *some* source for the chart either way.  Also the current 
structure of the files in the repo is carried over from openstack-helm, 
which uses the helm-toolkit stuff and isn't in the correct form for the 
upstream charts.git.  Suggestions/input here on what direction makes more 
sense would be welcome!

There are also some docs on getting a ceph cluster up in kubernetes using 
these charts at

	https://github.com/ceph/ceph/pull/18520
	http://docs.ceph.com/ceph-prs/18520/start/kube-helm/

that should be merged shortly.  They're not terribly detailed and don't 
cover much on the operations side yet, but all of that is coming.

A very rough sketch of the direction currently being considered for 
running ceph in kubernetes is here:

	http://pad.ceph.com/p/containers

and there is a trello board here:

	https://trello.com/b/kcXOllJp/kubehelm

All of this builds on the container image that Sebastien has been working 
on for some time, that has recently been renamed from ceph-docker -> 
ceph-container

	https://github.com/ceph/ceph-container

Dan is working on getting an image registry up at registry.ceph.com so 
that we can publish test build images, releases, or both.

We also have a daily sync-up call for the folks who are actively working 
on this.

That's all for now!  :)
sage


^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: announcing ceph-helm (ceph on kubernetes orchestration)
       [not found] ` <alpine.DEB.2.11.1710251848140.22592-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
@ 2017-10-25 19:26   ` Hans van den Bogert
  2017-10-25 19:40     ` [ceph-users] " Sage Weil
  0 siblings, 1 reply; 6+ messages in thread
From: Hans van den Bogert @ 2017-10-25 19:26 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ



Very interesting.
I've been toying around with Rook.io [1]. Did you know of this project, and
if so, can you tell whether ceph-helm and Rook.io have similar goals?

Regards,

Hans

[1] https://rook.io/



_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
  2017-10-25 19:26   ` Hans van den Bogert
@ 2017-10-25 19:40     ` Sage Weil
  2017-11-03 21:10       ` Bassam Tabbara
       [not found]       ` <alpine.DEB.2.11.1710251928080.22592-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
  0 siblings, 2 replies; 6+ messages in thread
From: Sage Weil @ 2017-10-25 19:40 UTC (permalink / raw)
  To: Hans van den Bogert; +Cc: ceph-devel, ceph-users


On Wed, 25 Oct 2017, Hans van den Bogert wrote:
> Very interesting. I've been toying around with Rook.io [1]. Did you know of
> this project, and if so can you tell if ceph-helm and Rook.io have similar goals?

Similar but a bit different.

Probably the main difference is that ceph-helm aims to run Ceph as part of 
the container infrastructure.  The containers are privileged so they can 
interact with hardware where needed (e.g., lvm for dm-crypt) and the 
cluster runs on the host network.  We use kubernetes for some orchestration: 
kube is a bit of a headache for mons and osds but will be very helpful for 
scheduling everything else: mgrs, rgw, rgw-nfs, iscsi, mds, ganesha, 
samba, rbd-mirror, etc.

Rook, as I understand it at least (the rook folks on the list can speak up 
here), aims to run Ceph more as a tenant of kubernetes.  The cluster runs 
in the container network space, and the aim is to be able to deploy ceph 
more like an unprivileged application on e.g., a public cloud providing 
kubernetes as the cloud api.

The other difference is around rook-operator, which is the thing that lets 
you declare what you want (ceph clusters, pools, etc) via kubectl and goes 
off and creates the cluster(s) and tells it/them what to do.  It makes the 
storage look like it is tightly integrated with and part of kubernetes but 
means that kubectl becomes the interface for ceph cluster management.  

Some of that seems useful to me (still developing opinions here!) and 
perhaps isn't so different than the declarations in your chart's 
values.yaml but I'm unsure about the wisdom of going too far down the road 
of administering ceph via yaml.
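
For readers who haven't seen the rook-operator pattern: you post a custom 
resource describing the cluster you want, and the operator reconciles it.  
A sketch of such a declaration is below; the API group and field names 
approximate Rook's early v1alpha1 CRDs from memory and should be treated 
as illustrative, not authoritative:

```yaml
# rook-cluster.yaml -- illustrative sketch of declaring a Ceph cluster
# via a custom resource.  Field names approximate Rook's early v1alpha1
# CRD and are assumptions, not a verified schema.
apiVersion: rook.io/v1alpha1
kind: Cluster
metadata:
  name: rook
  namespace: rook
spec:
  dataDirHostPath: /var/lib/rook   # host path for mon/osd state
  storage:
    useAllNodes: true              # run OSDs on every schedulable node
    useAllDevices: false           # don't grab every raw device it finds
```

Something like `kubectl create -f rook-cluster.yaml` then hands the 
declaration to the operator, which creates and manages the daemons.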

Anyway, I'm still pretty new to kubernetes-land and very interested in 
hearing what people are interested in or looking for here!

sage




* Re: [ceph-users] announcing ceph-helm (ceph on kubernetes orchestration)
  2017-10-25 19:40     ` [ceph-users] " Sage Weil
@ 2017-11-03 21:10       ` Bassam Tabbara
       [not found]       ` <alpine.DEB.2.11.1710251928080.22592-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
  1 sibling, 0 replies; 6+ messages in thread
From: Bassam Tabbara @ 2017-11-03 21:10 UTC (permalink / raw)
  To: Sage Weil; +Cc: Hans van den Bogert, ceph-devel, ceph-users

(sorry for the late response, just catching up on ceph-users)

> Probably the main difference is that ceph-helm aims to run Ceph as part of 
> the container infrastructure.  The containers are privileged so they can 
> interact with hardware where needed (e.g., lvm for dm-crypt) and the 
> cluster runs on the host network.  We use kubernetes for some orchestration: 
> kube is a bit of a headache for mons and osds but will be very helpful for 
> scheduling everything else: mgrs, rgw, rgw-nfs, iscsi, mds, ganesha, 
> samba, rbd-mirror, etc.
> 
> Rook, as I understand it at least (the rook folks on the list can speak up 
> here), aims to run Ceph more as a tenant of kubernetes.  The cluster runs 
> in the container network space, and the aim is to be able to deploy ceph 
> more like an unprivileged application on e.g., a public cloud providing 
> kubernetes as the cloud api.
> 

Yes, Rook’s goal is to run wherever Kubernetes runs without making changes at the host level. Eventually we plan to remove the need to run some of the containers privileged, and to work automatically with different kernel versions and heterogeneous environments. It's fair to think of Rook as an application of Kubernetes; as a result you could run it on AWS, Google, bare metal, or wherever.

> The other difference is around rook-operator, which is the thing that lets 
> you declare what you want (ceph clusters, pools, etc) via kubectl and goes 
> off and creates the cluster(s) and tells it/them what to do.  It makes the 
> storage look like it is tightly integrated with and part of kubernetes but 
> means that kubectl becomes the interface for ceph cluster management.  

Rook extends Kubernetes to understand storage concepts like Pool, Object Store, FileSystems. Our goal is for storage to be integrated deeply into Kubernetes. That said, you can easily launch the Rook toolbox and use the ceph tools at any point. I don’t think the goal is for Rook to replace the ceph tools, but instead to offer a Kubernetes-native alternative to them.
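
To illustrate what "extends Kubernetes to understand storage concepts" 
looks like in practice, a pool declared through such a CRD might look 
roughly like this (the field names approximate Rook's early CRDs and are 
an assumption, not the real schema):

```yaml
# rook-pool.yaml -- illustrative sketch of a Pool custom resource.
# Field names are assumptions based on Rook's early CRDs.
apiVersion: rook.io/v1alpha1
kind: Pool
metadata:
  name: replicapool
  namespace: rook
spec:
  replication:
    size: 3   # keep three replicas of each object
```

The operator watches for objects of this kind and performs the 
corresponding pool-creation operations against the cluster behind the 
scenes.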

> Some of that seems useful to me (still developing opinions here!) and 
> perhaps isn't so different than the declarations in your chart's 
> values.yaml but I'm unsure about the wisdom of going too far down the road 
> of administering ceph via yaml.
> 
> Anyway, I'm still pretty new to kubernetes-land and very interested in 
> hearing what people are interested in or looking for here!

It would be great to find ways to bring these two projects closer together.

Bassam



* Re: announcing ceph-helm (ceph on kubernetes orchestration)
       [not found]       ` <alpine.DEB.2.11.1710251928080.22592-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
@ 2017-11-03 21:13         ` Bassam Tabbara
       [not found]           ` <FD899CA1-D199-4E3F-AE28-292B0F9B062B-7zw3Wi+BkiFBDgjK7y7TUQ@public.gmane.org>
  0 siblings, 1 reply; 6+ messages in thread
From: Bassam Tabbara @ 2017-11-03 21:13 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-devel, ceph-users-Qp0mS5GaXlQ

(sorry for the late response, just catching up on ceph-users)

> Probably the main difference is that ceph-helm aims to run Ceph as part of 
> the container infrastructure.  The containers are privileged so they can 
> interact with hardware where needed (e.g., lvm for dm-crypt) and the 
> cluster runs on the host network.  We use kubernetes for some orchestration: 
> kube is a bit of a headache for mons and osds but will be very helpful for 
> scheduling everything else: mgrs, rgw, rgw-nfs, iscsi, mds, ganesha, 
> samba, rbd-mirror, etc.
> 
> Rook, as I understand it at least (the rook folks on the list can speak up 
> here), aims to run Ceph more as a tenant of kubernetes.  The cluster runs 
> in the container network space, and the aim is to be able to deploy ceph 
> more like an unprivileged application on e.g., a public cloud providing 
> kubernetes as the cloud api.

Yes, Rook’s goal is to run wherever Kubernetes runs without making changes at the host level. Eventually we plan to remove the need to run some of the containers privileged, and to work automatically with different kernel versions and heterogeneous environments. It's fair to think of Rook as an application of Kubernetes; as a result you could run it on AWS, Google, bare metal, or wherever.

> The other difference is around rook-operator, which is the thing that lets 
> you declare what you want (ceph clusters, pools, etc) via kubectl and goes 
> off and creates the cluster(s) and tells it/them what to do.  It makes the 
> storage look like it is tightly integrated with and part of kubernetes but 
> means that kubectl becomes the interface for ceph cluster management.  

Rook extends Kubernetes to understand storage concepts like Pool, Object Store, FileSystems. Our goal is for storage to be integrated deeply into Kubernetes. That said, you can easily launch the Rook toolbox and use the ceph tools at any point. I don’t think the goal is for Rook to replace the ceph tools, but instead to offer a Kubernetes-native alternative to them.

> Some of that seems useful to me (still developing opinions here!) and 
> perhaps isn't so different than the declarations in your chart's 
> values.yaml but I'm unsure about the wisdom of going too far down the road 
> of administering ceph via yaml.
> 
> Anyway, I'm still pretty new to kubernetes-land and very interested in 
> hearing what people are interested in or looking for here!

It would be great to find ways to bring these two projects closer together.

Bassam



* Re: announcing ceph-helm (ceph on kubernetes orchestration)
       [not found]           ` <FD899CA1-D199-4E3F-AE28-292B0F9B062B-7zw3Wi+BkiFBDgjK7y7TUQ@public.gmane.org>
@ 2017-11-06  9:25             ` Hunter Nield
  0 siblings, 0 replies; 6+ messages in thread
From: Hunter Nield @ 2017-11-06  9:25 UTC (permalink / raw)
  To: Bassam Tabbara; +Cc: Sage Weil, ceph-devel, ceph-users-Qp0mS5GaXlQ



I’m not sure how I missed this earlier on the lists, but having done a lot
of work on Ceph helm charts, this is of definite interest to us. We’ve been
running various states of Ceph in Docker and Kubernetes (in production
environments) for over a year now.

There is a lot of overlap between Rook and the Ceph-related projects
(ceph-docker/ceph-container and now ceph-helm), and I agree with Bassam
about finding ways of bringing things closer. Having felt (and contributed
to) the pain of the complex ceph-container entrypoint scripts, the
importance of the simpler initial configuration and user experience with
Rook and its Operator approach can’t be overstated. There is a definite
need for a vanilla project to run Ceph on Kubernetes, but the most useful
part lies in encapsulating the day-to-day operation of a Ceph cluster in a
way that builds on the strengths of Kubernetes (CRDs, Operators, dynamic
scaling, etc).

Looking forward to seeing where this goes (and joining the discussion)

Hunter





end of thread, other threads:[~2017-11-06  9:25 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-25 19:09 announcing ceph-helm (ceph on kubernetes orchestration) Sage Weil
     [not found] ` <alpine.DEB.2.11.1710251848140.22592-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
2017-10-25 19:26   ` Hans van den Bogert
2017-10-25 19:40     ` [ceph-users] " Sage Weil
2017-11-03 21:10       ` Bassam Tabbara
     [not found]       ` <alpine.DEB.2.11.1710251928080.22592-ie3vfNGmdjePKud3HExfWg@public.gmane.org>
2017-11-03 21:13         ` Bassam Tabbara
     [not found]           ` <FD899CA1-D199-4E3F-AE28-292B0F9B062B-7zw3Wi+BkiFBDgjK7y7TUQ@public.gmane.org>
2017-11-06  9:25             ` Hunter Nield
