* Experiences with Ceph at the June'14 issue of USENIX ; login:
@ 2014-06-02 18:32 Filippos Giannakos
  2014-06-02 18:51 ` [ceph-users] " Patrick McGarry
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Filippos Giannakos @ 2014-06-02 18:32 UTC (permalink / raw)
  To: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

Hello all,

As you may already know, we have been using Ceph for quite some time now to back
the ~okeanos [1] public cloud service, which is powered by Synnefo [2].

A few months ago we were kindly invited to write an article about our
experiences with Ceph for the USENIX ;login: magazine. The article is out in
this month's (June '14) issue and we are really happy to share it with you all:

https://www.usenix.org/publications/login/june14/giannakos

In the article we describe our storage needs, how we use Ceph and how it has
worked so far. I hope you enjoy reading it.

Kind Regards,
Filippos

[1] http://okeanos.grnet.gr
[2] http://www.synnefo.org

-- 
Filippos
<philipgian-Sqt7GMbKoOQ@public.gmane.org>


* Re: [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login:
  2014-06-02 18:32 Experiences with Ceph at the June'14 issue of USENIX ; login: Filippos Giannakos
@ 2014-06-02 18:51 ` Patrick McGarry
  2014-06-02 21:40 ` Experiences with Ceph at the June'14 issue of USENIX ;login: Robin H. Johnson
  2014-06-02 22:37 ` [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login: Ian Colle
  2 siblings, 0 replies; 7+ messages in thread
From: Patrick McGarry @ 2014-06-02 18:51 UTC (permalink / raw)
  To: Filippos Giannakos; +Cc: ceph-users, Ceph Devel

This is great. Thanks for sharing, Filippos!


Best Regards,

Patrick McGarry
Director, Community || Inktank
http://ceph.com  ||  http://inktank.com
@scuttlemonkey || @ceph || @inktank


On Mon, Jun 2, 2014 at 2:32 PM, Filippos Giannakos <philipgian@grnet.gr> wrote:
> Hello all,
>
> As you may already know, we have been using Ceph for quite some time now to back
> the ~okeanos [1] public cloud service, which is powered by Synnefo [2].
>
> A few months ago we were kindly invited to write an article about our
> experiences with Ceph for the USENIX ;login: magazine. The article is out in
> this month's (June '14) issue and we are really happy to share it with you all:
>
> https://www.usenix.org/publications/login/june14/giannakos
>
> In the article we describe our storage needs, how we use Ceph and how it has
> worked so far. I hope you enjoy reading it.
>
> Kind Regards,
> Filippos
>
> [1] http://okeanos.grnet.gr
> [2] http://www.synnefo.org
>
> --
> Filippos
> <philipgian@grnet.gr>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: Experiences with Ceph at the June'14 issue of USENIX ;login:
  2014-06-02 18:32 Experiences with Ceph at the June'14 issue of USENIX ; login: Filippos Giannakos
  2014-06-02 18:51 ` [ceph-users] " Patrick McGarry
@ 2014-06-02 21:40 ` Robin H. Johnson
  2014-06-03  9:12   ` Constantinos Venetsanopoulos
  2014-06-02 22:37 ` [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login: Ian Colle
  2 siblings, 1 reply; 7+ messages in thread
From: Robin H. Johnson @ 2014-06-02 21:40 UTC (permalink / raw)
  To: ceph-devel

On Mon, Jun 02, 2014 at 09:32:19PM +0300,  Filippos Giannakos wrote:
> As you may already know, we have been using Ceph for quite some time now to back
> the ~okeanos [1] public cloud service, which is powered by Synnefo [2].
(Background info for other readers: Synnefo is a cloud layer on top of
Ganeti).

> In the article we describe our storage needs, how we use Ceph and how it has
> worked so far. I hope you enjoy reading it.
Are you just using the existing kernel RBD mapping for Ganeti running
KVM, or did you implement the pieces for Ganeti to use the QEMU
userspace RBD driver?

I've got both Ceph & Ganeti clusters already, but am reluctant to marry
the two sets of functionality because the kernel RBD driver still seemed
to perform so much worse than the Qemu userspace RBD driver, and Ganeti
still hasn't implemented the userspace mapping pieces :-(
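
(For readers comparing the two paths: the kernel route maps an image to a block
device with "rbd map", while the userspace route goes through librbd, which is
what QEMU's rbd driver links against. Below is a minimal python-rbd sketch of
the userspace path, for illustration only; the config path, pool and image
names are made up, and the python-rados/python-rbd bindings are assumed to be
installed.)

import rados
import rbd

# Userspace path: talk to the cluster via librados/librbd directly; no
# kernel block device is involved. Pool and image names are made up.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        rbd.RBD().create(ioctx, 'test-image', 4 * 1024 ** 2)  # 4 MiB image
        image = rbd.Image(ioctx, 'test-image')
        try:
            image.write(b'hello from librbd', 0)   # write at offset 0
            print(image.read(0, 17))               # read the 17 bytes back
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()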

-- 
Robin Hugh Johnson
Gentoo Linux: Developer, Infrastructure Lead
E-Mail     : robbat2@gentoo.org
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85


* Re: [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login:
  2014-06-02 18:32 Experiences with Ceph at the June'14 issue of USENIX ; login: Filippos Giannakos
  2014-06-02 18:51 ` [ceph-users] " Patrick McGarry
  2014-06-02 21:40 ` Experiences with Ceph at the June'14 issue of USENIX ;login: Robin H. Johnson
@ 2014-06-02 22:37 ` Ian Colle
       [not found]   ` <1235448490.9762058.1401748668812.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  2 siblings, 1 reply; 7+ messages in thread
From: Ian Colle @ 2014-06-02 22:37 UTC (permalink / raw)
  To: Filippos Giannakos; +Cc: ceph-users, ceph-devel

Thanks, Filippos! Very interesting reading.

Are you comfortable enough yet to remove the RAID-1 from your architecture and get all that space back?

Ian R. Colle
Global Director
of Software Engineering
Red Hat (Inktank is now part of Red Hat!)
http://www.linkedin.com/in/ircolle
http://www.twitter.com/ircolle
Cell: +1.303.601.7713
Email: icolle@redhat.com

----- Original Message -----
From: "Filippos Giannakos" <philipgian@grnet.gr>
To: ceph-users@lists.ceph.com, ceph-devel@vger.kernel.org
Sent: Monday, June 2, 2014 11:32:19 AM
Subject: [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login:

Hello all,

As you may already know, we have been using Ceph for quite some time now to back
the ~okeanos [1] public cloud service, which is powered by Synnefo [2].

A few months ago we were kindly invited to write an article about our
experiences with Ceph for the USENIX ;login: magazine. The article is out in
this month's (June '14) issue and we are really happy to share it with you all:

https://www.usenix.org/publications/login/june14/giannakos

In the article we describe our storage needs, how we use Ceph and how it has
worked so far. I hope you enjoy reading it.

Kind Regards,
Filippos

[1] http://okeanos.grnet.gr
[2] http://www.synnefo.org

-- 
Filippos
<philipgian@grnet.gr>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


* Re: Experiences with Ceph at the June'14 issue of USENIX ;login:
  2014-06-02 21:40 ` Experiences with Ceph at the June'14 issue of USENIX ;login: Robin H. Johnson
@ 2014-06-03  9:12   ` Constantinos Venetsanopoulos
  0 siblings, 0 replies; 7+ messages in thread
From: Constantinos Venetsanopoulos @ 2014-06-03  9:12 UTC (permalink / raw)
  To: Robin H. Johnson, ceph-devel

Hello Robin,

On 6/3/14, 12:40 AM, Robin H. Johnson wrote:
> On Mon, Jun 02, 2014 at 09:32:19PM +0300,  Filippos Giannakos wrote:
>> As you may already know, we have been using Ceph for quite some time now to back
>> the ~okeanos [1] public cloud service, which is powered by Synnefo [2].
> (Background info for other readers: Synnefo is a cloud layer on top of
> Ganeti).
>
>> In the article we describe our storage needs, how we use Ceph and how it has
>> worked so far. I hope you enjoy reading it.
> Are you just using the existing kernel RBD mapping for Ganeti running
> KVM, or did you implement the pieces for Ganeti to use the QEMU
> userspace RBD driver?

None of the above. From the Ceph project we use only RADOS, which we access
via an Archipelago [1] backend driver that uses librados from userspace.

We integrate Archipelago with Ganeti through the Archipelago ExtStorage
provider.
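
For context, the kind of userspace access such a backend driver performs can
be sketched with the python-rados bindings. This is only an illustration, not
Archipelago's actual code; the config path, pool and object names are made up.

import rados

# Plain RADOS object I/O from userspace via librados; no RBD images and no
# kernel mapping involved. Pool/object names and the config path are made up.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('archip-data')   # hypothetical pool
    try:
        ioctx.write_full('example-object', b'hello from librados')
        print(ioctx.read('example-object'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()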

> I've got both Ceph & Ganeti clusters already, but am reluctant to marry
> the two sets of functionality because the kernel RBD driver still seemed
> to perform so much worse than the Qemu userspace RBD driver, and Ganeti
> still hasn't implemented the userspace mapping pieces :-(
>

Ganeti has supported accessing RADOS from userspace (via the QEMU rbd driver)
since version 2.10. The current stable is 2.11. Not only that, but starting
with v2.13 (not released yet), you will be able to configure the access method
per disk, e.g. making the first disk of an instance kernel-backed and the
second userspace-backed. So, I'd suggest you give it a try and see how it
goes :)

Thanks,
Constantinos


[1] https://www.synnefo.org/docs/archipelago/latest/


* Re: Experiences with Ceph at the June'14 issue of USENIX ; login:
       [not found]   ` <1235448490.9762058.1401748668812.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2014-06-04 14:22     ` Filippos Giannakos
  2014-06-05  6:59       ` Christian Balzer
  0 siblings, 1 reply; 7+ messages in thread
From: Filippos Giannakos @ 2014-06-04 14:22 UTC (permalink / raw)
  To: Ian Colle
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA

Hello Ian,

Thanks for your interest.

On Mon, Jun 02, 2014 at 06:37:48PM -0400, Ian Colle wrote:
> Thanks, Filippos! Very interesting reading.
> 
> Are you comfortable enough yet to remove the RAID-1 from your architecture and
> get all that space back?

Actually, we are not ready to do that yet. There are three major things to
consider.

First, to be able to get rid of the RAID-1 setup, we need to increase the
replication level to at least 3x. So the space gain is not that great to begin
with.
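
As a back-of-the-envelope illustration (this assumes the pools currently run
with 2x replication on top of RAID-1, which the above implies but does not
state, and uses a made-up usable capacity):

# Rough raw-capacity comparison; all numbers are illustrative only.
usable_tb = 100            # hypothetical usable capacity

raid1_factor = 2           # each OSD disk is mirrored
repl_with_raid = 2         # assumed Ceph replication on top of RAID-1
repl_without_raid = 3      # replication needed once RAID-1 is gone

raw_now = usable_tb * raid1_factor * repl_with_raid    # 4x -> 400 TB raw
raw_later = usable_tb * repl_without_raid              # 3x -> 300 TB raw

print("raw needed with RAID-1 + 2x replication: %d TB" % raw_now)
print("raw needed with 3x replication, no RAID: %d TB" % raw_later)
print("space reclaimed: %d TB (about %.0f%% of current raw)"
      % (raw_now - raw_later, 100.0 * (raw_now - raw_later) / raw_now))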

Second, this operation could take about a month at our scale, according to our
calculations and previous experience. During this period of increased I/O we
might see spikes of performance degradation. Plus, we currently do not have the
necessary hardware available to increase the replication level before getting
rid of the RAID setup.

Third, we see a few disk failures per month. The RAID-1 setup has allowed us to
replace them seamlessly, without any hiccup or even a clue to the end user that
something went wrong. We could certainly rely on RADOS to avoid data loss, but
relying on RADOS for recovery might cause some (minor) performance degradation,
especially for the VM I/O traffic.

Kind Regards,
-- 
Filippos
<philipgian-Sqt7GMbKoOQ@public.gmane.org>


* Re: Experiences with Ceph at the June'14 issue of USENIX ; login:
  2014-06-04 14:22     ` Filippos Giannakos
@ 2014-06-05  6:59       ` Christian Balzer
  0 siblings, 0 replies; 7+ messages in thread
From: Christian Balzer @ 2014-06-05  6:59 UTC (permalink / raw)
  To: Filippos Giannakos
  Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw, ceph-devel-u79uwXL29TY76Z2rM5mHXA


Hello Filippos,

On Wed, 4 Jun 2014 17:22:35 +0300 Filippos Giannakos wrote:

> Hello Ian,
> 
> Thanks for your interest.
> 
> On Mon, Jun 02, 2014 at 06:37:48PM -0400, Ian Colle wrote:
> > Thanks, Filippos! Very interesting reading.
> > 
> > Are you comfortable enough yet to remove the RAID-1 from your
> > architecture and get all that space back?
> 
> Actually, we are not ready to do that yet. There are three major things
> to consider.
> 
> First, to be able to get rid of the RAID-1 setup, we need to increase the
> replication level to at least 3x. So the space gain is not that great to
> begin with.
> 
> Second, this operation could take about a month at our scale, according to
> our calculations and previous experience. During this period of
> increased I/O we might see spikes of performance degradation. Plus, we
> currently do not have the necessary hardware available to increase the
> replication level before getting rid of the RAID setup.
> 
> Third, we see a few disk failures per month. The RAID-1 setup has
> allowed us to replace them seamlessly, without any hiccup or even a clue
> to the end user that something went wrong. We could certainly rely on
> RADOS to avoid data loss, but relying on RADOS for recovery might cause
> some (minor) performance degradation, especially for the VM I/O traffic.
> 
That.
And in addition, you probably never had to do all that song and dance of
removing a failed OSD and bringing up a replacement. ^o^
This is one of the reasons I choose RAIDs as OSDs, especially since the Ceph
cluster in question is not local.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi-FW+hd8ioUD0@public.gmane.org   	Global OnLine Japan/Fusion Communications
http://www.gol.com/

