From: Christian Balzer <chibi-FW+hd8ioUD0@public.gmane.org>
To: Filippos Giannakos <philipgian-Sqt7GMbKoOQ@public.gmane.org>
Cc: ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org,
	ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: Experiences with Ceph at the June'14 issue of USENIX ; login:
Date: Thu, 5 Jun 2014 15:59:08 +0900	[thread overview]
Message-ID: <20140605155908.4dfe125a@batzmaru.gol.ad.jp> (raw)
In-Reply-To: <20140604142235.GI17479@philipgian-mac>


Hello Filippos,

On Wed, 4 Jun 2014 17:22:35 +0300 Filippos Giannakos wrote:

> Hello Ian,
> 
> Thanks for your interest.
> 
> On Mon, Jun 02, 2014 at 06:37:48PM -0400, Ian Colle wrote:
> > Thanks, Filippos! Very interesting reading.
> > 
> > Are you comfortable enough yet to remove the RAID-1 from your
> > architecture and get all that space back?
> 
> Actually, we are not ready to do that yet. There are three major things
> to consider.
> 
> First, to be able to get rid of the RAID-1 setup, we need to increase the
> replication level to at least 3x. So the space gain is not that great to
> begin with.
> 
> Second, according to our calculations and previous experience, this
> operation would take about a month at our scale. During this period of
> increased I/O we might see peaks of performance degradation. Plus, we
> currently do not have the necessary hardware available to increase the
> replication level before we get rid of the RAID setup.
> 
> Third, we have a few disk failures per month. The RAID-1 setup has
> allowed us to seamlessly replace them without any hiccup or even a clue
> to the end user that something went wrong. Surely we can rely on RADOS
> to avoid any data loss, but if we relied on it for recovery there might
> be some (minor) performance degradation, especially for the VM I/O
> traffic.
> 
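As an aside, for your first two points: the replication bump itself is
just a pool setting, and the resulting data movement can be throttled so
it does not starve client I/O. Very roughly, with the pool name and the
values as placeholders you would tune for your own setup:

  # raise the replica count on a pool from 2x to 3x
  ceph osd pool set <poolname> size 3
  ceph osd pool set <poolname> min_size 2

  # slow backfill/recovery down so the VM traffic keeps breathing
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

None of which changes your math on space or on the weeks of shuffling, of
course.
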
That third point exactly.
And in addition, you probably never had to go through the whole song and
dance of removing a failed OSD and bringing up a replacement. ^o^
That is one of the reasons I chose RAID arrays as OSDs, especially since
the Ceph cluster in question is not local.
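
For anyone who has not had the pleasure, that song and dance goes roughly
like this (OSD id and device names made up, details vary a bit by release,
so take it as a sketch rather than a recipe):

  # retire the dead OSD from the cluster
  ceph osd out 12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12
  # then prepare/activate a fresh disk as the replacement OSD
  # (ceph-disk or ceph-deploy) and wait for backfill to settle

With RAID-1 backed OSDs the same disk failure is handled underneath Ceph,
along the lines of:

  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop the dead member
  mdadm /dev/md0 --add /dev/sdc1                       # md resyncs onto the new disk

and RADOS never notices a thing.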

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi-FW+hd8ioUD0@public.gmane.org   	Global OnLine Japan/Fusion Communications
http://www.gol.com/


Thread overview: 7+ messages
2014-06-02 18:32 Experiences with Ceph at the June'14 issue of USENIX ; login: Filippos Giannakos
2014-06-02 18:51 ` [ceph-users] " Patrick McGarry
2014-06-02 21:40 ` Experiences with Ceph at the June'14 issue of USENIX ;login: Robin H. Johnson
2014-06-03  9:12   ` Constantinos Venetsanopoulos
2014-06-02 22:37 ` [ceph-users] Experiences with Ceph at the June'14 issue of USENIX ; login: Ian Colle
     [not found]   ` <1235448490.9762058.1401748668812.JavaMail.zimbra-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2014-06-04 14:22     ` Filippos Giannakos
2014-06-05  6:59       ` Christian Balzer [this message]
