From: "Félix Ortega Hortigüela" <fortegah-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: Neil Levine <neil.levine-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org>
Cc: "ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org"
	<ceph-devel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	"ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org"
	<ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org>
Subject: Re: CephFS First product release discussion
Date: Thu, 7 Mar 2013 14:11:33 +0100
Message-ID: <CAONotKO50GVHFzvueWczvv3qK8c_fcxr_v88OWZYP9LKU+S+jQ@mail.gmail.com>
In-Reply-To: <CANygib-U_MQi1TMmQuT_Q9MVwPfT+PzJwN=+BMcBK69WuRfu3w-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>


I think a stable MDS daemon, plus an fsck or some other way to recover
at least part of the data after an MDS crash, is all we need.

We are using Ceph as one very big filesystem for nightly backups of our
3000+ servers. A few front-end servers rsync the data in over slow ADSL
lines and store everything on a single, very large CephFS mount. On top
of that we keep some versioning (with rsync --link-dest) and run custom
software that lets users copy their files back or schedule uploads of
their data.
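
For concreteness, each nightly job is essentially a sketch like the
following (the mount point, host name, and retention layout below are
invented for illustration; our production scripts differ):

    #!/bin/sh
    # Hypothetical per-host nightly run; paths and host are examples only.
    BACKUP_ROOT=/mnt/cephfs/backups        # the big CephFS mount
    HOST=server0001.example.com
    TODAY=$(date +%F)
    YESTERDAY=$(date -d yesterday +%F)

    # --link-dest hard-links files unchanged since yesterday's copy, so
    # each day looks like a full backup but only changed files consume
    # new space on CephFS.
    rsync -a --delete \
          --link-dest="$BACKUP_ROOT/$HOST/$YESTERDAY" \
          "$HOST:/data/" \
          "$BACKUP_ROOT/$HOST/$TODAY"

If yesterday's directory does not exist yet, rsync just warns and makes
a full copy, so the same command also works for a host's first backup.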

We need to scale the storage quickly and to be able to recover from a
single server or disk failure with minimal downtime. We don't need much
speed, since the data lines we are using are slow.

Ceph seems like the perfect choice, but a Plan B for recovering part of
our data if a catastrophic failure arises is probably the feature we
need most.

Regards.

--
Félix Ortega Hortigüela


On Wed, Mar 6, 2013 at 6:01 AM, Neil Levine <neil.levine-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org> wrote:

> As an extra request, it would be great if people explained a little
> about their use case for the filesystem, so we can better understand
> how the requested features map to the types of workloads people are
> trying to run.
>
> Thanks
>
> Neil
>
> On Tue, Mar 5, 2013 at 9:03 AM, Greg Farnum <greg-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org> wrote:
> > This is a companion discussion to the blog post at
> > http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
> >
> > The short and slightly alternate version: I spent most of about two
> > weeks working on bugs related to snapshots in the MDS, and we started
> > realizing that we could probably do our first supported release of CephFS
> > and the related infrastructure much sooner if we didn't need to support all
> > of the whizbang features. (This isn't to say that the base feature set is
> > stable now, but it's much closer than when you turn on some of the other
> > things.) I'd like to get feedback from you in the community on what minimum
> > supported feature set would prompt or allow you to start using CephFS in
> > real environments — not what you'd *like* to see, but what you *need* to
> > see. This will allow us at Inktank to prioritize more effectively and
> > hopefully get out a supported release much more quickly! :)
> >
> > The current proposed feature set is basically what's left over after
> > we've trimmed off everything we can think to split off, but if any of the
> > proposed included features are also particularly important or don't matter,
> > be sure to mention them (NFS export in particular — it works right now but
> > isn't in great shape due to NFS filehandle caching).
> >
> > Thanks,
> > -Greg
> >
> > Software Engineer #42 @ http://inktank.com | http://ceph.com
> >
> >

