* Re: [ceph-users] Status of snapshots in CephFS
From: Sage Weil @ 2014-09-19 15:25 UTC (permalink / raw)
  To: Florian Haas; +Cc: ceph-users, ceph-devel

On Fri, 19 Sep 2014, Florian Haas wrote:
> Hello everyone,
> 
> Just thought I'd circle back on some discussions I've had with people
> earlier in the year:
> 
> Shortly before firefly, snapshot support for CephFS clients was
> effectively disabled by default at the MDS level, and can only be
> enabled after accepting a scary warning that your filesystem is highly
> likely to break if snapshot support is enabled. Has any progress been
> made on this in the interim?
> 
> With libcephfs support slowly maturing in Ganesha, the option of
> deploying a Ceph-backed userspace NFS server is becoming more
> attractive -- and it's probably a better use of resources than mapping
> a boatload of RBDs on an NFS head node and then exporting all the data
> from there. Recent snapshot trimming issues notwithstanding, RBD
> snapshot support is reasonably stable, but even so, making snapshot
> data available via NFS that way is rather ugly. In addition, the
> libcephfs/Ganesha approach would obviously offer much better
> horizontal scalability.

We haven't done any work on snapshot stability.  It is probably moderately 
stable if snapshots are only done at the root or at a consistent point in 
the hierarchy (as opposed to random directories), but there are still some 
basic problems that need to be resolved.  I would not suggest deploying 
this in production!  But some stress testing would, as always, be very 
welcome.  :)
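
For concreteness: a CephFS snapshot is just a mkdir inside the hidden
.snap directory at the point in the hierarchy you want snapshotted.  A
minimal sketch in Python -- assuming a CephFS mount at /mnt/cephfs, and
treating the exact enable flag as the firefly-era one -- looks like:

    import os
    import subprocess

    # Snapshots must be explicitly enabled on the MDS first; this is
    # the firefly-era switch (an assumption for other versions):
    subprocess.check_call(['ceph', 'mds', 'set', 'allow_new_snaps',
                           'true', '--yes-i-really-mean-it'])

    # Taking a snapshot is a mkdir inside .snap -- here at the root,
    # per the advice above:
    os.mkdir('/mnt/cephfs/.snap/before-upgrade')

    # ...and removing it again is the matching rmdir:
    os.rmdir('/mnt/cephfs/.snap/before-upgrade')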

> In addition, https://github.com/nfs-ganesha/nfs-ganesha/wiki/ReleaseNotes_2.0#CEPH
> states:
> 
> "The current requirement to build and use the Ceph FSAL is a Ceph
> build environment which includes Ceph client enhancements staged on
> the libwipcephfs development branch. These changes are expected to be
> part of the Ceph Firefly release."
> 
> ... though it's not clear whether they ever did make it into firefly.
> Could someone in the know comment on that?

I think this is referring to the libcephfs API changes that the cohortfs 
folks did.  That all merged shortly before firefly.
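
(For anyone who wants to poke at that API from a script, the libcephfs
Python binding is the quickest way in.  A rough sketch -- the paths are
made up, and flag handling has varied across binding versions:)

    import os
    import cephfs

    # Talk to CephFS entirely in userspace, the same way Ganesha's
    # Ceph FSAL does via libcephfs.
    fs = cephfs.LibCephFS()
    fs.conf_read_file()    # defaults to /etc/ceph/ceph.conf
    fs.mount()             # attach at the filesystem root

    fs.mkdir('/exports', 0o755)
    fd = fs.open('/exports/hello.txt', os.O_CREAT | os.O_WRONLY, 0o644)
    fs.write(fd, b'written without a kernel mount\n', 0)
    fs.close(fd)
    fs.shutdown()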

By the way, we have some basic samba integration tests in our regular 
regression tests, but nothing based on ganesha.  If you really want this 
to work, the most valuable thing you could do would be to help 
get the tests written and integrated into ceph-qa-suite.git.  Probably the 
biggest piece of work there is creating a task/ganesha.py that installs 
and configures ganesha with the ceph FSAL.
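
Nothing like that exists yet, but modeled on the samba task a skeleton
might look roughly like the following -- every name and command in it
is illustrative, not actual ceph-qa-suite code:

    import contextlib
    import logging

    log = logging.getLogger(__name__)

    @contextlib.contextmanager
    def task(ctx, config):
        """
        Hypothetical ganesha task: install nfs-ganesha on the client
        remotes, drop in an export using the Ceph FSAL, start the
        daemon, and tear everything down when nested tasks finish.
        """
        remotes = ctx.cluster.only('client.0').remotes.keys()
        conf = ('EXPORT {\n'
                '  Export_Id = 1;\n'
                '  Path = "/";\n'
                '  FSAL { Name = CEPH; }\n'
                '}\n')
        for remote in remotes:
            # Install step; per-distro handling would go here.
            remote.run(args=['sudo', 'yum', 'install', '-y',
                             'nfs-ganesha'])
            remote.run(args=['sudo', 'tee', '/etc/ganesha/ganesha.conf'],
                       stdin=conf)
            remote.run(args=['sudo', 'service', 'nfs-ganesha', 'start'])
        try:
            yield
        finally:
            for remote in remotes:
                log.info('stopping ganesha on %s', remote)
                remote.run(args=['sudo', 'service', 'nfs-ganesha',
                                 'stop'])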

sage


* Re: [ceph-users] Status of snapshots in CephFS
From: Florian Haas @ 2014-09-24 18:27 UTC (permalink / raw)
  To: Sage Weil; +Cc: ceph-users, ceph-devel


On Fri, Sep 19, 2014 at 5:25 PM, Sage Weil <sweil@redhat.com> wrote:
> On Fri, 19 Sep 2014, Florian Haas wrote:
>> Hello everyone,
>>
>> Just thought I'd circle back on some discussions I've had with people
>> earlier in the year:
>>
>> Shortly before firefly, snapshot support for CephFS clients was
>> effectively disabled by default at the MDS level, and can only be
>> enabled after accepting a scary warning that your filesystem is highly
>> likely to break if snapshot support is enabled. Has any progress been
>> made on this in the interim?
>>
>> With libcephfs support slowly maturing in Ganesha, the option of
>> deploying a Ceph-backed userspace NFS server is becoming more
>> attractive -- and it's probably a better use of resources than mapping
>> a boatload of RBDs on an NFS head node and then exporting all the data
>> from there. Recent snapshot trimming issues notwithstanding, RBD
>> snapshot support is reasonably stable, but even so, making snapshot
>> data available via NFS that way is rather ugly. In addition, the
>> libcephfs/Ganesha approach would obviously offer much better
>> horizontal scalability.
>
> We haven't done any work on snapshot stability.  It is probably moderately
> stable if snapshots are only done at the root or at a consistent point in
> the hierarchy (as opposed to random directories), but there are still some
> basic problems that need to be resolved.  I would not suggest deploying
> this in production!  But some stress testing would, as always, be very
> welcome.  :)

OK, on a semi-related note: is there any reasonably current,
authoritative list of the features that are supported and unsupported
in either ceph-fuse or the kernel CephFS client, and if so, as of what
minimum version?

The most comprehensive overview that seems to be available is one from
Greg, which, however, is a year and a half old:

http://ceph.com/dev-notes/cephfs-mds-status-discussion/

>> In addition, https://github.com/nfs-ganesha/nfs-ganesha/wiki/ReleaseNotes_2.0#CEPH
>> states:
>>
>> "The current requirement to build and use the Ceph FSAL is a Ceph
>> build environment which includes Ceph client enhancements staged on
>> the libwipcephfs development branch. These changes are expected to be
>> part of the Ceph Firefly release."
>>
>> ... though it's not clear whether they ever did make it into firefly.
>> Could someone in the know comment on that?
>
> I think this is referring to the libcephfs API changes that the cohortfs
> folks did.  That all merged shortly before firefly.

Great, thanks for the clarification.

> By the way, we have some basic samba integration tests in our regular
> regression tests, but nothing based on ganesha.  If you really want this
> to work, the most valuable thing you could do would be to help
> get the tests written and integrated into ceph-qa-suite.git.  Probably the
> biggest piece of work there is creating a task/ganesha.py that installs
> and configures ganesha with the ceph FSAL.

Hmmm, given the excellent writeup that Niels de Vos of Gluster fame
wrote about this topic, I might actually be able to cargo-cult some of
what's in the Samba task and adapt it for ganesha.

Sorry if I'm being ignorant about Teuthology here: what platform does
it normally run on? I ask because I understand most of your testing is
done on Ubuntu, and Ubuntu currently doesn't ship a Ganesha package,
which would make the install task a bit more complex.
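
If it helps, I imagine the install step could branch on the distro and
fall back to a source build where no package exists.  A hypothetical
helper -- all of the specifics here are guesses on my part:

    def install_ganesha(remote):
        # Use the distro package where one exists; otherwise clone
        # and build from source on the remote.
        if remote.os.name in ('fedora', 'centos', 'rhel'):
            remote.run(args=['sudo', 'yum', 'install', '-y',
                             'nfs-ganesha'])
        else:
            # No Ubuntu package at the time of writing.
            remote.run(args=['git', 'clone', '--recursive',
                             'https://github.com/nfs-ganesha/nfs-ganesha.git'])
            remote.run(args=['sh', '-c', 'cd nfs-ganesha && cmake src '
                             '&& make && sudo make install'])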

Cheers,
Florian




