* 0.61 Cuttlefish released
From: Sage Weil @ 2013-05-07 2:51 UTC (permalink / raw)
To: ceph-devel, ceph-users
Spring has arrived (at least for some of us), and a new stable release of
Ceph is ready! Thank you to everyone who has contributed to this release!
Bigger ticket items since v0.56.x "Bobtail":
* ceph-deploy: our new deployment tool to replace 'mkcephfs'
* robust RHEL/CentOS support
* ceph-disk: many improvements to support hot-plugging devices via chef
and ceph-deploy
* ceph-disk: dm-crypt support for OSD disks
* ceph-disk: 'list' command to see available (and used) disks
* rbd: incremental backups
* rbd-fuse: access RBD images via fuse
* librbd: autodetection of VM flush support to allow safe enablement of
the writeback cache
* osd: improved small write, snap trimming, and overall performance
* osd: PG splitting
* osd: per-pool quotas (object and byte)
* osd: tool for importing, exporting, removing PGs from OSD data store
* osd: improved clean-shutdown behavior
* osd: noscrub, nodeepscrub options
* osd: more robust scrubbing, repair, ENOSPC handling
* osd: improved memory usage, log trimming
* osd: improved journal corruption detection
* ceph: new 'df' command
* mon: new storage backend (leveldb)
* mon: config-keys service
* mon, crush: new commands to manage CRUSH entirely via CLI
* mon: avoid marking entire subtrees (e.g., racks) out automatically
* rgw: CORS support
* rgw: misc API fixes
* rgw: ability to listen to fastcgi on a port
* sysvinit, upstart: improved support for standardized data locations
* mds: backpointers on all data and metadata objects
* mds: faster fail-over
* mds: many many bug fixes
* ceph-fuse: many stability improvements
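For readers who want to poke at a few of the items above, here is a hedged sketch of the corresponding CLI invocations (pool, image, and snapshot names are made up for illustration; run against a test cluster and check each command's -h output):

```shell
# New cluster-wide usage summary
ceph df

# Incremental RBD backup: export only the changes between two snapshots
# ('mypool/myimage', 'snap1', 'snap2' are hypothetical names)
rbd export-diff --from-snap snap1 mypool/myimage@snap2 /tmp/myimage.diff

# Per-pool quotas (objects and bytes)
ceph osd pool set-quota mypool max_objects 100000

# Temporarily disable scrubbing, then re-enable it
ceph osd set noscrub
ceph osd unset noscrub
```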
Notable changes since v0.60:
* rbd: incremental backups
* rbd: only set STRIPINGV2 feature if striping parameters are
incompatible with old versions
* rbd: require allow-shrink for resizing images down
* librbd: many bug fixes
* rgw: fix object corruption on COPY to self
* rgw: new sysvinit script for rpm-based systems
* rgw: allow buckets with _
* rgw: CORS support
* mon: many fixes
* mon: improved trimming behavior
* mon: fix data conversion/upgrade problem (from bobtail)
* mon: ability to tune leveldb
* mon: config-keys service to store arbitrary data on monitor
* mon: osd crush add|link|unlink|add-bucket ... commands
* mon: trigger leveldb compaction on trim
* osd: per-rados pool quotas (objects, bytes)
* osd: tool to export, import, and delete PGs from an individual OSD data
store
* osd: notify mon on clean shutdown to avoid IO stall
* osd: improved detection of corrupted journals
* osd: ability to tune leveldb
* osd: improve client request throttling
* osd, librados: fixes to the LIST_SNAPS operation
* osd: improvements to scrub error repair
* osd: better prevention of wedging OSDs with ENOSPC
* osd: many small fixes
* mds: fix xattr handling on root inode
* mds: fixed bugs in journal replay
* mds: many fixes
* librados: clean up snapshot constant definitions
* libcephfs: calls to query CRUSH topology (used by Hadoop)
* ceph-fuse, libcephfs: misc fixes to mds session management
* ceph-fuse: disabled cache invalidation (again) due to potential
deadlock with kernel
* sysvinit: try to start all daemons despite early failures
* ceph-disk: new list command
* ceph-disk: hotplug fixes for RHEL/CentOS
* ceph-disk: fix creation of OSD data partitions on >2TB disks
* osd: fix udev rules for RHEL/CentOS systems
* fix daemon logging during initial startup
There are a few things to keep in mind when upgrading from Bobtail,
specifically with the monitor daemons. Please see the upgrade guide
and/or the complete release notes. In short: upgrade all of your monitors
(more or less) at once.
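As a rough illustration of the "all monitors at once" advice (Debian/Ubuntu package names and sysvinit syntax assumed here; adapt to your distro and see the upgrade guide for the authoritative procedure):

```shell
# On every monitor host, upgrade the packages first...
sudo apt-get update && sudo apt-get install ceph

# ...then restart the monitor daemons on all hosts in quick succession,
# so the whole quorum moves to the new on-disk (leveldb) format together.
sudo service ceph restart mon
```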
Cuttlefish is the first Ceph release on our new three-month stable release
cycle. We are very pleased to have pulled everything together on schedule
(well, only a week later than planned). The next stable release, which
will be code-named Dumpling, is slated for three months from now
(beginning of August).
You can download v0.61 Cuttlefish from the usual locations:
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.com/download/ceph-0.61.tar.gz
* For Debian/Ubuntu packages, see http://ceph.com/docs/master/install/debian
* For RPMs, see http://ceph.com/docs/master/install/rpm
* Re: 0.61 Cuttlefish released
From: Igor Laskovy @ 2013-05-07 10:25 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ
Hi,
where can I read more about ceph-disk?
On Tue, May 7, 2013 at 5:51 AM, Sage Weil <sage-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org> wrote:
> Spring has arrived (at least for some of us), and a new stable release of
> Ceph is ready! Thank you to everyone who has contributed to this release!
> (...)
--
Igor Laskovy
facebook.com/igor.laskovy
studiogrizzly.com
_______________________________________________
ceph-users mailing list
ceph-users-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
* Re: [ceph-users] 0.61 Cuttlefish released
From: Sage Weil @ 2013-05-07 14:25 UTC (permalink / raw)
To: Igor Laskovy; +Cc: ceph-devel
On Tue, 7 May 2013, Igor Laskovy wrote:
> Hi,
> where can I read more about ceph-disk?
Hmm... you can 'less /usr/bin/ceph-disk', or you can read the tracker
ticket for writing a man page at http://tracker.ceph.com/issues/2774 :)
In all seriousness, 'ceph-disk -h' (or 'ceph-disk <command> -h') and looking
at how ceph-deploy uses it are the best references.
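For the curious, a minimal session might look like the following (the device names are hypothetical; consult 'ceph-disk -h' for the authoritative option list):

```shell
# Show available and in-use disks on this host
ceph-disk list

# Prepare a blank disk as an OSD data device, then activate the
# resulting data partition
ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1
```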
sage
>
>
> On Tue, May 7, 2013 at 5:51 AM, Sage Weil <sage@inktank.com> wrote:
> (...)
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
* Re: 0.61 Cuttlefish released
From: John Wilkins @ 2013-05-07 16:42 UTC (permalink / raw)
To: Igor Laskovy; +Cc: ceph-devel, ceph-users
Igor,
I haven't closed out 3674 because I haven't covered that part yet. Chef
docs are now in the wiki, but I'll be adding ceph-disk docs shortly.
On Tue, May 7, 2013 at 3:25 AM, Igor Laskovy <igor.laskovy-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> Hi,
>
> where can I read more about ceph-disk?
>
>
> On Tue, May 7, 2013 at 5:51 AM, Sage Weil <sage-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org> wrote:
>
>> (...)
>
> --
> Igor Laskovy
> facebook.com/igor.laskovy
> studiogrizzly.com
>
--
John Wilkins
Senior Technical Writer
Inktank
john.wilkins-4GqslpFJ+cxBDgjK7y7TUQ@public.gmane.org
(415) 425-9599
http://inktank.com
* Re: 0.61 Cuttlefish / ceph-deploy missing
From: Kasper Dieter @ 2013-05-15 11:08 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, ceph-users
Hi Sage,
I'm a little bit confused about 'ceph-deploy' in 0.61:
. the 0.61 release note says: "ceph-deploy: our new deployment tool to replace 'mkcephfs'"
. http://ceph.com/docs/master/rados/deployment/mkcephfs/
says "To deploy a test or development cluster, you can use the mkcephfs tool.
We do not recommend using this tool for production environments."
. http://ceph.com/docs/master/rados/deployment/
says "MKCEPHFS (DEPRECATED)
(...) As of Ceph v0.60, it is deprecated in favor of ceph-deploy."
But /usr/sbin/mkcephfs is provided by the ceph*0.61.2*.rpm package(s),
whereas ceph-deploy is missing.
The only way to get ceph-deploy seems to be using git:
git clone https://github.com/ceph/ceph-deploy.git
Is this the way it should be?
When will ceph-deploy be included in the Cuttlefish rpms?
Best Regards,
-Dieter
On Tue, May 07, 2013 at 04:51:25AM +0200, Sage Weil wrote:
> Spring has arrived (at least for some of us), and a new stable release of
> Ceph is ready! Thank you to everyone who has contributed to this release!
>
> Bigger ticket items since v0.56.x "Bobtail":
>
> * ceph-deploy: our new deployment tool to replace 'mkcephfs'
(...)
* Re: 0.61 Cuttlefish / ceph-deploy missing
From: Sage Weil @ 2013-05-15 15:48 UTC (permalink / raw)
To: Kasper Dieter; +Cc: ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-users-Qp0mS5GaXlQ
On Wed, 15 May 2013, Kasper Dieter wrote:
> Hi Sage,
> I'm a little bit confused about 'ceph-deploy' in 0.61:
>
> . the 0.61 release note says: "ceph-deploy: our new deployment tool to replace 'mkcephfs'"
> . http://ceph.com/docs/master/rados/deployment/mkcephfs/
> says "To deploy a test or development cluster, you can use the mkcephfs tool.
> We do not recommend using this tool for production environments."
Adjusted this in the docs, thanks!
> . http://ceph.com/docs/master/rados/deployment/
> says "MKCEPHFS (DEPRECATED)
> (...) As of Ceph v0.60, it is deprecated in favor of ceph-deploy."
>
> But, /usr/sbin/mkcephfs is provided by the ceph*0.61.2*rpm(s)
> whereas ceph-deploy is missing.
ceph-deploy is packaged separately as it is designed to deploy any version
of Ceph (>= bobtail). The ceph-deploy package should be in the same repos
as the ceph packages; if they are missing, let us know. We are still
streamlining the build process for that package so that updates and fixes
(for the deployment tool) get published more quickly.
> The only way to get ceph-deploy seems to be using git:
> git clone https://github.com/ceph/ceph-deploy.git
This also works.
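A sketch of the from-git route (the 'bootstrap' helper script name is taken from the ceph-deploy repository of this era; treat it as an assumption and check the repo's README):

```shell
# Fetch the tool and set it up in-place
git clone https://github.com/ceph/ceph-deploy.git
cd ceph-deploy
./bootstrap              # assumed helper: prepares a local virtualenv

# Run ceph-deploy directly from the checkout
./ceph-deploy --help
```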
Thanks!
sage
>
>
> Is this the way it should be ?
> When will ceph-deploy be included in the Cuttlefish rpms ?
>
>
> Best Regards,
> -Dieter
>
>
> On Tue, May 07, 2013 at 04:51:25AM +0200, Sage Weil wrote:
> > Spring has arrived (at least for some of us), and a new stable release of
> > Ceph is ready! Thank you to everyone who has contributed to this release!
> >
> > Bigger ticket items since v0.56.x "Bobtail":
> >
> > * ceph-deploy: our new deployment tool to replace 'mkcephfs'
> (...)
* Re: 0.61 Cuttlefish / ceph-deploy missing in repos
From: Kasper Dieter @ 2013-05-15 17:30 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel, Kasper Dieter, ceph-users
On Wed, May 15, 2013 at 05:48:22PM +0200, Sage Weil wrote:
> On Wed, 15 May 2013, Kasper Dieter wrote:
> > Hi Sage,
> > I'm a little bit confused about 'ceph-deploy' in 0.61:
> >
> > . the 0.61 release note says: "ceph-deploy: our new deployment tool to replace 'mkcephfs'"
> > . http://ceph.com/docs/master/rados/deployment/mkcephfs/
> > says "To deploy a test or development cluster, you can use the mkcephfs tool.
> > We do not recommend using this tool for production environments."
>
> Adjusted this in the docs, thanks!
>
> > . http://ceph.com/docs/master/rados/deployment/
> > says "MKCEPHFS (DEPRECATED)
> > (...) As of Ceph v0.60, it is deprecated in favor of ceph-deploy."
> >
> > But, /usr/sbin/mkcephfs is provided by the ceph*0.61.2*rpm(s)
> > whereas ceph-deploy is missing.
>
> ceph-deploy is packaged separately as it is designed to deploy any version
> of Ceph (>= bobtail). The ceph-deploy package should be in the same repos
> as the ceph packages; if they are missing, let us know.
yes, they are missing
e.g.
http://ceph.com/rpm-cuttlefish/fc17/x86_64/
ceph-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 9.9M
ceph-debuginfo-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 278M
ceph-devel-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 36K
ceph-fuse-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 1.0M
ceph-radosgw-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 1.3M
ceph-release-1-0.fc17.noarch.rpm 13-May-2013 17:15 3.5K
ceph-test-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 10M
cephfs-java-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 22K
libcephfs1-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 1.3M
libcephfs_jni1-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 33K
librados2-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 1.1M
librbd1-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 224K
python-ceph-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 32K
rbd-fuse-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 12K
rest-bench-0.61.2-0.fc17.x86_64.rpm 13-May-2013 17:12 226K
same in fc18, el6, sles11, ...
Best Regards,
-Dieter
> We are still
> streamlining the build process for that package so that updates and fixes
> (for the deployment tool) get published more quickly.
>
> > The only way to get ceph-deploy seems to be using git:
> > git clone https://github.com/ceph/ceph-deploy.git
>
> This also works.
>
> Thanks!
> sage
>
>
> >
> >
> > Is this the way it should be ?
> > When will ceph-deploy be included in the Cuttlefish rpms ?
> >
> >
> > Best Regards,
> > -Dieter
> >
> >
> > On Tue, May 07, 2013 at 04:51:25AM +0200, Sage Weil wrote:
> > > Spring has arrived (at least for some of us), and a new stable release of
> > > Ceph is ready! Thank you to everyone who has contributed to this release!
> > >
> > > Bigger ticket items since v0.56.x "Bobtail":
> > >
> > > * ceph-deploy: our new deployment tool to replace 'mkcephfs'
> > (...)
* Re: 0.61 Cuttlefish / ceph-deploy missing in repos
From: Gary Lowell @ 2013-05-15 18:14 UTC (permalink / raw)
To: Kasper Dieter; +Cc: Sage Weil, ceph-devel, ceph-users
Hi Kasper -
ceph-deploy has a couple of open bugs on Fedora that we need to fix before it works there. Currently the only supported rpm platform is CentOS/RHEL 6; those packages can be found at http://ceph.com/rpm-cuttlefish/el6/noarch/ . The modules install into the Python 2.6.6 site library, so they may not be useful for Fedora.
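On CentOS/RHEL 6 the install might look like the following (the ceph-release rpm filename is inferred from the fc17 directory listing earlier in the thread, so verify it against the actual repo index before running):

```shell
# Add the cuttlefish repo definition, then install ceph-deploy from it
sudo rpm -Uvh http://ceph.com/rpm-cuttlefish/el6/noarch/ceph-release-1-0.el6.noarch.rpm
sudo yum install ceph-deploy
```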
Fedora rpms will be available as soon as we sort out the open bugs.
Cheers,
Gary
On May 15, 2013, at 10:30 AM, Kasper Dieter wrote:
> On Wed, May 15, 2013 at 05:48:22PM +0200, Sage Weil wrote:
>> On Wed, 15 May 2013, Kasper Dieter wrote:
>>> Hi Sage,
>>> I'm a little bit confused about 'ceph-deploy' in 0.61:
>>>
>>> . the 0.61 release note says: "ceph-deploy: our new deployment tool to replace 'mkcephfs'"
>>> . http://ceph.com/docs/master/rados/deployment/mkcephfs/
>>> says "To deploy a test or development cluster, you can use the mkcephfs tool.
>>> We do not recommend using this tool for production environments."
>>
>> Adjusted this in the docs, thanks!
>>
>>> . http://ceph.com/docs/master/rados/deployment/
>>> says "MKCEPHFS (DEPRECATED)
>>> (...) As of Ceph v0.60, it is deprecated in favor of ceph-deploy."
>>>
>>> But, /usr/sbin/mkcephfs is provided by the ceph*0.61.2*rpm(s)
>>> whereas ceph-deploy is missing.
>>
>> ceph-deploy is packaged separately as it is designed to deploy any version
>> of Ceph (>= bobtail). The ceph-deploy package should be in the same repos
>> as the ceph packages; if they are missing, let us know.
>
> yes, they are missing
>
> e.g.
> http://ceph.com/rpm-cuttlefish/fc17/x86_64/
> (...)
>
> same in fc18, el6, sles11, ...
>
> Best Regards,
> -Dieter