* v0.47 released
@ 2012-05-21 4:38 Sage Weil
From: Sage Weil @ 2012-05-21 4:38 UTC (permalink / raw)
To: ceph-devel
It's been another three weeks and v0.47 is ready. The highlights include:
 * mon: admin tools to control unwieldy clusters (temporarily block osd
   boots, failures, etc.)
 * osd: reduced memory footprint for peering/thrashing
 * librbd: write-thru cache mode
 * librbd: improved error handling
 * osd: removal of ill-conceived 'localized pg' feature (those annoying
   PGs with 'p' in them)
 * rados-bench: simple tool to benchmark radosgw (or S3) (based on 'rados
   bench' command)
In truth it wasn't the most productive sprint because of the work that
went into the launch of the web sites, the launch party, and the
subsequent inebriation. However, the new RBD caching feature is looking
very good at this point, and patches are working their way upstream in
Qemu/KVM to enable it with the generic 'cache=writethrough' or
'cache=writeback' settings.
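As a sketch of what that looks like from the command line (the pool and
image names here are placeholders, and the exact qemu flags depend on
your qemu version and rbd support being compiled in):

```shell
# Attach an RBD image to a guest, letting qemu's generic cache flag
# drive the librbd cache mode. "rbd/myimage" is a hypothetical
# pool/image pair -- substitute your own.
qemu-system-x86_64 \
  -m 1024 \
  -drive format=rbd,file=rbd:rbd/myimage,cache=writeback
```

Use cache=writethrough instead to keep writes synchronous while still
caching reads.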
One other noteworthy item is that I generated a new PGP key to sign
releases with. The key is now in ceph.git, and has been signed by my
personal key. If you are installing debs from our repositories, you'll
want to add the new key to your APT keyring to avoid annoying security
warnings.
For v0.48, we are working on a ceph-osd refactor to improve threading and
performance, multi-monitor and OSD hotplugging support for upstart and
Chef, improvements to the OSD and monitor bootstrapping to make that
possible, and RBD groundwork for the much-anticipated layering feature.
You can get v0.47 from the usual places:
* Git at git://github.com/ceph/ceph.git
* Tarball at http://ceph.newdream.net/download/ceph-0.47.tar.gz
* For Debian/Ubuntu packages, see http://ceph.newdream.net/docs/master/install/debian
* Re: v0.47 released
From: Sage Weil @ 2012-05-21 4:43 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
On Sun, 20 May 2012, Sage Weil wrote:
> * rados-bench: simple tool to benchmark radosgw (or S3) (based on 'rados
> bench' command)
Whoops, that's "rest-bench".
sage
* Re: v0.47 released
From: Stefan Priebe - Profihost AG @ 2012-05-21 6:46 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
Hi,
the Debian archive still shows 0.46:
http://ceph.com/debian/dists/squeeze/main/binary-amd64/Packages
Greets
Stefan
On 21.05.2012 06:38, Sage Weil wrote:
> You can get v0.47 from the usual places:
>
> * Git at git://github.com/ceph/ceph.git
> * Tarball at http://ceph.newdream.net/download/ceph-0.47.tar.gz
> * For Debian/Ubuntu packages, see http://ceph.newdream.net/docs/master/install/debian
* Re: v0.47 released
From: Josh Durgin @ 2012-05-21 7:37 UTC (permalink / raw)
To: Sage Weil; +Cc: ceph-devel
On 05/20/2012 09:38 PM, Sage Weil wrote:
> It's been another three weeks and v0.47 is ready. The highlights include:
>
> * librbd: write-thru cache mode
Some more detail on rbd caching:
By default librbd does no caching - writes and reads go directly to the
storage cluster, and writes return only when the data is on disk on all
replicas.
With caching enabled, writes return immediately, unless there are more
than rbd_cache_max_dirty unflushed bytes. In this case, the write
triggers writeback and blocks until enough bytes are flushed.
To enable writethrough mode, set rbd_cache_max_dirty to 0. This means
writes return only when the data is on disk on all replicas, but reads
may come from the cache.
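Putting that together, a minimal ceph.conf sketch for writethrough mode
might look like this (the [client] section is the usual place for
librbd options; adjust to taste):

```
[client]
    ; enable the librbd cache, but never hold dirty data:
    ; writes complete only once on disk on all replicas
    rbd cache = true
    rbd cache max dirty = 0
```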
The cache is in memory on the client, and each rbd image has its own.
Since it's local to the client, and there's no coherency if there are
others accessing the image, running something like GFS or OCFS on top
of rbd would not work with caching enabled.
The options for controlling the cache are:
option                 | type      | default | description
-----------------------+-----------+---------+------------
rbd_cache              | bool      | false   | whether caching is enabled
rbd_cache_size         | long long | 32 MiB  | total cache size in bytes
rbd_cache_max_dirty    | long long | 24 MiB  | maximum number of dirty bytes
                       |           |         | before triggering writeback
rbd_cache_target_dirty | long long | 16 MiB  | writeback starts at this
                       |           |         | threshold, but does not
                       |           |         | block the write
rbd_cache_max_age      | float     | 1.0     | seconds in cache before
                       |           |         | writeback starts
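The interaction between the two dirty thresholds can be sketched as a
toy model (this is an illustration of the described behavior, not the
actual librbd implementation; names mirror the rbd_cache_* options):

```python
class WritebackCache:
    """Toy model of the dirty-byte thresholds described above."""

    def __init__(self, max_dirty=24 << 20, target_dirty=16 << 20):
        self.max_dirty = max_dirty        # writes block above this
        self.target_dirty = target_dirty  # background writeback aims for this
        self.dirty = 0                    # unflushed bytes held in the cache
        self.flushed = 0                  # bytes written out to the cluster

    def write(self, nbytes):
        """Buffer a write; flush (i.e. block) only if max_dirty would be exceeded."""
        if self.dirty + nbytes > self.max_dirty:
            # Simulate blocking writeback until back under target_dirty.
            self.flush(self.dirty - self.target_dirty)
        self.dirty += nbytes

    def flush(self, nbytes):
        nbytes = min(nbytes, self.dirty)
        self.dirty -= nbytes
        self.flushed += nbytes


cache = WritebackCache(max_dirty=100, target_dirty=50)
cache.write(80)             # under the limit: returns immediately
assert cache.dirty == 80
cache.write(40)             # would exceed 100: flushes down to 50 first
assert cache.dirty == 90    # 50 remaining + 40 newly buffered
assert cache.flushed == 30
```

Setting max_dirty to 0, as in the writethrough case above, makes every
write flush immediately in this model.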
The cache code was written for ceph-fuse a few years ago, so it's been
in use for a while now. It was just tweaked a bit to allow librbd
to use it. The rbd_cache_* options have the same meanings as the
client_oc_* options for ceph-fuse.
> * librbd: improved error handling
To clarify, these were fixes for error handling in the caching module
used by ceph-fuse and librbd, and they do not matter if you aren't using
ceph-fuse or rbd caching. They made write errors be returned to the
caller when the cache is flushed, and exposed read errors to the client
as well.
0.47 also includes a fix for a deadlock that was more likely to be
triggered with rbd caching enabled. I don't know of any outstanding
issues with rbd caching since that was fixed.
Josh