* v13.2.4 Mimic released
@ 2019-01-07 10:37 Abhishek Lekshmanan
From: Abhishek Lekshmanan @ 2019-01-07 10:37 UTC (permalink / raw)
To: ceph-announce@lists.ceph.com, ceph-users@lists.ceph.com,
ceph-maintainers@lists.ceph.com, ceph-devel@vger.kernel.org
This is the fourth bugfix release of the Mimic v13.2.x long-term stable
release series. It includes two security fixes on top of v13.2.3.
We recommend that all users upgrade to this version. If you've already
upgraded to v13.2.3, the same restrictions as for the v13.2.2 -> v13.2.3
upgrade apply here as well.
Notable Changes
---------------
* CVE-2018-16846: rgw: enforce bounds on max-keys/max-uploads/max-parts (`issue#35994 <http://tracker.ceph.com/issues/35994>`_)
* CVE-2018-14662: mon: limit caps allowed to access the config store
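The idea behind the listing-bounds fix can be illustrated with a small sketch. Note the default and hard-limit values below are illustrative assumptions, not the values rgw actually uses:

```python
def clamp_max_keys(requested, default=1000, hard_limit=10000):
    """Clamp a client-supplied max-keys value to sane bounds.

    The bound values here are hypothetical; rgw's real limits are
    configurable and may differ.
    """
    if requested is None:
        return default                  # no value supplied: use the default
    if requested < 0:
        raise ValueError("max-keys must be non-negative")
    return min(requested, hard_limit)   # never exceed the hard limit
```

Without such a bound, a request with an enormous max-keys value could force the gateway to allocate unbounded memory, which is the class of problem the CVE addresses.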
Notable Changes in v13.2.3
--------------------------
* The default memory utilization for the mons has been increased
somewhat. RocksDB now uses 512 MB of RAM by default, which should
be sufficient for small to medium-sized clusters; large clusters
should tune this up. Also, `mon_osd_cache_size` has been
increased from 10 OSDMaps to 500, which translates to an
additional 500 MB to 1 GB of RAM for large clusters, and much less
for small clusters.
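As a back-of-envelope check on the numbers above (the per-OSDMap size used here is an assumption; real map sizes vary with cluster size):

```python
# Rough estimate of the extra mon RAM from the larger OSDMap cache.
osdmap_size_mb = 2                       # assumed average OSDMap size in MB
old_cache_maps, new_cache_maps = 10, 500
extra_mb = (new_cache_maps - old_cache_maps) * osdmap_size_mb
print(extra_mb)  # 980, i.e. roughly the "500 MB to 1 GB" quoted above
```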
* Ceph v13.2.2 includes an incorrect backport, which may cause the MDS
to enter the 'damaged' state when upgrading a Ceph cluster from a
previous version. The bug is fixed in v13.2.3. If you are already
running v13.2.2, upgrading to v13.2.3 does not require special action.
* The bluestore_cache_* options are no longer needed. They have been
replaced by osd_memory_target, which defaults to 4 GB. BlueStore will
expand and contract its cache to attempt to stay within this limit.
Users upgrading should note that this default is higher than the
previous bluestore_cache_size default of 1 GB, so OSDs using BlueStore
will use more memory by default.
For more details, see the `BlueStore docs <http://docs.ceph.com/docs/mimic/rados/configuration/bluestore-config-ref/#automatic-cache-sizing>`_.
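The expand/contract behaviour is roughly a feedback loop: grow the cache when the process is under osd_memory_target, shrink it when over. A minimal sketch of that loop follows; this is not BlueStore's actual code, and the step factor and cache floor are assumptions:

```python
def next_cache_size(cache_bytes, mapped_bytes, target=4 * 2**30,
                    floor=128 * 2**20, step=0.1):
    """One iteration of an adjust-toward-target loop (illustrative only).

    cache_bytes:  current cache allocation
    mapped_bytes: observed process memory usage
    """
    error = target - mapped_bytes       # positive: headroom; negative: over budget
    proposed = cache_bytes + int(error * step)
    return max(floor, proposed)         # never shrink below the floor
```

Calling this periodically nudges the cache toward whatever size keeps total process memory near the target, which is why a single osd_memory_target knob can replace the several fixed bluestore_cache_* sizes.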
* This version contains an upgrade bug (http://tracker.ceph.com/issues/36686)
in which upgrading during recovery/backfill can cause OSDs to fail. The
bug can be worked around either by restarting all OSDs after the upgrade
or by upgrading while all PGs are in the "active+clean" state. If you
have already successfully upgraded to 13.2.2, this issue should not
affect you. Going forward, we are working on a clean upgrade path for
this feature.
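A pre-upgrade check for the "active+clean" condition can be scripted against the JSON output of `ceph pg stat -f json`. The field names below follow one version of that output and should be treated as an assumption to verify against your own cluster:

```python
import json

def all_active_clean(pg_stat_json):
    """Return True iff every PG is in the active+clean state.

    Assumes a `num_pg_by_state` list of {"name": ..., "count": ...}
    entries, as emitted by some versions of `ceph pg stat -f json`.
    """
    stats = json.loads(pg_stat_json)
    for state in stats.get("num_pg_by_state", []):
        if state["name"] != "active+clean" and state["count"] > 0:
            return False
    return True

# Example inputs (hand-written, not captured from a real cluster):
healthy = json.dumps({"num_pg_by_state": [
    {"name": "active+clean", "count": 128}]})
degraded = json.dumps({"num_pg_by_state": [
    {"name": "active+clean", "count": 120},
    {"name": "active+recovering", "count": 8}]})
```

Running such a check before starting the upgrade, and again between each batch of OSD restarts, avoids hitting the recovery/backfill window in which the bug can trigger.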
For more details please refer to the release blog at
https://ceph.com/releases/13-2-4-mimic-released/
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.4.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: b10be4d44915a4d78a8e06aa31919e74927b142e
--
Abhishek Lekshmanan
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)
_______________________________________________
Ceph-announce mailing list
Ceph-announce-idqoXFIVOFJgJs9I8MT0rw@public.gmane.org
http://lists.ceph.com/listinfo.cgi/ceph-announce-ceph.com
* Re: v13.2.4 Mimic released
@ 2019-01-07 15:10 Alexandre DERUMIER
From: Alexandre DERUMIER @ 2019-01-07 15:10 UTC (permalink / raw)
To: Abhishek Lekshmanan
Cc: ceph-users, ceph-maintainers, ceph-devel, ceph-announce
Hi,
>>* Ceph v13.2.2 includes an incorrect backport, which may cause the MDS
>>to enter the 'damaged' state when upgrading a Ceph cluster from a
>>previous version. The bug is fixed in v13.2.3. If you are already
>>running v13.2.2, upgrading to v13.2.3 does not require special action.
Is any special action needed when upgrading from 13.2.1?
----- Original Message -----
From: "Abhishek Lekshmanan" <abhishek@suse.com>
To: "ceph-announce" <ceph-announce@lists.ceph.com>, "ceph-users" <ceph-users@lists.ceph.com>, ceph-maintainers@lists.ceph.com, "ceph-devel" <ceph-devel@vger.kernel.org>
Sent: Monday, January 7, 2019 11:37:05
Subject: v13.2.4 Mimic released
* Re: [Ceph-maintainers] v13.2.4 Mimic released
@ 2019-01-08 17:42 Patrick Donnelly
From: Patrick Donnelly @ 2019-01-08 17:42 UTC (permalink / raw)
To: Alexandre DERUMIER
Cc: ceph-maintainers, ceph-users, ceph-devel, ceph-announce
On Mon, Jan 7, 2019 at 7:10 AM Alexandre DERUMIER wrote:
>
> Hi,
>
> >>* Ceph v13.2.2 includes an incorrect backport, which may cause the MDS
> >>to enter the 'damaged' state when upgrading a Ceph cluster from a
> >>previous version. The bug is fixed in v13.2.3. If you are already
> >>running v13.2.2, upgrading to v13.2.3 does not require special action.
>
> Is any special action needed when upgrading from 13.2.1?
No special actions for CephFS are required for the upgrade.
--
Patrick Donnelly
* Re: [Ceph-maintainers] v13.2.4 Mimic released
@ 2019-01-08 17:49 Neha Ojha
From: Neha Ojha @ 2019-01-08 17:49 UTC (permalink / raw)
To: Patrick Donnelly
Cc: ceph-announce, ceph-users, ceph-maintainers, ceph-devel
When upgrading from 13.2.1 to 13.2.4, you should be careful about
http://tracker.ceph.com/issues/36686. It might be worth considering
the workaround mentioned here:
https://github.com/ceph/ceph/blob/master/doc/releases/mimic.rst#v1322-mimic.
Thanks,
Neha
On Tue, Jan 8, 2019 at 9:42 AM Patrick Donnelly wrote:
>
> On Mon, Jan 7, 2019 at 7:10 AM Alexandre DERUMIER wrote:
> >
> > Hi,
> >
> > >>* Ceph v13.2.2 includes an incorrect backport, which may cause the MDS
> > >>to enter the 'damaged' state when upgrading a Ceph cluster from a
> > >>previous version. The bug is fixed in v13.2.3. If you are already
> > >>running v13.2.2, upgrading to v13.2.3 does not require special action.
> >
> > Is any special action needed when upgrading from 13.2.1?
>
> No special actions for CephFS are required for the upgrade.
>
> --
> Patrick Donnelly