* v15.2.4 Octopus released
From: David Galloway @ 2020-06-30 20:37 UTC
To: ceph-announce, ceph-users, dev, ceph-devel, ceph-maintainers
We're happy to announce the fourth bugfix release in the Octopus series.
In addition to a security fix in RGW, this release brings a range of fixes
across all components. We recommend that all Octopus users upgrade to this
release. For detailed release notes with links and a changelog, please
refer to the official blog entry at https://ceph.io/releases/v15-2-4-octopus-released
Notable Changes
---------------
* CVE-2020-10753: rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
(William Bowling, Adam Mohammed, Casey Bodley)
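For context, the flaw allowed header injection: an ExposeHeader value
containing a newline was previously reflected unsanitized into CORS
response headers. A minimal reproduction sketch using the aws CLI
(endpoint, bucket name, and header value are all hypothetical)::
  # Hypothetical sketch: store an ExposeHeader value with an embedded
  # newline; with this fix, RGW sanitizes it instead of echoing it into
  # the Access-Control-Expose-Headers response.
  aws --endpoint-url http://rgw.example.com s3api put-bucket-cors \
      --bucket mybucket \
      --cors-configuration '{"CORSRules": [{"AllowedMethods": ["GET"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": ["x-amz-meta-foo\nX-Injected: value"]}]}'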
* Cephadm: There were a lot of small usability improvements and bug fixes:
* Grafana when deployed by Cephadm now binds to all network interfaces.
* `cephadm check-host` now prints all detected problems at once.
* Cephadm now calls `ceph dashboard set-grafana-api-ssl-verify false`
when generating an SSL certificate for Grafana.
* The Alertmanager is now correctly pointed to the Ceph Dashboard.
* `cephadm adopt` now supports adopting an Alertmanager.
* `ceph orch ps` now supports filtering by service name.
* `ceph orch host ls` now marks hosts as offline if they are not
accessible.
* Cephadm can now deploy NFS Ganesha services. For example, to deploy NFS with
a service id of mynfs that will use the RADOS pool nfs-ganesha and namespace
nfs-ns::
ceph orch apply nfs mynfs nfs-ganesha nfs-ns
* Cephadm: `ceph orch ls --export` now returns all service specifications in
a YAML representation that is consumable by `ceph orch apply` (see the
round-trip sketch below). In addition, the commands `orch ps` and `orch ls`
now support `--format yaml` and `--format json-pretty`.
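A minimal round-trip sketch (the file name is arbitrary)::
  # Export every current service specification, then re-apply them unchanged.
  ceph orch ls --export > service_specs.yaml
  ceph orch apply -i service_specs.yaml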
* Cephadm: `ceph orch apply osd` supports a `--preview` flag that prints a preview of
the OSD specification before deploying OSDs. This makes it possible to
verify that the specification is correct before applying it, as shown below.
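For example, a sketch assuming an OSD specification in a local file named
osd_spec.yaml::
  # Print the OSDs that would be created, without deploying anything.
  ceph orch apply osd -i osd_spec.yaml --preview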
* RGW: The `radosgw-admin` sub-commands dealing with orphans --
`radosgw-admin orphans find`, `radosgw-admin orphans finish`, and
`radosgw-admin orphans list-jobs` -- have been deprecated. They have
not been actively maintained and they store intermediate results on
the cluster, which could fill a nearly-full cluster. They have been
replaced by `rgw-orphan-list`, a tool that is currently considered
experimental (see the example below).
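A hedged usage sketch, assuming the default RGW data pool name::
  # Scan the data pool and write the list of candidate orphaned RADOS
  # objects to a local file rather than storing results on the cluster.
  rgw-orphan-list default.rgw.buckets.data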
* RBD: The name of the rbd pool object that is used to store the
rbd trash purge schedule has changed from "rbd_trash_trash_purge_schedule"
to "rbd_trash_purge_schedule". Users who have already started using the
`rbd trash purge schedule` functionality and have per-pool or per-namespace
schedules configured should copy the "rbd_trash_trash_purge_schedule"
object to "rbd_trash_purge_schedule" before the upgrade and remove
"rbd_trash_trash_purge_schedule" using the following commands in every RBD
pool and namespace where a trash purge schedule was previously
configured::
rados -p <pool-name> [-N namespace] cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule
rados -p <pool-name> [-N namespace] rm rbd_trash_trash_purge_schedule
or use any other convenient way to restore the schedule after the
upgrade (a loop-based sketch follows).
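For clusters with many pools, a minimal loop sketch over the default
namespace of every pool (pools without the old object simply fail the
`cp`, which the redirection hides; non-default namespaces still need to
be handled per pool)::
  for pool in $(ceph osd pool ls); do
      # Copy the old schedule object to the new name, then remove the old one.
      rados -p "$pool" cp rbd_trash_trash_purge_schedule rbd_trash_purge_schedule 2>/dev/null \
          && rados -p "$pool" rm rbd_trash_trash_purge_schedule
  done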
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c
--
David Galloway
Systems Administrator, RDU
Ceph Engineering
IRC: dgalloway
* Re: v15.2.4 Octopus released
From: Sasha Litvak @ 2020-07-01 0:57 UTC
To: David Galloway
Cc: ceph-announce, ceph-users, dev, ceph-devel, ceph-maintainers
David,
Download link points to 14.2.10 tarball.
On Tue, Jun 30, 2020, 3:38 PM David Galloway <dgallowa@redhat.com> wrote:
> [...]
> Getting Ceph
> ------------
> * Git at git://github.com/ceph/ceph.git
> * Tarball at http://download.ceph.com/tarballs/ceph-14.2.10.tar.gz
> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
> * Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c
> [...]
* Re: [ceph-users] Re: v15.2.4 Octopus released
2020-07-01 0:57 ` Sasha Litvak
@ 2020-07-01 1:03 ` Dan Mick
2020-07-01 1:06 ` Neha Ojha
0 siblings, 1 reply; 6+ messages in thread
From: Dan Mick @ 2020-07-01 1:03 UTC
To: Sasha Litvak, David Galloway
Cc: ceph-announce, ceph-users, dev, ceph-devel, ceph-maintainers
True. That said, the blog post points to
http://download.ceph.com/tarballs/ where all the tarballs, including
15.2.4, live.
On 6/30/2020 5:57 PM, Sasha Litvak wrote:
> David,
>
> Download link points to 14.2.10 tarball.
> [...]
* Re: [ceph-users] Re: v15.2.4 Octopus released
From: Neha Ojha @ 2020-07-01 1:06 UTC
To: Dan Mick
Cc: Sasha Litvak, David Galloway, ceph-announce, ceph-users, dev,
ceph-devel, ceph-maintainers
On Tue, Jun 30, 2020 at 6:04 PM Dan Mick <dmick@redhat.com> wrote:
>
> True. That said, the blog post points to
> http://download.ceph.com/tarballs/ where all the tarballs, including
> 15.2.4, live.
> [...]
> >> Getting Ceph
> >> ------------
> >> * Git at git://github.com/ceph/ceph.git
Correction:
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.4.tar.gz
> >> * For packages, see http://docs.ceph.com/docs/master/install/get-packages/
> >> * Release git sha1: 7447c15c6ff58d7fce91843b705a268a1917325c
* v14.2.12 Nautilus released
From: David Galloway @ 2020-10-20 21:16 UTC
To: ceph-announce, ceph-users, dev, ceph-devel, ceph-maintainers
This is the 12th backport release in the Nautilus series. This release
brings a number of bugfixes across all major components of Ceph. We
recommend that all Nautilus users upgrade to this release. For detailed
release notes with links and a changelog, please refer to the official
blog entry at https://ceph.io/releases/v14-2-12-nautilus-released
Notable Changes
---------------
* The `ceph df` command now lists the number of pgs in each pool.
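Illustrative output shape only (pool name and numbers are hypothetical)::
  $ ceph df
  ...
  POOLS:
      POOL  ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
      rbd    1   32  1.2 GiB      330  3.6 GiB   0.12    945 GiB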
* Monitors now have a config option `mon_osd_warn_num_repaired`, 10 by
default. If any OSD has repaired more than this many I/O errors in
stored data, an `OSD_TOO_MANY_REPAIRS` health warning is generated. To
allow clearing of the warning, a new command `ceph tell osd.#
clear_shards_repaired [count]` has been added. By default it sets the
repair count to 0. If you want to be warned again when additional
repairs are performed, you can pass the command a count equal to the
value of `mon_osd_warn_num_repaired` (see the sketch below). This
command will be replaced in future releases by the health mute/unmute
feature.
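A usage sketch (osd.0 and the threshold value are examples)::
  # Clear the OSD_TOO_MANY_REPAIRS warning by resetting the repair count to 0.
  ceph tell osd.0 clear_shards_repaired
  # Or set the count to the warning threshold, so that any further repair
  # triggers the warning again.
  ceph tell osd.0 clear_shards_repaired 10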
* It is now possible to specify the initial monitor to contact for Ceph
tools and daemons using the `mon_host_override` config option or
`--mon-host-override <ip>` command-line switch. This generally should
only be used for debugging and only affects initial communication with
Ceph’s monitor cluster.
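For example (the address is hypothetical)::
  # Contact a specific monitor first while debugging initial connectivity.
  ceph --mon-host-override 192.168.1.10:6789 status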
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.12.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: 2f3caa3b8b3d5c5f2719a1e9d8e7deea5ae1a5c6
* v15.2.8 Octopus released
From: David Galloway @ 2020-12-17 2:53 UTC
To: ceph-announce, ceph-users, dev, ceph-devel, ceph-maintainers
We're happy to announce the 8th backport release in the Octopus series.
This release fixes a security flaw in CephFS and includes a number of bug
fixes. We recommend that all users update to this release. For detailed
release notes with links and a changelog, please refer to the official
blog entry at https://ceph.io/releases/v15-2-8-octopus-released
Notable Changes
---------------
* CVE-2020-27781: OpenStack Manila use of the ceph_volume_client.py library
allowed tenant access to any Ceph credential's secret. (Kotresh Hiremath
Ravishankar, Ramana Raja)
* ceph-volume: The `lvm batch` subcommand received a major rewrite. This
closes a number of bugs and improves usability in terms of size
specification and calculation, as well as idempotency behaviour and the
disk replacement process (see the dry-run sketch below). Please refer to
https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ for more detailed
information.
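A dry-run sketch (the device paths are hypothetical)::
  # Report the OSD layout `lvm batch` would create, without touching the disks.
  ceph-volume lvm batch --report /dev/sdb /dev/sdc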
* MON: The cluster log now logs health detail every
`mon_health_to_clog_interval`, whose default has been changed from 1hr to
10min. Logging of health detail is skipped if there has been no change in
the health summary since the last report.
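For example, to restore the previous hourly cadence::
  # Value is in seconds; the new default is 600 (10 minutes).
  ceph config set mon mon_health_to_clog_interval 3600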
* The `ceph df` command now lists the number of pgs in each pool.
* The `bluefs_preextend_wal_files` option has been removed.
* It is now possible to specify the initial monitor to contact for Ceph tools
and daemons using the `mon_host_override` config option or
`--mon-host-override <ip>` command-line switch. This generally should only
be used for debugging and only affects initial communication with Ceph's
monitor cluster.
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-15.2.8.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55