* v14.2.5 Nautilus released
@ 2019-12-10  9:45 Abhishek Lekshmanan
From: Abhishek Lekshmanan @ 2019-12-10  9:45 UTC
  To: ceph-announce-a8pt6IJUokc, ceph-users-a8pt6IJUokc,
	dev-a8pt6IJUokc, ceph-devel-u79uwXL29TY76Z2rM5mHXA

This is the fifth release of the Ceph Nautilus release series. Among the many
notable changes, this release fixes a critical BlueStore bug that was introduced
in 14.2.3. All Nautilus users are advised to upgrade to this release.

For the complete changelog entry, please visit the release blog at
https://ceph.io/releases/v14-2-5-nautilus-released/

Notable Changes
---------------

Critical fix:

* This release fixes a `critical BlueStore bug <https://tracker.ceph.com/issues/42223>`_
  introduced in 14.2.3 (and also present in 14.2.4) that can lead to data
  corruption when a separate "WAL" device is used.

New health warnings:

* Ceph will now issue health warnings if daemons have recently crashed. Ceph
  has been collecting crash reports since the initial Nautilus release, but the
  health alerts are new. To view new crashes (or all crashes, if you've just
  upgraded)::

    ceph crash ls-new

  To acknowledge a particular crash (or all crashes) and silence the health warning::

    ceph crash archive <crash-id>
    ceph crash archive-all
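
  Before archiving, a particular crash can be examined in detail with::

    ceph crash info <crash-id>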

* Ceph will now issue a health warning if a RADOS pool has a ``pg_num``
  value that is not a power of two. This can be fixed by adjusting
  the pool to a nearby power of two::

    ceph osd pool set <pool-name> pg_num <new-pg-num>

  Alternatively, the warning can be silenced with::

    ceph config set global mon_warn_on_pool_pg_num_not_power_of_two false
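
  As a concrete example of the adjustment: a pool reporting a ``pg_num``
  of 100 could be set to the nearby power of two 128 (the pool name below
  is hypothetical)::

    ceph osd pool get mypool pg_num   # e.g. prints: pg_num: 100
    ceph osd pool set mypool pg_num 128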

* Ceph will issue a health warning if a RADOS pool's ``size`` is set to 1
  or, in other words, if the pool is configured with no redundancy. Ceph will
  stop issuing the warning if the pool size is set to the minimum
  recommended value::

    ceph osd pool set <pool-name> size <num-replicas>

  The warning can be silenced with::

    ceph config set global mon_warn_on_pool_no_redundancy false
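
  A quick way to spot affected pools and restore redundancy on one of
  them (the pool name is hypothetical; three replicas is the usual
  default)::

    ceph osd pool ls detail | grep ' size 1 '
    ceph osd pool set mypool size 3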

* A health warning is now generated if the average OSD heartbeat ping
  time exceeds a configurable threshold for any of the computed
  intervals. The OSD computes average, minimum, and maximum values over
  1-minute, 5-minute, and 15-minute intervals. The new configuration
  option `mon_warn_on_slow_ping_ratio` specifies the threshold as a
  percentage of `osd_heartbeat_grace`; a value of zero disables the
  warning. Another new configuration option, `mon_warn_on_slow_ping_time`,
  specified in milliseconds, overrides the computed value and causes a
  warning whenever OSD heartbeat pings take longer than the specified
  amount. A new admin command, `ceph daemon mgr.# dump_osd_network
  [threshold]`, lists all connections whose average ping time over any
  of the three intervals exceeds the specified threshold (or the value
  derived from the config options). Another new admin command, `ceph
  daemon osd.# dump_osd_network [threshold]`, does the same but includes
  only heartbeats initiated by the specified OSD.
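
  For example, to list connections whose average ping time over any of
  the three intervals exceeds one second (the OSD id and the 1000 ms
  threshold below are illustrative)::

    ceph daemon osd.0 dump_osd_network 1000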

Changes in the telemetry module:

* The telemetry module now has a 'device' channel, enabled by default, that
  will report anonymized hard disk and SSD health metrics to telemetry.ceph.com
  in order to build and improve device failure prediction algorithms. Because
  the content of telemetry reports has changed, you will need to opt in
  again with::

    ceph telemetry on

  You can first view exactly what information will be reported with::

    ceph telemetry show
    ceph telemetry show device   # specifically show the device channel

  If you are not comfortable sharing device metrics, you can disable that
  channel first before opting in again::

    ceph config set mgr mgr/telemetry/channel_device false
    ceph telemetry on
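
  If you change your mind later, the channel can be switched back on the
  same way (a sketch of the reverse operation)::

    ceph config set mgr mgr/telemetry/channel_device true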

* The telemetry module now reports more information about CephFS file systems,
  including:

    - how many MDS daemons (in total and per file system)
    - which features are (or have been) enabled
    - how many data pools
    - approximate file system age (year + month of creation)
    - how many files, bytes, and snapshots
    - how much metadata is being cached

  We have also added:

    - which Ceph release the monitors are running
    - whether msgr v1 or v2 addresses are used for the monitors
    - whether IPv4 or IPv6 addresses are used for the monitors
    - whether RADOS cache tiering is enabled (and which mode)
    - whether pools are replicated or erasure coded, and
      which erasure code profile plugin and parameters are in use
    - how many hosts are in the cluster, and how many hosts have each type of daemon
    - whether a separate OSD cluster network is being used
    - how many RBD pools and images are in the cluster, and how many pools have RBD mirroring enabled
    - how many RGW daemons, zones, and zonegroups are present; which RGW frontends are in use
    - aggregate stats about the CRUSH map, like which algorithms are used, how
      big buckets are, how many rules are defined, and what tunables are in
      use

  If you already had telemetry enabled, you will need to opt in again with::

    ceph telemetry on

  You can first view exactly what information will be reported with::

    ceph telemetry show        # see everything
    ceph telemetry show basic  # basic cluster info (including all of the new info)

OSD:

* A new OSD daemon command, 'dump_recovery_reservations', reveals the
  recovery locks currently held (in_progress) and those waiting in
  priority queues.

* Another new OSD daemon command, 'dump_scrub_reservations', reveals the
  scrub reservations that are held for local (primary) and remote (replica) PGs.
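
  Both are admin-socket commands addressed to a specific daemon; for
  example (the OSD id below is illustrative)::

    ceph daemon osd.0 dump_recovery_reservations
    ceph daemon osd.0 dump_scrub_reservations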

RGW:

* RGW now supports the S3 Object Lock set of APIs, allowing a WORM model
  for storing objects. Six new APIs have been added: put/get bucket
  object lock, put/get object retention, and put/get object legal hold.
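
  As a sketch of exercising these APIs against an RGW endpoint with the
  AWS CLI (the endpoint URL and bucket name are placeholders; object
  lock must be enabled when the bucket is created)::

    aws --endpoint-url http://rgw.example.com:8000 s3api create-bucket \
        --bucket locked-bucket --object-lock-enabled-for-bucket
    aws --endpoint-url http://rgw.example.com:8000 s3api put-object-lock-configuration \
        --bucket locked-bucket --object-lock-configuration \
        '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}}'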

* RGW now supports the S3 List Objects V2 API.
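
  For example (endpoint and bucket hypothetical)::

    aws --endpoint-url http://rgw.example.com:8000 s3api list-objects-v2 --bucket mybucket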

Getting Ceph
------------

* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-14.2.5.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: ad5bd132e1492173c85fda2cc863152730b16a92
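
To fetch and verify the exact release from Git (assuming the usual
``vX.Y.Z`` tag naming convention, i.e. ``v14.2.5`` here)::

    git clone git://github.com/ceph/ceph.git && cd ceph
    git checkout v14.2.5
    git rev-parse HEAD   # should print the release sha1 above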

-- 
Abhishek Lekshmanan
SUSE Software Solutions Germany GmbH

* Re: v14.2.5 Nautilus released
@ 2019-12-10 13:52 Ilya Dryomov
From: Ilya Dryomov @ 2019-12-10 13:52 UTC
  To: Abhishek Lekshmanan
  Cc: ceph-announce-a8pt6IJUokc, ceph-users, dev-a8pt6IJUokc, Ceph Development

On Tue, Dec 10, 2019 at 10:45 AM Abhishek Lekshmanan <abhishek-IBi9RG/b67k@public.gmane.org> wrote:
> [...]
>
>   If you are not comfortable sharing device metrics, you can disable that
>   channel first before re-opting-in:
>
>     ceph config set mgr mgr/telemetry/channel_crash false

This should be channel_device, right?

Thanks,

                Ilya

* Re: v14.2.5 Nautilus released
@ 2019-12-10 14:32 Sage Weil
From: Sage Weil @ 2019-12-10 14:32 UTC
  To: Ilya Dryomov
  Cc: ceph-announce-a8pt6IJUokc, ceph-users, dev-a8pt6IJUokc, Ceph Development

> >   If you are not comfortable sharing device metrics, you can disable that
> >   channel first before re-opting-in:
> >
> >     ceph config set mgr mgr/telemetry/channel_crash false
> 
> This should be channel_device, right?

Yep!

	https://github.com/ceph/ceph/pull/32148

Thanks,
sage

* Re: v14.2.5 Nautilus released
@ 2019-12-12 14:05 Dan van der Ster
From: Dan van der Ster @ 2019-12-12 14:05 UTC
  To: Abhishek Lekshmanan
  Cc: ceph-announce-a8pt6IJUokc, ceph-users, dev-a8pt6IJUokc,
	ceph-devel-u79uwXL29TY76Z2rM5mHXA

Thanks for this!

We're having trouble installing the ceph-debuginfo rpm, which shows a
negative file size (apparently because it now exceeds 2GB):
https://pastebin.com/5bnNCGHh

Does this change need to be applied to the release builds too?
https://tracker.ceph.com/issues/39387

-- Dan

On Tue, Dec 10, 2019 at 10:45 AM Abhishek Lekshmanan <abhishek-IBi9RG/b67k@public.gmane.org> wrote:
> [...]

* Re: v14.2.5 Nautilus released
@ 2019-12-12 14:24 Alfredo Deza
From: Alfredo Deza @ 2019-12-12 14:24 UTC
  To: Dan van der Ster
  Cc: ceph-announce-a8pt6IJUokc, ceph-users, dev-a8pt6IJUokc,
	ceph-devel-u79uwXL29TY76Z2rM5mHXA

On Thu, Dec 12, 2019 at 9:06 AM Dan van der Ster <dan-EOCVfBHj35C+XT7JhA+gdA@public.gmane.org> wrote:
>
> Thanks for this!
>
> We're having trouble installing the ceph-debuginfo rpm, which shows a
> negative file size (apparently because it now exceeds 2GB):
> https://pastebin.com/5bnNCGHh
>
> Does this change need to be applied to the release builds too?
> https://tracker.ceph.com/issues/39387

Good catch! I just checked and yes, we need to use the --no-database flag
